Can set-theoretic mereology serve as a foundation of mathematics?
ABSTRACT. Mereology, the study of the relation of part to whole, is often contrasted with set theory and its membership relation, the relation of element to set. Whereas set theory has found comparative success in the foundation of mathematics, since the time of Cantor, Zermelo and Hilbert, mereology is strangely absent. Can a set-theoretic mereology, based upon the set-theoretic inclusion relation ⊆ rather than the element-of relation ∈, serve as a foundation of mathematics? Can we faithfully interpret arbitrary mathematical structure in terms of the subset relation to the same extent that set theorists have done so in terms of the membership relation? At bottom, the question is: can we get by with merely ⊆ in place of ∈? Ultimately, I shall identify grounds supporting generally negative answers to these questions, concluding that set-theoretic mereology by itself cannot serve adequately as a foundational theory.
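For orientation, the asymmetry at issue can be displayed explicitly (a standard observation, recalled here only for the reader's convenience): inclusion is definable from membership by

\[
x \subseteq y \;\Longleftrightarrow\; \forall z\,(z \in x \rightarrow z \in y),
\]

whereas, as the talk will argue, membership cannot in general be recovered from inclusion alone, which is the formal heart of the negative answers announced above.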
Previous ERC panel member Atocha Aliseda will present her experiences from serving on the panel and on this basis provide advice to future applicants.
Grant recipients Tarja Knuuttila and Barbara Osimani will participate in the session and will answer questions from the audience on their experiences as applicants and recipients.
The study of sets of real numbers and their structural properties is one of the central topics of contemporary set theory and the focus of the set-theoretic disciplines of descriptive set theory and set theory of the reals. The Baire space consists of all functions from the set of natural numbers to itself. Since this space is Borel-isomorphic to the real line and has a very accessible structure, it is one of the main tools of descriptive set theory. Because a great variety of mathematical objects can be canonically represented as subsets of Baire space, techniques from descriptive set theory and set theory of the reals can be applied throughout mathematics. These applications are limited to the study of objects of cardinality at most the size of the continuum. Therefore, the question whether similar methods can be applied in the analysis of larger objects arose naturally in several areas of mathematics and led to a strongly increasing interest in the study of higher Baire spaces, i.e., higher analogues of Baire space which consist of all functions from a given uncountable cardinal to itself.
In recent years, an active and steadily growing community of researchers has initiated the development of higher analogues of descriptive set theory and set theory of the reals for higher Baire spaces, turning this area of research into one of the hot topics of set theory. Results in this area provide a rich and independent theory that differs significantly from the classical setting and gives new insight into the nature of higher cardinals. The proofs of these results combine concepts and techniques from different areas of set theory: combinatorics, forcing, large cardinals, inner models and classical descriptive set theory. Moreover, they also use methods from other branches of mathematical logic, like model theory and the study of strong logics. In the other direction, these results have been applied to problems in other fields of mathematical logic and pure mathematics, like the classification of non-separable topological spaces, the study of large cardinals and Shelah's classification theory in model theory.
These developments have been strongly supported by regular meetings of the research community. The community met first at the Amsterdam Set Theory Workshop in 2014, then at a satellite workshop to the German mathematics congress in Hamburg in 2015, at a workshop at the Hausdorff Center for Mathematics in Bonn in 2016, and at the KNAW Academy Colloquium in Amsterdam in 2018.
The increased significance of the study of higher Baire spaces is reflected in these meetings by both the strongly growing number of attendees and a steadily increasing percentage of participants from other fields of set theory. The Symposium on higher Baire spaces will provide the opportunity to reunite this community a year after the last meeting.
Chair:
Philipp Lücke (Mathematisches Institut, Universität Bonn, Germany)
Dorottya Sziráki (Alfréd Rényi Institute of Mathematics, Hungarian Academy of Sciences, Hungary)
The Open Dihypergraph Dichotomy for Definable Subsets of Generalized Baire Spaces
ABSTRACT. The open graph dichotomy for a given set $X$ of reals [1] is a generalization of the perfect set property for $X$ which can also be viewed as a definable version of the Open Coloring Axiom restricted to $X$. Feng [1] showed that it is consistent relative to the existence of an inaccessible cardinal that the open graph dichotomy holds for all sets $X$ of reals that are definable from a countable sequence of ordinals. In [2], higher dimensional versions of the open graph dichotomy are introduced and several well-known dichotomy theorems for the second level of the Borel hierarchy are obtained as special cases of the $\omega$-dimensional version.
We study the uncountable analogues of the open graph dichotomy and its higher dimensional versions for the generalized Baire space ${}^\kappa\kappa$. Given a subset $X$ of ${}^\kappa\kappa$ and a set $D$ of size at least 2, we let $OGD(\kappa,D,X)$ denote the following statement:
if $H$ is a $D$-dimensional box-open dihypergraph on $X$ then either $H$ has a coloring with $\kappa$ many colors, or there exists a continuous homomorphism from a certain specific "large" $D$-dimensional dihypergraph to $H$. (When $D=2$, the existence of such a continuous homomorphism is equivalent to the existence of a $\kappa$-perfect subset $Y$ of $X$ such that the restriction of $H$ to $Y$ is a complete graph.)
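Schematically, and writing $\mathbb{H}_D$ for the specific "large" $D$-dimensional dihypergraph alluded to above (this shorthand is ours, introduced only for readability), the principle reads:

\[
OGD(\kappa,D,X):\quad \text{for every box-open } D\text{-dimensional dihypergraph } H \text{ on } X,\ \text{either } H \text{ has a coloring with } \kappa \text{ many colors, or there is a continuous homomorphism from } \mathbb{H}_D \text{ to } H.
\]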
We extend Feng's above mentioned theorem to the generalized Baire space ${}^\kappa\kappa$ and also obtain higher dimensional versions of this result. Namely, we show that for any infinite cardinal $\kappa$ with $\kappa^{<\kappa} = \kappa$, the following statements are consistent relative to (and are therefore equiconsistent with) the existence of an inaccessible cardinal above $\kappa$.
1. $OGD(\kappa,D,X)$ holds for all sets $D$ of size $2\leq |D|<\kappa$ and all subsets $X$ of ${}^\kappa\kappa$ which are definable from a $\kappa$-sequence of ordinals.
2. If a subset $X$ of ${}^\kappa\kappa$ is definable from a $\kappa$-sequence of ordinals, then $OGD(\kappa,\kappa,X)$ holds restricted to a certain class of $\kappa$-dimensional box-open dihypergraphs $H$ on $X$. This class includes those box-open dihypergraphs $H$ on $X$ which are definable by $\Sigma^1_1$-formulas over ${}^\kappa\kappa$.
[1] Q. Feng, Homogeneity for open partitions of pairs of reals, Trans. Amer. Math. Soc., 339(2):659--684, 1993.
[2] R. Carroy, B.D. Miller, and D.T. Soukup, The open dihypergraph dichotomy and the second level of the Borel hierarchy, submitted.
ABSTRACT. Let $\lambda$ be an uncountable regular cardinal. We say that the \emph{tree property} holds at $\lambda$, and write $\mathrm{TP}(\lambda)$, if every $\lambda$-tree has a cofinal branch (equivalently, there are no $\lambda$-Aronszajn trees). Recently, there has been extensive research which studies the extent of the tree property at multiple cardinals, with the ultimate goal of checking whether it is consistent that the tree property can hold at every regular cardinal greater than $\aleph_1$. The method of proof is typically based on lifting a certain elementary embedding with critical point $\lambda$ which witnesses the strength of $\lambda$ and on application of criteria for a forcing notion adding or not adding new cofinal branches to \emph{existing} $\lambda$-Aronszajn trees. With such criteria in place, an iteration can be defined which in effect kills all potential $\lambda$-Aronszajn trees, and thus forces the tree property.
In this talk we study a related question and search for criteria for forcing notions adding or not adding \emph{new} $\lambda$-Aronszajn trees (we will always consider forcings which preserve $\lambda$ as a regular cardinal to avoid trivialities). To give our study more specific focus, we will concentrate on cardinals of the form $\lambda = \kappa^{++}$ where $\omega \le \kappa$ is a regular cardinal. We aim to identify a class $\mathscr{I}$ (``$\mathscr{I}$'' for ``indestructible'') of forcing notions which is as inclusive as possible and which consists of forcing notions which -- at least consistently over some model $V^*$ -- do not add new $\kappa^{++}$-Aronszajn trees. Since typically $\mathrm{TP}(\kappa^{++})$ will hold in $V^*$, we will say that $\mathrm{TP}(\kappa^{++})$ is \emph{indestructible in $V^*$ with respect to forcing notions in $\mathscr{I}$}. We will consider ``Mitchell-style'' models $V^*$, which seem to provide the greatest level of generality, and we will include a description of a typical ``Mitchell-style'' model.
After discussing the extent of $\I$, reviewing the construction of the model and comparing to existing models, we will suggest applications for manipulating the generalized cardinal invariants using the presented method.
Easton's function and the tree property below $\aleph_\omega$
ABSTRACT. It is known that the usual large cardinals (we include the assumption of inaccessibility in the definition of a large cardinal) do not have any effect on the continuum function on small cardinals: in particular, if $\kappa$ is a large cardinal, all we can say in general is that $2^{\aleph_n}$, for any $n<\omega$, is smaller than $\kappa$. Things may be different if we consider a cardinal $\kappa$ which shares some properties with large cardinals (typically some sort of reflection) but is not inaccessible: it may even be smaller than $\aleph_\omega$, and therefore may have an effect on the continuum function below $\aleph_\omega$.

In this talk, we are interested in the \emph{tree property} at $\kappa$: we say that a regular cardinal $\kappa>\aleph_0$ has the tree property if there are no $\kappa$-Aronszajn trees ($\kappa$-trees without a cofinal branch). It is known that if $2^\kappa = \kappa^+$, then there are $\kappa^{++}$-Aronszajn trees. Thus the tree property at $\aleph_2$ implies the failure of CH. It is natural to ask whether the tree property at $\kappa^{++}$ puts more restrictions on the continuum function apart from requiring $2^\kappa > \kappa^+$. We discuss this problem with the focus on the cardinals below $\aleph_\omega$ and show that the tree property at every $\aleph_n$, $2 \le n < \omega$, is compatible with any continuum function on the $\aleph_n$'s which complies with the restriction $2^{\aleph_n} > \aleph_{n+1}$, $n<\omega$. Our result provides a generalization of Easton's theorem for the context of compactness principles at successor cardinals.
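To spell out the step used above (a standard argument, recorded here only for convenience): by Specker's theorem, $\lambda^{<\lambda}=\lambda$ yields a (special) $\lambda^+$-Aronszajn tree, and $2^\kappa=\kappa^+$ gives $(\kappa^+)^{<\kappa^+}=\kappa^+$, so

\[
2^{\kappa}=\kappa^{+} \;\Longrightarrow\; \text{there is a } \kappa^{++}\text{-Aronszajn tree}, \qquad\text{hence}\qquad \text{the tree property at } \aleph_2 \;\Longrightarrow\; 2^{\aleph_0}\ge\aleph_2 .
\]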
In the second part of the talk we will discuss extensions of our result to other widely studied compactness principles, such as \emph{stationary reflection} and the failure of the \emph{approachability property}, and also mention connections to generalized cardinal invariants.
Glivenko’s theorem from 1929 says that if a propositional formula is provable in classical logic, then its double negation is provable in intuitionistic logic. Soon after, Gödel extended this to predicate logic, which requires the double negation shift. As is well known, with the Gödel-Gentzen negative translation in place of double negation one can even get by with minimal logic. Several related proof translations saw the light of day, such as Kolmogorov’s and Kuroda’s.
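For reference, the clauses of the Gödel-Gentzen negative translation $A \mapsto A^N$ can be recalled as follows (one of several equivalent standard formulations):

\[
\begin{aligned}
P^N &:= \neg\neg P \quad (P \text{ atomic}), & (\neg A)^N &:= \neg A^N,\\
(A \wedge B)^N &:= A^N \wedge B^N, & (A \vee B)^N &:= \neg(\neg A^N \wedge \neg B^N),\\
(A \to B)^N &:= A^N \to B^N, & (\forall x\,A)^N &:= \forall x\,A^N,\\
(\exists x\,A)^N &:= \neg\forall x\,\neg A^N, &&
\end{aligned}
\]

and Glivenko’s theorem itself states that $\vdash_{c} A$ implies $\vdash_{i} \neg\neg A$ for every propositional formula $A$.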
Glivenko’s theorem thus stood right at the beginning of a fundamental change of perspective: that classical logic can be embedded into intuitionistic or minimal logic, rather than the latter being a diluted version of the former. Together with the revision of Hilbert’s Programme ascribed to Kreisel and Feferman, this has led to the quest for the computational content of classical proofs, today culminating in agile areas such as proof analysis, dynamical algebra, program extraction from proofs and proof mining. The considerable success of these approaches suggests that classical mathematics will eventually prove much more constructive than is still widely thought today.
Important threads of current research include the following:
1. Exploring the limits of Barr’s theorem about geometric logic
2. Program extraction in abstract structures characterised by axioms
3. Constructive content of classical proofs with Zorn’s Lemma
4. The algorithmic meaning of programs extracted from proofs
ABSTRACT. For several years we have studied interrelations between logics by analysing translations between them. The first known ‘translations’ concerning classical logic, intuitionistic logic and modal logic were presented by Kolmogorov (1925), Glivenko (1929), Lewis and Langford (1932), Gödel (1933) and Gentzen (1933). In 1999, da Silva, D’Ottaviano and Sette proposed a very general definition for the concept of translation between logics, logics being characterized as pairs constituted by a set and a consequence operator, and translations between logics being defined as maps that preserve consequence relations. In 2001, Feitosa and D’Ottaviano introduced the concept of conservative translation, and in 2009 Carnielli, Coniglio and D’Ottaviano proposed the concept of contextual translation. In this paper, providing some brief historical background, we will discuss the historical relevance of the ‘translation’ from classical logic into intuitionistic logic introduced by Glivenko in 1929, and will show that his interpretation is a conservative and contextual translation.
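In the general setting recalled above, where a logic is a pair consisting of a set of formulas and a consequence operator, the key notions can be stated schematically as follows (our paraphrase of the definitions in the cited papers, with $\vdash_i$ denoting the consequence relation of the logic $L_i$): a map $t$ from the formulas of $L_1$ to those of $L_2$ is a translation when

\[
\Gamma \vdash_1 A \;\Longrightarrow\; t[\Gamma] \vdash_2 t(A),
\]

and it is a conservative translation when the implication is an equivalence, i.e. $\Gamma \vdash_1 A$ if and only if $t[\Gamma] \vdash_2 t(A)$.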
References
CARNIELLI, W. A., CONIGLIO M. E., D’OTTAVIANO, I. M. L. (2009) New dimensions on translations between logics. Logica Universalis, v. 3, p. 1-19.
da SILVA, J. J., D’OTTAVIANO, I. M. L., SETTE, A. M. (1999) Translations between logics. In: CAICEDO, X., MONTENEGRO, C.H. (Eds.) Models, algebras and proofs. New York: Marcel Dekker, p. 435-448. (Lecture Notes in Pure and Applied Mathematics, v. 203)
FEITOSA, H. A., D’OTTAVIANO, I. M. L. (2001) Conservative translations. Annals of Pure and Applied Logic. Amsterdam, v. 108, p. 205-227
GENTZEN, G. (1936) Die Widerspruchsfreiheit der reinen Zahlentheorie. Mathematische Annalen, v. 112, p. 493-565. Translation into English in Gentzen (1969, Szabo, M. E. (Ed.)).
GENTZEN, G. (1969) On the relation between intuitionist and classical arithmetic (1933). In: Szabo, M. E. (ed.) The Collected Papers of Gerhard Gentzen, p. 53-67. Amsterdam: North-Holland.
GLIVENKO, V. (1929) Sur quelques points de la logique de M. Brouwer. Académie Royale de Belgique, Bulletins de la Classe de Sciences, s. 5, v. 15, p. 183-188.
GÖDEL, K. (1986) On intuitionistic arithmetic and number theory (1933). In: FEFERMAN, S. et alii (Ed.) Collected works. Oxford: Oxford University Press, p. 287-295.
GÖDEL, K. (1986) An interpretation of the intuitionistic propositional calculus (1933). In: FEFERMAN, S. et alii (Ed.) Collected works. Oxford: Oxford University Press, p. 301-303.
KOLMOGOROV, A. N. (1977) On the principle of excluded middle (1925). In: HEIJENOORT, J. (Ed.) From Frege to Gödel: a source book in mathematical logic 1879-1931. Cambridge: Harvard University Press, p. 414-437.
LEWIS, C. I., LANGFORD, C. H. (1932) Symbolic Logic, New York (Reprinted in 1959).
A simple proof of Barr’s theorem for infinitary geometric logic
ABSTRACT. Geometric logic has gained considerable interest in recent years: contributions and application areas include structural proof theory, category theory, constructive mathematics, modal and non-classical logics, and automated deduction.
Geometric logic is readily defined by stating the structure of its axioms. A coherent implication (also known in the literature as a “geometric axiom”, a “geometric sentence”, a “coherent axiom”, a “basic geometric sequent”, or a “coherent formula”) is a first-order sentence that is the universal closure of an implication of formulas built up from atoms using conjunction, disjunction and existential quantification. The proper geometric theories are expressed in the language of infinitary logic and are defined in the same way as coherent theories except for allowing infinitary disjunctions in the antecedent and consequent.
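Concretely, up to logical equivalence a coherent implication can be written in the normal form

\[
\forall \bar x\,\Bigl(\, C(\bar x) \;\rightarrow\; \bigvee_{i \in I} \exists \bar y_i\, D_i(\bar x, \bar y_i) \,\Bigr),
\]

where $C$ and each $D_i$ are conjunctions of atomic formulas and $I$ is finite; in the proper geometric (infinitary) case the index set $I$ may be infinite.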
Gentzen’s systems of deduction, sequent calculus and natural deduction, have been considered an answer to Hilbert’s 24th problem in providing the basis for a general theory of proof methods in mathematics that overcomes the limitations of axiomatic systems. They provide a transparent analysis of the structure of proofs that works to perfection for pure logic. When such systems of deduction are augmented with axioms for mathematical theories, many of the strong properties are lost. However, these properties can be regained through a transformation of axioms into rules of inference of a suitable form. Coherent theories are very well placed in this programme; in fact, they can be translated into inference rules in a natural fashion: in the context of a sequent calculus such as G3c [4, 8], special coherent implications as axioms can be converted directly [2] to inference rules without affecting the admissibility of the structural rules. This is essential in the quest to apply the methods of structural proof theory to geometric logic.
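Writing a coherent implication in the normal form displayed above, with antecedent $P_1 \wedge \dots \wedge P_m$ and consequent $\exists \bar y_1 M_1 \vee \dots \vee \exists \bar y_n M_n$ (each $M_j$ a conjunction of atoms), the conversion just mentioned takes roughly the following shape (a schematic rendering of the rule scheme of [2]): the axiom becomes the left rule

\[
\frac{\;\overline{M_1}(\bar z_1/\bar y_1),\, \bar P,\, \Gamma \Rightarrow \Delta \qquad \cdots \qquad \overline{M_n}(\bar z_n/\bar y_n),\, \bar P,\, \Gamma \Rightarrow \Delta\;}{\bar P,\, \Gamma \Rightarrow \Delta}
\]

where $\bar P$ abbreviates $P_1, \dots, P_m$, $\overline{M_j}$ the multiset of conjuncts of $M_j$, and the variables $\bar z_j$ are fresh (eigenvariable condition).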
Coherent implications $I$ form sequents that give a Glivenko class [5, 3]. In this case, the result [2], known as the first-order Barr’s Theorem (the general form of Barr’s theorem [1, 9, 6] is higher-order and includes the axiom of choice), states that if each $I_i$, $0 \le i \le n$, is a coherent implication and the sequent $I_1, \ldots, I_n \Rightarrow I_0$ is classically provable, then it is intuitionistically provable. By these results, the proof-theoretic study of coherent theories gives a general twist to the problem of extracting the constructive content of mathematical proofs.
In this talk, proof analysis is extended to all such theories by augmenting an infinitary classical sequent calculus with a rule scheme for infinitary geometric implications. The calculus is designed in such a way as to have all the rules invertible and all the structural rules admissible.
An intuitionistic infinitary multisuccedent sequent calculus is also introduced and it is shown to enjoy the same structural properties as the classical calculus. Finally, it is shown that by bringing the classical and intuitionistic calculi close together, the infinitary Barr theorem becomes an immediate result.
References
[1] Barr, M. Toposes without points, J. Pure and Applied Algebra 5, 265–280, 1974.
[2] Negri, S. Contraction-free sequent calculi for geometric theories, with an application
to Barr’s theorem, Archive for Mathematical Logic 42, pp 389–401, 2003.
[3] Negri, S. Glivenko sequent classes in the light of structural proof theory, Archive for
Mathematical Logic 55, pp 461–473, 2016.
[4] Negri, S. and von Plato, J. Structural Proof Theory, Cambridge University Press,
2001.
[5] Orevkov, V. P. Glivenko's sequence classes, Logical and logico-mathematical calculi 1, Proc. Steklov Inst. of Mathematics 98, pp 147–173 (pp 131–154 in Russian original), 1968.
[6] Rathjen, M. Remarks on Barr’s Theorem: Proofs in Geometric Theories, In P. Schuster and D. Probst (eds.), Concepts of Proof in Mathematics, Philosophy, and Computer Science. De Gruyter, pp 347–374, 2016.
[7] Skolem, T. Selected Works in Logic, J. E. Fenstad (ed), Universitetsforlaget, Oslo, 1970.
[8] Troelstra, A.S. and Schwichtenberg, H. Basic proof theory (2nd edn.). Cambridge Univ. Press, 2001.
[9] Wraith, G., Intuitionistic algebra: some recent developments in topos theory, Proceedings of the International Congress of Mathematicians, Helsinki, pp 331–337, 1978.
ABSTRACT. A famous theorem of Barr's yields that geometric implications deduced in classical (infinitary) geometric theories also have intuitionistic proofs. Barr's theorem is of a category-theoretic (or topos-theoretic) nature. In the literature one finds mysterious comments about the involvement of the axiom of choice. In the talk I'd like to speak about the proof-theoretic side of Barr's theorem and aim to shed some light on the AC part.
Organizers: María Del Rosario Martínez Ordaz and Otávio Bueno
In their day-to-day practice, scientists make constant use of defective (false, imprecise, conflicting, incomplete, inconsistent, etc.) information. The philosophical explanations of the toleration of defective information in the sciences are extremely varied, making philosophers struggle to identify a single correct approach to this phenomenon. Given that, we adopt a pluralist perspective on this issue in order to achieve a broader understanding of the different roles that defective information plays (and could play) in the sciences.
This symposium is devoted to exploring the connections between scientific pluralism and the handling of inconsistent as well as other types of defective information in the sciences. The main objectives of this symposium are (a) to discuss the different ways in which defective information could be tolerated (or handled) in the different sciences (formal, empirical, social, health sciences, etc.) as well as (b) to analyze the different methodological tools that could be used to explain and handle such type of information.
The symposium is divided into two parts: the first tackles the issue of inconsistency and scientific pluralism. This part includes discussions of the possible connections between the different ways in which scientists tolerate contradictions in the sciences and particular kinds of scientific pluralism. This analysis is extremely interesting in itself, as the phenomenon of inconsistency toleration in the sciences has often been linked to the development of a plurality of formal approaches, but not necessarily to logical or scientific pluralism. In fact, scientific pluralism is independent of inconsistency toleration.
The second part of the symposium is concerned with a pluralistic view on contradictions and other defects. This part is devoted to exploring under which circumstances (if any) it is possible to use the same mechanisms for tolerating inconsistencies and for dealing with other types of defective information. This part includes reflections on the scope of different formal methodologies for handling defectiveness in the sciences as well as considerations on scientific communicative practices and their connections with the use of defective information, and reflections on the different epistemic commitments that scientists have towards defective information.
Inconsistency and belief revision in cases of approximative reduction and idealization
ABSTRACT. We must face the fact that in science, as well as in ordinary life (think of certain forms of self-deception, if there really exists such a phenomenon), there are or there seem to be inconsistencies, inconsistent beliefs or inconsistent commitments. The reasons for this situation can be multiple. We can find internal inconsistencies within a theory, although these inconsistencies may be merely temporary and may disappear with time as far as we are able to dissolve them in the face of new information or as a consequence of a decision. In fact, as Peter Vickers and other authors have pointed out, whether a theory contains inconsistencies or not may well depend on the way we construct or reconstruct the theory. We can find inconsistencies between a theory and the body of observational data, a very usual phenomenon that leads to the problem of how to react to the existence of anomalies. And finally, there may be inconsistencies that leap out when we try to relate a theory that is partially rejected, though is considered to be approximately correct at least in some of its parts, to a theory that is thought to be a more successful successor. The Kepler-Newton case, the Newton-Einstein relation, or the relation between classical and quantum equations, are good examples of this. It is the several ways of treating these different kinds of inconsistencies from a formal point of view, and specifically the latter kind, that I am interested in in the present contribution. It need not be that scientists, in these cases, in fact have inconsistent beliefs. It suffices that they have inconsistent commitments or assumptions that, for different reasons, they nevertheless want to maintain. In this contribution, I want to consider several proposals for dealing with these inconsistencies. In particular, I will consider AGM belief revision theory, as it has been used by Hans Rott and other authors as an adequate formal tool for cases of so-called approximative reduction containing idealization (for example, Rott applies AGM to the Kepler-Newton case). I will compare this version of belief revision theory, where the logical consequence relation is classical, with the paraconsistent belief revision theory proposed by Graham Priest. I will also compare these tools with others that make use of different alternative model-theoretic approaches. The consequences of these comparisons when applied to the analysis of historical scientific cases will provide us with interesting conclusions about the relation between consistency, rationality, and scientific progress, as well as about the different formal approaches to the reconstruction of scientific theories and the relations between them.
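As a reminder of the formal apparatus invoked here (standard AGM notation, not specific to the versions compared in the talk): revision of a belief set $K$ by a sentence $A$ is usually defined from contraction via the Levi identity,

\[
K \ast A \;=\; Cn\bigl((K \div \neg A) \cup \{A\}\bigr),
\]

where $\div$ is a contraction operation satisfying the AGM postulates and $Cn$ is the underlying consequence operation; part of the comparison announced above concerns what happens when this $Cn$ is no longer classical.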
Logic-based ontologies in the biomedical domain: From defects to explicit contradictions
ABSTRACT. The focus of this contribution will be on theories that exhibit certain kinds of defects, but that are not explicitly inconsistent. The term "theory" will be used in a broad sense to refer to any set $Cn_{\mathbf{L}}(\Gamma)$, where $\Gamma$ is a set of non-logical axioms and where $Cn_{\mathbf{L}}(\Gamma)$ refers to the deductive closure of $\Gamma$ under the logic $\mathbf{L}$. A theory $Cn_{\mathbf{L}}(\Gamma)$ will be called explicitly inconsistent if and only if it contains an explicit contradiction---a sentence of the form $\exists (A\wedge\neg A)$, where $\exists A$ denotes the existential closure of $A$.
Different kinds of defects will be distinguished and it will be shown under which conditions which defects will surface as explicit contradictions. I shall argue that, although there is no general methodology for "correcting" defective theories, extending defective theories in such a way that they become explicitly inconsistent can contribute in important ways to their adequate handling. A distinction will be made between two kinds of contexts where adequate handling of defective theories is needed (those in which one is forced to live with the defects and those in which one is trying to correct some of them). I shall argue that these contexts require different kinds of underlying logics, show that the adequate handling of defective theories benefits from a nonmonotonic approach, and discuss the prospects of the adaptive logics framework ([1]) for this.
I shall only look at a specific kind of theories in the health sciences, but argue that some of the conclusions may be generalized to other sciences and other domains. The kind of theories that I shall concentrate on fall under the category of logic-based ontologies. A key example of such an ontology for the health sciences is SNOMED CT (Systematized Nomenclature Of Medicine, Clinical Terms). SNOMED CT is currently considered to be the most comprehensive clinical healthcare ontology (more than 300,000 active concepts and millions of domain-specific relations between these concepts) and covers clinical findings, symptoms, diagnoses, procedures, body structures, organisms, substances, pharmaceuticals, devices, ... SNOMED CT is based on the description logic EL (a decidable, highly inexpressive fragment of first-order logic---no negation, no disjunction, and only very restricted use of the existential quantifier) and one of its main aims is to support the recording of data in the health sciences, and more specifically to provide the terminological backbone for electronic health records.
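To give a flavour of the formalism: EL concepts are built from concept names and the top concept using only conjunction and existential restriction, $C ::= \top \mid A \mid C \sqcap D \mid \exists r.C$, and an ontology is a set of subsumption axioms $C \sqsubseteq D$. A purely illustrative axiom in this style (invented for this description, not quoted from SNOMED CT) would be

\[
\mathit{Appendicitis} \;\sqsubseteq\; \mathit{InflammatoryDisorder} \;\sqcap\; \exists\,\mathit{findingSite}.\mathit{AppendixStructure},
\]

which records both a subsumption (IsA) and a domain-specific relation of the kind whose possible defects are discussed below.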
Because of the inexpressive nature of its underlying logic, SNOMED CT is not (and will never be) explicitly inconsistent. Still, SNOMED CT exhibits different kinds of defects. Based on the literature, but without aiming at an exhaustive taxonomy, I shall discuss five classes of defects---false subsumptions (IsA relations), false relations other than subsumptions (PartOf-relations, DueTo-relations, ...), structural inconsistencies, ambiguous concepts, and inconsistent concepts. I shall say that a subsumption or other relation is false when it is not compatible with accepted clinical practice and use "structural inconsistency" to refer to any violation of basic rules for the organisation of hierarchies (for instance that equivalent concepts should not have different parents). The term "ambiguous concept" will be used to refer to cases where a single concept of SNOMED CT is ambiguous between two or more meanings in clinical practice, and "inconsistent concept" to refer to concepts that, in view of some implicit assumption behind SNOMED CT (for instance, that at each level in the taxonomy siblings exclude one another), cannot have instances. I shall discuss recent proposals to strengthen the underlying logic of SNOMED CT (see, for instance, [2]), show that such a strengthening will result in a theory that is explicitly inconsistent, discuss the advantages that this may have for its adequate handling, and consider the question which kinds of logics are most suitable for this handling.
[1] Diderik Batens. A universal logic approach to adaptive logics. Logica Universalis, 1(1):221–242, 2007.
[2] Alan L. Rector and Sebastian Brandt. Why do it the hard way? The case for an expressive description logic for SNOMED.Journal of the American Medical Informatics Association, 15(6):744–751, 2008.
12:00
Moises Macias Bustos (National Autonomous University of Mexico, University of Massachusetts Amherst, Mexico)
Lewis, Stalnaker and the Problem of Assertion & Defective Information in the Sciences
ABSTRACT. Here I will argue that some of Stalnaker's rules for speech acceptable to rational communicators exclude too much discourse where informative assertions are in fact possible, specifically scientific discourse, given impossible objects, inconsistent theories and cases of informational, metaphysical or semantic indeterminacy, which arguably are required in some of those discourses. The problem of giving an account of meaning in context is one which Lewis (1980) and Stalnaker (1978) are very much concerned to address. In what follows I give a brief summary of their positions and thence I proceed to discuss some philosophical worries in the context of understanding the linguistic phenomenon of assertion when it comes to dealing with defective (inconsistent or indeterminate) scientific information.
The main objective is to contrast Stalnaker (1978, 2003) and Lewis (1980, 1979) on content and context, specifically on whether they have the metaphysical resources to support a revision to Stalnaker’s rules of communication when it comes to modeling assertion in the context of defective scientific information. Stalnaker has two relevant rules: one says a proposition asserted is true on some but not all possible worlds in the context set; the second excludes truth value gaps, i.e. a proposition must have a truth value. For the former I will argue that gluts might be needed to recover relevant discourse; for the latter, that gaps might be needed (Beall, 2010). I concentrate specifically on a discussion of the alleged inconsistency of the early calculus: in the algorithm employed for making use of this technique, one of the steps is for all purposes an assertion which does not rule out every possible world incompatible with it, contra Stalnaker. The early calculus is admittedly an important mathematical theory, which I discuss among other examples of assertion in the context of defective (inconsistent or indeterminate) information (Brown & Priest, 2004).
I argue that while both Lewis (1986) and Stalnaker (2003) have resources to solve this, Lewis can do so without embracing primitive modality or revising logic, by modifying modal realism in the sense outlined by Berto (2010), where impossible worlds are recovered in a Lewisian system, with suitable adjustments for modeling indeterminate or incomplete information. So a modification of Lewis fares better than Stalnaker’s account when it comes to salvaging these linguistic phenomena in terms of primitive ideology, since it does not require primitive modality, and given the way in which rules for rational communication for assertion in the context of defective information should be modified.
Berto, F. (2010): “Impossible Worlds and Propositions: Against the Parity Thesis”, The Philosophical Quarterly, 60: 471-86.
Beall, JC, and S. Logan (2017): Logic: The Basics. Routledge.
Lewis, D. (1980): “Index, Context, and Content” in S. Kanger and S. Ohman (eds.) Philosophy and Grammar, 79-100.
-------------- (1986): On the Plurality of Worlds. Oxford: Blackwell.
-------------- (1979): “Scorekeeping in a Language Game”, Semantics from Different Points of View: 172-187.
Stalnaker, R. (1978): “Assertion” Syntax and Semantics 9:315-332
-------------- (2003): Ways a world might be: Metaphysical and anti-metaphysical essays. Oxford University Press.
Chair:
Yaroslav Shramko (Kryvyi Rih State Pedagogical University, Ukraine)
ABSTRACT. Reasoning in social context has many important aspects, one of which is the reasoning about strategic abilities of individuals (agents) and groups (coalitions) of individuals to guarantee the achievement of their desired objectives while acting within the entire society. Various logical systems have been proposed for formalising and capturing such reasoning, starting with Coalition Logic (CL) and some extensions of it, introduced in the early 2000s.
Coalition Logic provides a natural, but rather restricted perspective: the agents in the proponent coalition are viewed as acting in full cooperation with each other but in complete opposition to all agents outside of the coalition, which are treated as adversaries.
The strategic interaction in real societies is much more complex, usually involving various patterns combining cooperation and competition. To capture these, more expressive and refined logical frameworks are needed.
In this talk I will first present briefly Coalition Logic and then will introduce and discuss some more expressive and versatile logical systems, including:
i. the Socially Friendly Coalition Logic (SFCL), enabling formal reasoning about strategic abilities of individuals and groups to ensure achievement of their private goals while allowing for cooperation with the entire society;
ii. the complementary Group Protecting Coalition Logic (GPCL), capturing reasoning about strategic abilities of the entire society to cooperate in order to ensure achievement of the societal goals, while simultaneously protecting the abilities of individuals and groups within the society to achieve their individual and group goals.
Finally, I will take a more general perspective leading towards a unifying logic-based framework for strategic reasoning in social context, and will associate it with the related concepts of mechanism design (in game theory) and rational synthesis (in computer science).
ABSTRACT. Consciousness seems particularly hard to fit into our scientific worldview when we consider its subjective aspect. Neurobiological theories that account for consciousness starting from its physical substrate seem unable to explain the problem posed by experience. Why are certain neural processes accompanied by certain experiential features, while others are not? This is the Hard Problem (HP) of consciousness, and any theory that attempts to explain this phenomenal feature of reality needs to address it. In this contribution, we discuss how HP affects the Integrated Information Theory (IIT), which today is regarded as one of the most prominent neurobiological theories of consciousness.
By adopting a top-down approach from phenomenology to the mechanism of consciousness, IIT starts with five axioms that characterize the essential properties of every experience (i.e. intrinsic existence, composition, information, integration, and exclusion). Then, it infers for each axiom a corresponding postulate that specifies the properties that physical systems must satisfy in order to generate consciousness. Finally, IIT holds that experience is a maximally irreducible conceptual structure (MICS), which means that there is an identity between the phenomenological properties of experience and causal properties of a physical system (Oizumi et al. 2014, Tononi 2015).
We propose our own analysis of Chalmers’ Hard Problem (Chalmers 2010, 2018), the Layered View of the Hard Problem, according to which there is a phenomenal layer and a further conceptual layer that together constitute HP. The first makes subjective experience an explanandum, generating the Monolayered Hard Problem (MHP). The second adds epistemic claims about how an explanation of experience should proceed, given the conceivability of zombies’ scenarios, thus creating the Double-Layered Hard Problem (DHP). We take DHP to be the standard Hard Problem as it is discussed in the literature (HP=DHP).
If our analysis is correct, then the relation between HP and IIT depends on the theory’s stance on conceivability scenarios and it presents four possible different outcomes. Firstly, regarding MHP, IIT takes the road of nonreductive fundamental explanation and thus can be said to indirectly attempt to solve it. Secondly, the theory directly denies that there is a DHP for it to answer to due to its methodological choice of a top-down approach. Thirdly, IIT indirectly denies that there is in general a DHP, either by allowing only for functionally, but not physically identical zombies (no conceivability), or by holding the necessary identity between an experience and its MICS (no possibility). If our argument is sound, then IIT and HP in their current state cannot be both true: one of them needs to be revised or rejected.
References
Chalmers, D. (2010). The Character of Consciousness. New York: Oxford University Press.
Chalmers, D. (2018). The Meta-Problem of Consciousness. Journal of Consciousness Studies, 25(9-10), 6–61.
Oizumi, M., Albantakis, L., & Tononi, G. (2014). From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0. PLoS Comput Biol, 10(5), e1003588. http://dx.doi.org/10.1371/journal.pcbi.1003588.
Tononi, G. (2015). Integrated information theory. Scholarpedia, 10(1), 4164. http://dx.doi.org/10.4249/scholarpedia.4164.
What is "biological" about biologically-inspired computational models in cognitive science? Implications for the multiple realisation debate
ABSTRACT. In this talk, I investigate the use of biologically-inspired computational models in cognitive science and their implications for the multiple realisation debate in philosophy of mind. Multiple realisation is when the same state or process can be realised in different ways. For example, flight is a potentially multiply realised process. Birds, planes and helicopters all fly relying on the same aerodynamic principles but their mechanisms for flying differ substantially: birds have two wings which they flap in order to achieve flight, planes also have two wings, but they are static rather than flapping, and helicopters use rotors on the top to produce enough lift for flight. If these “ways” of flying are considered sufficiently different, then we can conclude that flight is a multiply realised process. Philosophers of mind (such as Putnam (1967) and Fodor (1974) but more recently Polger & Shapiro (2016)) have frequently taken multiple realisation to be significant for metaphysical debates about whether mental processes can be reduced to neural processes. The idea being that if mental processes such as pain are multiply realised, then pain does not reduce to a neural process since it can be instantiated in other ways.
The current literature on multiple realisation (for example, Polger and Shapiro (2016) and Aizawa (2018a; 2018b)) doesn’t consider how artificial and engineered systems such as biologically-inspired computational models fit into this debate. I argue that the use of these models in cognitive science motivates the need for a new kind of multiple realisation, which I call ‘engineered multiple realisation’ (or EMR). By this, I mean that scientists aim to create multiple realisations of cognitive capacities (such as object recognition) through engineering systems. I describe various examples of this in cognitive science and explain how these models incorporate biological elements in different ways. Given this, I claim that EMR cannot bear on debates about the nature of mental processes. Instead, I argue that, when building computational models as EMRs, there are different payoffs for incorporating biology into the models. For example, researchers are often motivated to incorporate biological elements into their models in the hope that doing so will lead to better performance of their models (Baldassarre et al. 2017; George 2017; Laszlo & Armstrong 2013). Other researchers incorporate biological elements into models as a way to test hypotheses about the mechanisms underlying human vision (Tarr & Aminoff 2016). I emphasise that these payoffs depend on the goals of different modelling approaches and what the approaches take to be biologically relevant for these goals. By sketching out different approaches and their notions of biological relevance, I conclude that there are many important roles that EMR can play instead of informing traditional metaphysical debates about the reduction of mental to neural processes.
References:
Aizawa, K. (2018a). Multiple Realization, Autonomy, and Integration. In D. M. Kaplan (Ed.), Explanation and Integration in Mind and Brain Science (pp. 215–235). Oxford: Oxford University Press.
Aizawa, K. (2018b). Multiple realization and multiple “ways” of realization: A progress report. Studies in History and Philosophy of Science Part A, 68, 3–9. https://doi.org/10.1016/j.shpsa.2017.11.005
Baldassarre, G., Santucci, V. G., Cartoni, E., & Caligiore, D. (2017). The architecture challenge: Future artificial-intelligence systems will require sophisticated architectures, and knowledge of the brain might guide their construction. Behavioral and Brain Sciences, 40, 25–26.
Fodor, J. A. (1974). Special Sciences (Or: The Disunity of Science as a Working Hypothesis). Synthese, 28(2), 97–115.
George, D. (2017). What can the brain teach us about building artificial intelligence? Behavioral and Brain Sciences, 40, 36–37.
Laszlo, S., & Armstrong, B. C. (2013). Applying the dynamics of post-synaptic potentials to individual units in simulation of temporally extended ERP reading data. Proceedings of the Annual Meeting of the Cognitive Science Society, 35(35).
Polger, T. W., & Shapiro, L. A. (2016). The multiple realization book. Oxford: Oxford University Press.
Putnam, H. (1967). Psychological Predicates. In W. H. Capitan & D. D. Merrill (Eds.), Art, mind and religion. Pittsburgh: University of Pittsburgh Press.
Tarr, M. J., & Aminoff, E. M. (2016). Can big data help us understand human vision? In M. N. Jones (Ed.), Big Data in Cognitive Science (pp. 343–363). New York: Routledge.
Turing Redux: An Enculturation Account of Calculation and Computation
ABSTRACT. Calculation and other mathematical practices play an important role in our cognitive lives by enabling us to organise and navigate our socio-cultural environment. Combined with other mathematical practices, calculation has led to powerful scientific methods that contribute to an in-depth understanding of our world. In this talk, I will argue that calculation is a practice that relies on the embodied manipulation of numerical symbols. It is distributed across the brain, the rest of the body, and the socio-cultural environment.
Phylogenetically, calculation is the result of concerted interactions between human organisms and their socio-cultural environment. Across multiple generations, these interactions have led from approximate quantity estimations and object-tracking to the cumulative cultural evolution of discrete, symbol-based operations. Ontogenetically, the acquisition of competence in calculation is the result of enculturation. Enculturation is a temporally extended process that usually leads to the acquisition of culturally, rather than biologically, evolved cognitive practices (Fabry, 2017; Menary, 2015). It is associated with plastic changes to neural circuitry, action schemata, and motor programs.
With these considerations in place, I will describe the recent cognitive history of computation. Based on Turing’s (1936) seminal work on computable numbers, computation can be characterised as a specific type of calculation. Computational systems, I will show, are hybrid systems, because they are realised by the swift integration of embodied human organisms and cognitive artefacts in different configurations (Brey, 2005). Classically, computations are realised by enculturated human organisms that bodily manipulate numerical symbols using pen and paper. Turing’s (1936) work built on this observation and paved the way towards the design of digital computers. The advent of digital computers has led to an innovative way to carry out hybrid computations: enculturated human organisms can now complete complex computational tasks by being coupled to computational artefacts, i.e., digital computers. Some of these tasks would be very difficult or even impossible to complete (e.g., in statistical data analysis) if human organisms were denied the use of digital computational artefacts.
In sum, in this talk I will argue that computation, understood as a specific kind of calculation, is the result of enculturation. Historically, enculturated computation has enabled the development and refinement of digital computers. These digital computers, in turn, help enculturated human organisms complete computational tasks, because they can be recruited as a reliable component of coupled hybrid human-machine computational systems. These hybrid systems promise to lead to further improvements of digital computational artefacts in the foreseeable future.
References
Brey, P. (2005). The epistemology and ontology of human-computer interaction. Minds and Machines, 15(3–4), 383–398.
Fabry, R. E. (2017). Cognitive innovation, cumulative cultural evolution, and enculturation. Journal of Cognition and Culture, 17(5), 375–395. https://doi.org/10.1163/15685373-12340014
Menary, R. (2015). Mathematical cognition: A case of enculturation. In T. Metzinger & J. M. Windt (Eds.), Open MIND (pp. 1–20). Frankfurt am Main: MIND Group. https://doi.org/10.15502/9783958570818
Turing, A. M. (1936). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 42, 230–265.
Inductive Method, or the Experimental Philosophy of the Royal Society
ABSTRACT. It is widely assumed by students of scientific method that a method of induction is logically impossible. The method of hypothesis seems therefore to be the only possible method open to science. Under the influence of Henri Poincaré, this makes certain theoretical truths conventional, and under the influence of Pierre Duhem, physical theory seems always underdetermined by experimental facts, for logical reasons. Karl Popper and Willard Quine have also provided interesting variations on this influential analysis, extending it to all knowledge claims. Even those others who have supported induction in science have supported it in a supplementary role for evaluating hypotheses in a generally hypothetico-deductive scheme. I suggest that these opinions are mistaken.
There is a method, in a new 17th-century form of induction, or of an experimental philosophy, that was proposed by Francis Bacon and adopted by the Royal Society, which is not undercut by these logical considerations. The key to this method of modern science was that it relied on recording erroneous judgments rather than on error-free judgment as its basis for performing inductions. It uses skeptical techniques to assemble erroneous judgments into “natural histories,” under Francis Bacon’s description, producing records of a “cross-examination of nature.” They yield, by a clever use of experiment, clusters of mutually inconsistent answers concerning what is apparently true. A collection of such experimental anomalies, when it is extensive enough, makes for a puzzle that admits of at most a unique solution. We may regard such unique solutions, when they are found as a result of “error analysis,” as real knowledge to the extent that nothing else can fit the facts. The facts, of course, are the factual errors that are recorded in the Baconian or experimental natural history as apparently true.
Because solutions are unique to each puzzle, they cannot count merely as hypotheses, precisely because nothing else can fit the facts. Unlike hypothetico-deductive analysis, inductive error analysis admits of no alternative to the solved puzzle.
The new method of modern science consists in writing up the experimental record carefully. Each experimental phenomenon must be recorded with a detailed account of the conditions of its appearance. An inductive solution found by error analysis is a model of the reality underlying puzzling appearances. A carefully written experimental record of the conditions under which erroneous experimental appearances are to be found gives us a new kind of “power over nature,” in the following way. When we have hit upon the unique model that works, the model indicates how experimental conditions yield apparent errors. The conditions for production may now be called “causal” conditions, and the errors, their “effects.” This is a “pledge,” as Bacon says, of truth. What we have is not merely a matter of words, as in the hypothetico-deductive method, but we know that we “have got hold of the thing itself” that we can now manipulate.
Bilancie giuste a posta per chiarire questa verità. The importance of the instrument in Guidobaldo dal Monte’s Le mechaniche
ABSTRACT. It is conventional to identify the beginning of modern science with the scientific activity of Galileo Galilei.
Nevertheless, as is known thanks to copious studies of the mathematics of the Renaissance, many intuitions of the Pisan ‘scientist’ were the consequence of a lively scientific debate and a cultural milieu that marked the sixteenth century.
Among the characteristics of modern science, the use of instruments to prove a theory was surely one of the most important. However, the protagonists of the sixteenth century had already gained a certain awareness of the usefulness of instruments for doing science and as a good argument to defend their own theses.
In this paper, I would like to show how, in the controversy about the equilibrium conditions of a balance, a debate that involved the main mathematicians of the time, Guidobaldo dal Monte, the patron of Galileo, often used experiments and instruments to prove the indifferent equilibrium. This approach is particularly evident in Le mechaniche dell'illustriss. sig. Guido Ubaldo de' Marchesi del Monte: Tradotte in volgare dal sig. Filippo Pigafetta (1581), namely the Italian translation of the Mechanicorum Liber (1577), the first printed text entirely dedicated to mechanics.
References:
Becchi, A., Bertoloni Meli, D., Gamba, E. (eds), Guidobaldo dal Monte (1545-1607). Theory and Practice of the Mathematical Disciplines from Urbino to Europe, Max Planck Research Library for the History and Development of Knowledge, Berlin, Edition Open Access.
Drake, S., Drabkin, I. E., 1969, Mechanics in Sixteenth-Century Italy. Selections from Tartaglia, Benedetti, Guido Ubaldo, e Galileo, Madison-Milwaukee-London, The University of Wisconsin Press.
Favaro, A., 1899-1900, «Due lettere inedite di Guidobaldo del Monte a Giacomo Contarini», Atti del Reale Istituto Veneto di Scienze, Lettere ed Arti, LIX, 307-310.
Frank, M., 2011, Guidobaldo dal Monte's Mechanics in Context. A Research on the Connections between his Mechanical Work and his Biography and Environment, Ph.D. Thesis.
Galilei, G., 1965, Le opere di Galileo Galilei, 20 vols., Favaro A. (ed), Firenze, Barbera.
Gamba, E., 2001, «Le scienze fisiche e matematiche dal Quattrocento al Seicento», Pesaro nell’età dei Della Rovere, 3 vols., Venezia, Marsilio, vol. II, pp. 87–103.
Gamba, E., Montebelli, V., Le scienze a Urbino nel tardo Rinascimento, Urbino, QuattroVenti.
Guidobaldo dal Monte, 1577, Mechanicorum Liber, Pesaro.
Pigafetta, F., 1581, Le mechaniche dell'illustriss. sig. Guido Ubaldo de' Marchesi del Monte: Tradotte in volgare dal sig. Filippo Pigafetta, Venezia.
Renn, J. (ed.), 2001, Galileo in Context, Cambridge, Cambridge University Press.
Renn, J., Damerow, P, 2010, Guidobaldo dal Monte's Mechanicorum Liber, Max Planck Research Library for the History and Development of Knowledge, Berlin, Edition Open Access.
Renn, J., Damerow, P., 2012, The Equilibrium Controversy. Guidobaldo del Monte’s Critical Notes on the Mechanics of Jordanus and Benedetti and their Historical and Conceptual Background, Max Planck Research Library for the History and Development of Knowledge, Berlin, Edition Open Access.
Rose, P. L., 1975, The Italian Renaissance of Mathematics, Genève, Droz.
Pablo Lorenzano (National University of Quilmes/CONICET, Argentina)
Laws, Causation and Explanations in Classical Genetics: A Model-theoretic Account
ABSTRACT. The aim of this communication is to analyze the kind of explanations usually given in Classical Genetics. Explanations in biology have intriguing aspects to both biologists and philosophers. A summary of these aspects are found in the introduction to the anthology Explanation in Biology: An Enquiry into the Diversity of Explanatory Patterns in the Life Sciences (Braillard & Malaterre 2015):
We will outline four of the most salient problems in the current debate. These problems are related to (1) whether natural laws exist in biology, (2) whether causation plays a specific explanatory role in biology, (3) whether other forms of explanation – e.g., functional or teleological – are also needed, and (4) whether the recent mechanistic type model of explanation that brings together some form of law-like generalizations and of causation fulfill all expectations. (p. 9)
With our analysis of explanations in Classical Genetics the last problem, which relates to the first two, will be addressed straightforwardly. But instead of doing it with “the recent mechanistic type model of explanation”, it will be done with a model-theoretic, structuralist account of explanation.
First, explanations in Classical Genetics will be presented in the traditional format of explanations as summarized by arguments.
Later on, the nature of these explanations will be discussed by comparing them with explanations in another area of science, namely Classical Mechanics.
To clarify the situation, and to carry out an analysis of explanations in Classical Genetics, notions of the structuralist view of theories ‒ especially those of theory-net, fundamental law (or guiding principle), specialization, and special law ‒ will be applied to Classical Genetics. In this application, Classical Genetics’ fundamental law/guiding principle will be made explicit.
Next, in order to make more transparent the ontological commitments of Classical Genetics (some of which would play a causal role), explanations will be presented in a model-theoretic, structuralist format as ampliative embeddings into nomic patterns within theory-nets.
Finally, the paper concludes with a discussion of the presented analysis, arguing in favor of the model-theoretic, structuralist account of explanation “that brings together some form of law-like generalizations and of causation”.
References
Balzer, W., Moulines, C.U. and J. Sneed (1987), An Architectonic for Science. The Structuralist Program, Dordrecht: Reidel.
Braillard, P.-A. and C. Malaterre (eds.) (2015), Explanation in Biology: An Enquiry into the Diversity of Explanatory Patterns in the Life Sciences, Netherlands: Springer.
Carnap, R. (1950), “Empiricism, Semantics and Ontology”, Revue Internationale de Philosophie 4: 20-40.
Díez, J.A. (2014), “Scientific w-Explanation as Ampliative, Specialized Embedding: A Neo-Hempelian Account”, Erkenntnis 79(8): 1413-1443.
Reutlinger, A. (2014), “The Generalizations of Biology: Historical and Contingent?”, in M.I. Kaiser et al. (eds.), Explanation in the Special Sciences: The Case of Biology and History, Dordrecht: Springer, pp. 131-153.
Woodward, J. (2001), “Law and Explanation in Biology: Invariance Is the Kind of Stability That Matters”, Philosophy of Science 68(1): 1-20.
Woodward, J. (2010), “Causation in Biology: Stability, Specificity, and the Choice of Levels of Explanation”, Biology & Philosophy 25(3): 287-318.
Waters, K. (2007), “Causes that Make a Difference”, Journal of Philosophy CIV: 551-579.
11:30
Jinyeong Gim (Seoul National University, South Korea)
Category Theory as a Formal Language of the Mechanistic Philosophy
ABSTRACT. In this paper, I aim to recommend category theory to mechanists in the philosophy of science, particularly as a mathematical formal language for some basic ideas in the mechanistic philosophy. Category theory was introduced to capture structure-preserving mappings between different mathematical objects. (MacLane 1998) I first show that category theory can be employed to formalize fundamental characteristics of biological mechanisms such as ontological dualism (entities and activities), the hierarchical relation between a mechanism and its components, and the spatiotemporal organizational features of mechanisms. (Machamer, Darden, and Craver 2000; Bechtel and Abrahamsen 2005) Further, I argue that category theory is a more promising formal framework in that it easily overcomes some skeptical views of the so-called quantitative approach based on Bayesian nets or causal graph theory. (Casini et al. 2011; Gebharter 2014; Gebharter and Schurz 2016; Casini 2016) Proponents of this quantitative approach emphasize that their tool is useful for capturing two characteristics of mechanisms, the hierarchy and the causal connectivity of their components. However, some critics have pointed out that this approach fails to capture causal processes that involve complex spatial and chemical-structural relations (Kaiser 2016) and to defend the property of modularity. (Weber 2016) In contrast, category theory allows one to formalize spatial and chemical information by assigning morphisms from one object to another. And each category is not only discrete but also connected to others by functors, so category theory provides us a way to grasp modularity. By applying category theory to the mechanism of protein synthesis, I will defend a new qualitative formal framework.
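As a rough indication of the kind of structure category theory supplies, the following toy Python sketch encodes a small component-level category of stages and activities, a coarser mechanism-level category, and a functor-like mapping between them. The protein-synthesis gloss, the omission of identity morphisms, and all names are invented for illustration; this is not the formalization developed in the paper.

```python
from itertools import product

class FiniteCategory:
    """A finite category given by explicit data (identity morphisms omitted
    for brevity, so this is strictly a toy)."""
    def __init__(self, objects, morphisms, compose):
        self.objects = objects        # set of object names
        self.morphisms = morphisms    # morphism name -> (source, target)
        self.compose = compose        # (g, f) -> name of the composite g o f

    def check_closed(self):
        """Every composable pair of listed morphisms has a listed composite."""
        for g, f in product(self.morphisms, repeat=2):
            if self.morphisms[f][1] == self.morphisms[g][0]:
                assert (g, f) in self.compose, f"missing composite {g} o {f}"

# Component level: entities as objects, activities as morphisms.
C = FiniteCategory(
    objects={"DNA", "mRNA", "protein"},
    morphisms={"transcription": ("DNA", "mRNA"),
               "translation": ("mRNA", "protein"),
               "expression": ("DNA", "protein")},
    compose={("translation", "transcription"): "expression"},
)
C.check_closed()

# Mechanism level: a coarser description of the same process.
M = FiniteCategory(
    objects={"gene", "gene-product"},
    morphisms={"stay": ("gene", "gene"),
               "synthesis": ("gene", "gene-product")},
    compose={("synthesis", "stay"): "synthesis", ("stay", "stay"): "stay"},
)
M.check_closed()

# A functor-like mapping F from the component level to the mechanism level:
# it sends stages to coarser objects and activities to coarser activities.
F_obj = {"DNA": "gene", "mRNA": "gene", "protein": "gene-product"}
F_mor = {"transcription": "stay", "translation": "synthesis", "expression": "synthesis"}

# F preserves sources/targets and the recorded compositions.
for name, (s, t) in C.morphisms.items():
    assert M.morphisms[F_mor[name]] == (F_obj[s], F_obj[t])
for (g, f), gf in C.compose.items():
    assert M.compose[(F_mor[g], F_mor[f])] == F_mor[gf]
print("The mapping respects the listed categorical structure.")
```

The functor here plays the role of the hierarchy relation: component-level activities are mapped onto the coarser mechanism-level activity they jointly realize.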
References
Bechtel, W., and Abrahamsen, A. (2005). ‘Explanation: A Mechanist Alternative,’ Studies in History and Philosophy of Biological and Biomedical Sciences, 36(2), 421–441.
Casini, L. (2016). ‘How to Model Mechanistic Hierarchies,’ Philosophy of Science, 83, 946-958.
Casini, L., Illari, P. M., Russo, F., & Williamson, J. (2011). ‘Models for Prediction, Explanation and Control: Recursive Bayesian Networks,’ Theoria – An International Journal for Theory, History and Foundations of Science, 26(70), 5–33.
Gebharter, A. (2014). ‘A Formal Framework for Representing Mechanisms?,’ Philosophy of Science, 81(1), 138-153.
Gebharter, A. and Schurz, G. (2016). ‘A Modeling Approach for Mechanisms Featuring Causal Cycles,’ Philosophy of Science, 83, 934-945.
Kaiser, M. (2016). ‘On the Limits of Causal Modeling: Spatially-Structurally Complex Biological Phenomena,’ Philosophy of Science, 83, 921-933.
Machamer, P., L. Darden and C. F. Craver (2000) ‘Thinking about Mechanisms,’ Philosophy of Science 67: 1–25.
Mac Lane, Saunders (1998). Categories for the Working Mathematician. Graduate Texts in Mathematics, 5 (2nd ed.). Springer-Verlag.
Weber, M. (2016). ‘On the Incompatibility of Dynamical Biological Mechanisms and Causal Graphs,’ Philosophy of Science, 83, 959-971.
12:00
Gregor Greslehner (ImmunoConcept, CNRS & University of Bordeaux, France)
What is the explanatory role of the structure-function relationship in immunology?
ABSTRACT. According to a common slogan in the life sciences, "structure determines function". While both 'structure' and 'function' are ambiguous terms that denote conceptually different things at various levels of organization, the focus has traditionally been on the three-dimensional shapes of individual molecules, including molecular patterns that play a central role in immunology. The specificity of binding sites of antibodies and pattern recognition receptors has given rise to the so-called "Janeway paradigm", according to which the three-dimensional shape of molecules is the key to understanding immunological function: microbial pathogen-associated molecular patterns (PAMPs) bind specifically to pattern recognition receptors (PRRs) of the host, and by "recognizing" signature molecular motifs of pathogens an immunological reaction is triggered. If correct, these molecular structures would be crucial for solving the riddle of how the immune system is able to distinguish between self and non-self – and between harmful and beneficial commensals.
However, this narrative faces a major challenge, as molecular motifs are shared among pathogens and symbiotic commensals alike. Both express a similar set of molecular patterns that are specific for prokaryotes. Other instances are known in which one and the same molecular motif can trigger opposing immune reactions, depending on the presence or absence of additional signals in the cellular context. It is speculated that a second "danger" signal might be needed in order to trigger an immune response. Whatever the nature of this second signal might be, it will require stepping away from the fixation on molecular patterns. I argue that it is rather structural motifs of networks which carry the explanatory weight in these immunological processes. I suggest distinguishing between different meanings of 'structure' and 'function', to which separate explanatory roles can be attributed. While the three-dimensional shape of signature molecules (structure1) can be used to explain their function1 – understood as biochemical properties and activities – their immunological function2 – biological roles, like immunogenicity – can only be explained with respect to higher-level structures2, i.e. the interaction networks of molecules and cells. These different explanatory roles also imply different explanatory accounts. The former remains within a physico-chemical framework, whereas the latter rather calls for mechanistic and topological explanations.
Studying the interaction topology and dynamics of structures2 with mathematical tools, modeled as signaling games, promises to shed new light on these interaction processes, which are increasingly described by immunologists as equilibrium states between multiple interaction partners. Rather than focusing only on the presence or absence of molecular signatures, topological properties explain the features of these networks and their activities beyond the molecular interactions between PAMPs and PRRs. This way, opposing effects resulting from the same kind of molecular structure1 can be explained by differences in their "downstream" organizational structure2. While still preserving the centrality of structure-function relationships, I suggest keeping these conceptually different notions of 'structure' and 'function' and their respective explanatory roles apart.
ABSTRACT. In this talk we develop an analysis of dispositions on the basis of causal Bayes nets (CBNs). Causal modeling techniques such as CBNs have already been applied to various philosophical problems (see, e.g., Beckers, ms; Gebharter, 2017a; Hitchcock, 2016; Meek & Glymour, 1994; Schaffer, 2016). Using the CBN formalism as a framework for analyzing philosophical concepts and issues intimately connected to causation seems promising for several reasons. One advantage of CBNs is that they make causation empirically tangible. The CBN framework provides powerful tools for formulating and testing causal hypotheses, for making predictions, and for the discovery of causal structure (see, e.g., Spirtes, Glymour, & Scheines, 2000). In addition, it can be shown that the theory of CBNs satisfies standards successful empirical theories satisfy as well: It provides the best explanation of certain empirical phenomena and can, as a whole theory, be tested on empirical grounds (Schurz & Gebharter, 2016).
In the following we use CBNs to analyze dispositions as causal input-output structures. Such an analysis of dispositions comes with several advantages: It allows one to apply powerful causal discovery methods to find and specify dispositions. It is also flexible enough to account for the fact that dispositions might change their behavior in different circumstances. In other words, one and the same disposition may give rise to different counterfactual conditionals if its causal environment is changed. The CBN framework can be used to study such behavior of dispositions in different causal environments on empirical grounds. Because of this flexibility, our analysis can also provide novel solutions to philosophical problems posed by masks, mimickers, and finks which, one way or another, plague all other accounts of dispositions currently on the market. According to Cross (2012), the “recent literature on dispositions can be characterized helpfully, if imperfectly, as a continuing reaction to this family of counterexamples” (Cross, 2012, p. 116). Another advantage of our analysis is that it allows for a uniform representation of probabilistic and non-probabilistic dispositions. Other analyses of dispositions often either have trouble switching from non-probabilistic dispositions to probabilistic dispositions, or exclude probabilistic dispositions altogether.
The talk is structured as follows: In part 1 we introduce dispositions and the problems arising for classical dispositional theories due to masks, mimickers, and finks. Then, in part 2, we present the basics of the CBN framework and our proposal for an analysis of dispositions within this particular framework. We highlight several advantages of our analysis. In part 3 we finally show how our analysis of dispositions can avoid the problems with masks, mimickers, and finks classical accounts have to face. We illustrate how these problems can be solved by means of three prominent exemplary scenarios which shall stand proxy for all kinds of masking, mimicking, and finking cases.
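A minimal sketch of the kind of causal input-output structure described above, with a toy "mask" variable, might look as follows; all variable names and numerical parameters are invented for illustration and are not the networks analyzed in the talk.

```python
from itertools import product

# Toy causal Bayes net for a disposition as a causal input-output structure.
# Invented variables: D: disposition base (fragility), M: mask (protective
# wrapping), S: stimulus (striking), R: response (breaking); S -> R <- D, M -> R.
P_D = {1: 1.0, 0: 0.0}   # the object is stipulated to be fragile
P_M = {1: 0.3, 0: 0.7}   # the mask is present in 30% of environments

def p_R1(s, d, m):
    """P(R = 1 | S = s, D = d, M = m)."""
    if s and d:
        return 0.05 if m else 0.95
    return 0.0

def p_response_given_do_S(s, fixed_m=None):
    """P(R = 1 | do(S = s)), optionally within a fixed mask environment."""
    total = 0.0
    for d, m in product((0, 1), repeat=2):
        if fixed_m is not None and m != fixed_m:
            continue
        weight = P_D[d] * (1.0 if fixed_m is not None else P_M[m])
        total += weight * p_R1(s, d, m)
    return total

print(p_response_given_do_S(1))              # environments averaged: 0.68
print(p_response_given_do_S(1, fixed_m=0))   # unmasked environment:  0.95
print(p_response_given_do_S(1, fixed_m=1))   # masked environment:    0.05
```

One and the same disposition base supports different counterfactuals depending on its causal environment, which is the kind of environment-relative behaviour discussed above.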
References
Beckers, S. (ms). Causal modelling and Frankfurt cases.
Cross, T. (2012). Recent work on dispositions. Analysis, 72(1), 115-124.
Gebharter, A. (2017a). Causal exclusion and causal Bayes nets. Philosophy and Phenomenological Research, 95(2), 153-375.
Hitchcock, C. (2016). Conditioning, intervening, and decision. Synthese, 193(4), 1157-1176.
Meek, C., & Glymour, C. (1994, December). Conditioning and intervening. British Journal for the Philosophy of Science, 45(4), 1001-1021.
Schaffer, J. (2016). Grounding in the image of causation. Philosophical Studies, 173(1), 49-100.
Schurz, G., & Gebharter, A. (2016). Causality as a theoretical concept: Explanatory warrant and empirical content of the theory of causal nets. Synthese, 193(4), 1073-1103.
Spirtes, P., Glymour, C., & Scheines, R. (2000). Causation, prediction, and search (2nd ed.). Cambridge: MIT Press.
ABSTRACT. In contrast to selective abduction and other kinds of inferences, creative abduction is intended as an inference method for generating hypotheses featuring new theoretical concepts on the basis of empirical phenomena. Most philosophers of science are quite skeptical about whether a general approach toward such a “logic of scientific inquiry” can be fruitful. However, since theoretical concepts are intimately connected to empirical phenomena via dispositions, a restriction of the domain of application of such an approach to empirically correlated dispositions might be more promising. Schurz (2008) takes up this idea and differentiates between different patterns of abduction. He then argues for the view that at least one kind of creative abduction can be theoretically justified. In a nutshell, his approach is based on the idea that inferences to theoretical concepts unifying empirical correlations among dispositions can be justified by Reichenbach’s (1956) principle of the common cause.
In this talk we take up Schurz’ (2008) proposal to combine creative abduction and principles of causation. We model cases of successful creative abduction within a Bayes net framework that can, if causally interpreted, be seen as a generalization of Reichenbach’s (1956) ideas. We specify general conditions that have to be satisfied in order to generate hypotheses involving new theoretical concepts and describe their unificatory power in a more fine-grained way. This will allow us to handle cases in which we can only measure non-strict (probabilistic) empirical dependencies among dispositions and to shed new light on several other issues in philosophy of science. We consider our analysis of successful instances of creative abduction by means of Bayes net models as another step toward a unified Bayesian philosophy of science in the sense of Sprenger and Hartmann (in press).
The talk is structured as follows: We start by introducing Schurz’ (2008) approach to creative abduction. We also explain how it allows for unifying strict empirical correlations among dispositions and how it can be justified by Reichenbach’s (1956) principle of the common cause. We then briefly introduce the Bayes net formalism, present our proposal for how to model successful cases of creative abduction within this particular framework, and identify necessary conditions for such cases. Next we investigate the unificatory power gained by creative abduction in the Bayesian setting and draw a comparison with the unificatory power creative abduction provides in the strict setting. Subsequently, we outline possible applications of our analysis to other topics within philosophy of science. In particular, we discuss the generation of use-novel predictions, new possible ways of applying Bayesian confirmation theory, a possible (partial) solution to the problem of underdetermination, and the connection of modeling successful instances of creative abduction Bayesian style to epistemic challenges tackled in the causal inference literature.
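A minimal simulation of the screening-off pattern that this common-cause justification relies on might look as follows; the latent variable C plays the role of the abduced theoretical concept, and all distributions are invented for illustration rather than taken from Schurz (2008) or from the models discussed in the talk.

```python
import random
from itertools import combinations

random.seed(0)

def phi(xs, ys):
    """Phi coefficient (Pearson correlation for binary variables)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx, vy = mx * (1 - mx), my * (1 - my)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

# Latent "theoretical concept" C and three measured dispositions D1, D2, D3
# that depend on it noisily. All parameters are invented for illustration.
N = 20000
C = [random.random() < 0.5 for _ in range(N)]
def noisy(c): return random.random() < (0.9 if c else 0.1)
D = [[noisy(c) for c in C] for _ in range(3)]

# Unconditional correlations among the dispositions: clearly non-zero.
for i, j in combinations(range(3), 2):
    print(f"phi(D{i+1}, D{j+1}) = {phi(D[i], D[j]):.2f}")

# Conditional on the abduced common cause C, the correlations (approximately)
# vanish: the Reichenbach-style screening-off behind the justification.
for value in (False, True):
    idx = [k for k, c in enumerate(C) if c == value]
    for i, j in combinations(range(3), 2):
        print(f"phi(D{i+1}, D{j+1} | C={value}) = "
              f"{phi([D[i][k] for k in idx], [D[j][k] for k in idx]):.2f}")
```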
References
Reichenbach, H. (1956). The direction of time. Berkeley: University of California Press.
Schurz, G. (2008). Patterns of Abduction. Synthese, 164(2), 201-234.
Sprenger, J., & Hartmann, S. (in press). Bayesian philosophy of science. Oxford: Oxford University Press.
Adrian Groza (Technical University of Cluj-Napoca, Romania)
Differences of discourse understanding between human and software agents
ABSTRACT. We are interested in the differences between how a human agent and a logic-based software agent interpret a text in natural language. When reading a narrative, the human agent has a single interpretation model, namely the preferred model among those consistent with the available information. The model is gradually adjusted as the story proceeds. By contrast, a logic-based software agent works with a finite set of many models at the same time. Most strikingly, the number of these models is huge, even for simple narratives. Consider the love story between Abelard and Heloise, with the text "Abelard and Heloise are in love". Assume that, during natural language processing, the statement is interpreted as: Abelard is in love and Heloise is in love. The formalisation in First Order Logic is:
(A1) ∃x, love(abelard,x)
(A2) ∃x, love(heloise,x)
How many models does a model generator find for axioms (A1) and (A2)? Using MACE4 [McC03], with the domain closed to 4 individuals, there are 278528 models. All these models are equally plausible for the software agent.
To reduce this number, the agent needs to add several constraints. First, the unique name assumption can be added: (A3) abelard != heloise. Still, there are 163840 models. Second, we assume that the love relation is not narcissistic: (A4) ∀x, ¬love(x,x). That leads to 5120 models. Third, we add the somewhat strong constraint that someone can love only one person at a time, that is (A5) love(x,y) ∧ love(x,z) → y=z. The remaining models number 80. Unfortunately, love is not a symmetric relation; hence, we cannot add the axiom ∀x,y love(x,y) ↔ love(y,x). Instead we exploit the fact that some of these models are isomorphic. After removing isomorphic interpretations, we keep 74 non-isomorphic models. Note that there are 2 Skolem constants after converting axioms (A1) and (A2). If we are not interested in the love relations of the individuals represented by these constants, we can ignore them. This would result in 17 models.
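A minimal Python sketch of this kind of enumeration, over a 3-element domain chosen purely for speed, is given below; it is a naive enumerator rather than MACE4, so its raw totals will differ from the figures above (MACE4 Skolemizes the existential axioms and applies its own symmetry-breaking conventions), but it exhibits the same monotone shrinking of the model space as domain knowledge is added.

```python
from itertools import product

DOMAIN = range(3)   # tiny domain, purely illustrative (the MACE4 runs above use 4)
PAIRS = [(x, y) for x in DOMAIN for y in DOMAIN]

def interpretations():
    """Yield (abelard, heloise, love), with love a set of ordered pairs."""
    for a, h in product(DOMAIN, repeat=2):
        for bits in product((False, True), repeat=len(PAIRS)):
            yield a, h, {p for p, b in zip(PAIRS, bits) if b}

def a1(a, h, love): return any((a, x) in love for x in DOMAIN)      # (A1) Ex love(abelard, x)
def a2(a, h, love): return any((h, x) in love for x in DOMAIN)      # (A2) Ex love(heloise, x)
def a3(a, h, love): return a != h                                   # (A3) unique names
def a4(a, h, love): return all((x, x) not in love for x in DOMAIN)  # (A4) non-narcissistic
def a5(a, h, love):                                                 # (A5) at most one beloved
    return all(y == z or not ((x, y) in love and (x, z) in love)
               for x in DOMAIN for y in DOMAIN for z in DOMAIN)

STAGES = [("A1-A2", (a1, a2)),
          ("A1-A3", (a1, a2, a3)),
          ("A1-A4", (a1, a2, a3, a4)),
          ("A1-A5", (a1, a2, a3, a4, a5))]

for label, axioms in STAGES:
    count = sum(1 for a, h, love in interpretations()
                if all(ax(a, h, love) for ax in axioms))
    print(label, "->", count, "models over a 3-element domain")
```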
Some observations follow. First, the order in which we apply the reductions is computationally relevant. For instance, it would be prohibitively expensive to search for isomorphic models in the initial two steps, when there are 278528 and 163840 models. Hence, the strategy is to add domain knowledge to the initial narrative discourse, and then to search for the isomorphic structures. Second, which domain knowledge to add is subject to interpretation. For instance, axiom (A5) might be too strong. Third, for some reasoning tasks (e.g. solving lateral thinking puzzles [DBZ10]) keeping all possible models might be desirable. Fourth, we argue that text models built with machine learning applied to big data would benefit from a crash diet. Along this line, we try to extract as much as we can from each statement, instead of statistically analysing the entire corpus. That is, the model of the story is built bottom-up, and not top-down as machine learning does.
Both the human reader and the software agent aim to keep the story more intelligible and tractable. But they apply different reduction strategies. On one hand, humans understand stories by inferring the mental states (e.g. motivations, goals) of the characters, by applying projections of known stories into the target narrative [Her04], by extensively using commonsense reasoning [Mue14], or by closing the world as much as possible. On the other hand, logic-based agents reduce the models by formalising discourse representation theories [KR13], by adding domain knowledge, or by identifying isomorphisms.
We also analyse how the number of interpretation models varies as the story evolves. Sentences introducing new objects and relations increase the number of models. Sentences introducing constraints on the existing objects and relations contribute to the removal of some models. Adding domain knowledge also contributes to model removal. One question is how to generate stories that end with a single interpretation model for software agents. Another issue regards the amount of domain knowledge and commonsense knowledge to add, and which reduction strategy is best for keeping the number of models computationally feasible.
To sum up, we compare the reduction strategies of humans and software agents to keep the discourse more intelligible and tractable.
[DBZ10] Edward De Bono and Efrem Zimbalist. Lateral thinking. Viking, 2010.
[Her04] David Herman. Story logic: Problems and possibilities of narrative. U of Nebraska Press, 2004.
[KR13] Hans Kamp and Uwe Reyle. From discourse to logic: Introduction to model theoretic semantics of natural language, formal logic and discourse representation theory, volume 42. Springer Science & Business Media, 2013.
[McC03] William McCune. Mace4 reference manual and guide. arXiv preprint, 2003.
Logic as metaphilosophy? Remarks on the mutual relations of logic and philosophy
ABSTRACT. Logic as a metaphilosophy?
The project of logical philosophy was announced by Jerzy Perzanowski as an alternative to various metaphilosophical projects.
1. Philosophy or its history?
It is not known exactly what philosophy is about. The self-identification of a philosopher is more problematic than that of an artist or a scientist. We should distinguish a philosopher from a historian of philosophy. The former sees problems and tries to solve them. Why should he look for inspiration in the past and not in the surrounding reality?
The reference to old masters is common among philosophers. It is not known what motives drive them, nor to what extent knowledge of, say, Aristotle's concept of the soul is necessary to understand and solve the contemporary mind-body problem. Treating the history of philosophy as an "eternal now" means that every reference to a noble name, every display of perhaps nth-order "relationships" between classic A and classic B, is treated within philosophy with the same seriousness as an attempt to solve any problem called philosophical. For example, Pythagorean statements about the mysticism of numbers remain a fascinating curiosity, but in no sense are they part of contemporary mathematics, nor does any mathematician have the impression that this state of affairs in any sense impoverishes his discipline. There are no fundamental contraindications against treating the past of philosophy similarly.
2. Logic towards philosophy.
I will satisfy myself with the conviction that logic is a systematized set of reliable inference schemes.
It is not at all problematic to indicate the possible mutual relations between logic and philosophy. Their mutual relations are, so to speak, multifaceted and multi-level at the same time. I start from the hypothetical separateness of the domains of logic and philosophy, considering a symmetrical relationship with a different distribution of accents:
(1) The relationship between logic and philosophy - with the accent on logic - has three aspects:
(a) logic and philosophy - the aspect of the relation of logic to the other philosophical sciences;
(b) logic in philosophy - aspect of the role of logic in philosophy (criticism of philosophical reasoning);
(c) The third aspect of the relationship between logic and philosophy is the logic of philosophy, which treats philosophy itself: the way its components and factors are organized and connected.
(2) The relation of philosophy and logic - with an emphasis on philosophy - is analogous and also has three aspects.
(a) Philosophy and logic: this aspect describes, among other things, the changes that particular philosophical disciplines undergo in contact with logic;
(b) Philosophy in logic comprises the criticism of logic, treating, among other things, the philosophical entanglements of logical theories;
(c) The philosophy of logic is associated with philosophy in logic, focusing on determining the status, object, and source of logic.
A purely formal representation of the relationship cannot be controversial, since all possibilities are considered.
Depending on the attitude to the role of logic in philosophy, I will distinguish prelogical, alogical and illogical philosophy.
Prelogical philosophy comprises those trends in the history of the discipline that, in their argumentation, explicitly assume or refer to decisions known from logic.
Alogical philosophy is one whose theses are explicitly articulated in isolation from logic, but whose interpretation in the language of logic does not require complex and problematic hermeneutic procedures. Finally, illogical philosophy is one that programmatically shuns the tools of logic and does not yield to intersubjective interpretive procedures.
Philosophy would be barren if it could not determine the universe of all possibilities, indicate the source and method of generating this space, differentiate the objects in it, link them, and outline the system that unites them into a mechanism. Such a philosophical task makes philosophy a discipline not only related but even subordinate to logic. At the same time, it has the advantage that the realization of such a task, in spite of technical difficulties, is not hopeless.
The result of such a plan of research is to define the conditions that a logical philosophy should fulfil, that is, a philosophy that is deductively closed and organized into theories in the logical sense. Treating logic as a metaphilosophy is possible on the condition of deductive ordering, that is, of organizing philosophy, by means of logic, into theories. The product of logic is not only conceptual analyses (though their role cannot be overestimated) but mainly the (formal) philosophical theories prepared by them, that is, chains of assertions expressed in an appropriate language and suitably interrelated.
ABSTRACT. We report some progress [8] in generating numerical evidence for the hypothesis of hypercomputing minds.
There is an ongoing discussion [3] on the implications of Goedel’s Incompleteness Theorems for Computationalism.
Roughly speaking, the thesis of classical Computationalism states that everyone is a Turing Machine of some finite complexity.
In a more formal way, let p denote Persons, m Turing Machines and let cpl(m) measure the complexity of a Turing Machine in terms of states and transitions and let k be an Integer.
According to [4] we might state the thesis of Computationalism as:
[C] ∀p∃m∃k (p=m ∧ cpl(m) ≤k)
Now, recall that there are too many functions f: N→N for them all to be computable. Hence uncomputable functions must exist, [2].
In 1962, Rado [1] presented the uncomputable function Σ (a.k.a. the Busy Beaver function).
Σ(n) is the largest number of 1's left on the tape by a halting binary n-state Turing machine when started on an all 0-tape. The Σ function is uncomputable, because otherwise it would solve the Halting problem, which is known to be undecidable. Hence, no single Turing Machine can compute Σ for all n.
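To make the definition concrete, a minimal Python sketch of a binary Turing machine simulator is given below; the 2-state transition table is the standard textbook champion (an assumption of this illustration, not a machine from [8]), and running it reproduces Σ(2) = 4 in 6 steps.

```python
def run(transitions, start="A", halt="H", max_steps=10**6):
    """Simulate a binary Turing machine on an all-0 tape.
    Returns (number of 1s on the tape, steps taken) if the machine halts."""
    tape, head, state = {}, 0, start
    for step in range(1, max_steps + 1):
        write, move, state = transitions[(state, tape.get(head, 0))]
        tape[head] = write
        head += 1 if move == "R" else -1
        if state == halt:
            return sum(tape.values()), step
    raise RuntimeError("no halt within the step bound")

# The standard 2-state, 2-symbol busy-beaver champion: Sigma(2) = 4, S(2) = 6.
bb2 = {("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
       ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "H")}

print(run(bb2))   # prints (4, 6)
```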
Recently Bringsjord et al. brought forward [4] a New Goedelian Argument for Hypercomputing Minds based on the Σ function. It is their basic assumption that
[A] If the human mind can compute Σ(n), it is eventually able to compute Σ(n+1).
In [4] a formal argument is presented showing that if [A] holds then [C] cannot hold.
We provide further evidence for [A] by applying novel AI based methods to finally compute the Σ function for one further, hitherto unknown argument, namely n = 5.
Note that Σ(n) is currently known for n=1,2,3,4 but only lower bounds are in place for n > 4, see e.g. [5,6,7].
Our AI-based methods enable us to move closer to proving Σ(5) = 4098, by classifying large sets of 5-state binary Turing Machines using data mining on their tape number sequences.
For these sets of TMs, we can identify the recurrence relation R[tpn] of the tape number sequence tpn.
Using R[tpn], we can then produce an automated induction proof R[tpn] -> R[tpn+1], showing that the Turing Machine m associated with R[tpn] does NOT halt and hence does not contribute to Σ(5).
Together with conventional methods like backtracking, we are coming very close to a complete classification of all 5-state, binary TMs and are thus zooming in on computing Σ(5) = 4098.
We thus provide one additional step on the ladder of computing Σ(n) for increasing values of n, supplying numerical evidence for Bringsjord's [4] new Goedelian Argument [A], which in turn is another piece of evidence that the human mind operates above the Turing limit.
References
[1] T. Rado, Bell System Technical Journal, 41, 1962, 877-884.
[2] G. S. Boolos, R. C. Jeffrey, Computability and Logic, Cambridge, 1989.
[3] L. Horsten, P. Welch, Gödel's Disjunction: The scope and limits of mathematical knowledge, Oxford University Press, 2016.
[4] S. Bringsjord et al., A New Goedelian Argument for Hypercomputing Minds Based on the Busy Beaver Problem, Journal of Applied Mathematics and Computation, 176, 2006, 516.
[5] P. Michel, Arch. Math. Logic, 32, 1993, 351-367.
[6] R. Machlin, Q. Stout, Physica D, 42, 1990, 85-98, and references therein.
[7] H. Marxen, J. Buntrock, Bull. EATCS, 40, 1990, 247-251.
[8] J. Hertel, The Computation of the Value of Rado’s Non-Computable Σ Function for 5-State Turing Machines, work in progress.
Mechanistic Explanations and Components of Social Mechanisms
ABSTRACT. This paper addresses the question of what the components of social mechanisms in mechanistic explanations of social macro-phenomena must be (henceforth “the question of components”). The analytical sociology’s initial position and new proposals by analytical sociologists are discussed. It is argued that all of them are faced with outstanding difficulties. Subsequently, a minimal requirement regarding the components of social mechanisms is introduced.
The two core ideas of analytical sociology are the principle of mechanism-based explanations and structural individualism (Hedström 2005). It is believed that, in the social sciences, the principle of mechanism-based explanations implies structural individualism. The reason is that there are social mechanisms only at the individual level (Hedström 2005). The commitment to structural individualism leads to analytical sociology’s initial position regarding the question of components: a social mechanism in an explanation of a social macro-phenomenon must be composed of individuals, their properties, actions, and relations. In response to that proposal, it has been argued that it is unlikely to be the case that all social mechanisms are at the individual level (Vromen 2010). Therefore, the principle of mechanism-based explanations does not imply structural individualism.
Given that critique against analytical sociology’s initial position, different answers have been raised by analytical sociologists. Michael Schmid (2011), who maintains analytical sociology’s initial position, introduces a new argument in support of structural individualism. He argues that mechanism-based explanations require laws and that, in social sciences, laws are available only at the individual level. The problem is that his argument is based on a very questionable premise. Mechanistic explanations do not require laws. In fact, they have been developed as an alternative to those explanations that require laws.
Unlike Schmid, Ylikoski (2012) partially gives up analytical sociology’s initial position. He distinguishes between causal and constitutive mechanism-based explanations, and maintains analytical sociology’s initial position only regarding constitutive explanations. He considers that in constitutive mechanism-based explanations structural individualism must be fulfilled, because constitutively relevant entities, properties, activities, and relations are always at the individual level. Structural individualism, as has been noted, leads to analytical sociology’s initial position. However, it is unlikely to be the case that constitutively relevant entities, properties, activities, and relations are always at the individual level. Consider properties of parliaments (e.g. being conservative). Properties of political parties, which are not at the level of individuals, are constitutively relevant to them.
There is not a fixed level to which components of social mechanisms must always belong. Neither in causal nor in constitutive mechanism-based explanations must the components of social mechanisms always be at a certain fixed level. Nevertheless, a minimal requirement can be raised: a component of a social mechanism in a mechanistic explanation of a social macro-phenomenon must not have the explanandum phenomenon as a part (proper or improper) of it. This requirement applies to both causal and constitutive explanations.
References
-Hedström, Peter. 2005. Dissecting the Social: On the Principles of Analytical Sociology. New York: Cambridge University Press.
-Schmid, Michael. 2011. “The logic of mechanistic explanations in the social sciences”. In Analytical Sociology and Social Mechanisms, edited by Pierre Demeulenaere, 136-153. Cambridge: Cambridge University Press.
-Vromen, Jack. 2010. “MICRO-Foundations in Strategic Management: Squaring Coleman’s Diagram”. Erkenntnis 73: 365-83.
-Ylikoski, Petri. 2012. “Micro, Macro, and Mechanisms”. In The Oxford Handbook of Philosophy of Social Science, edited by Harold Kincaid, 21-45. New York: Oxford University Press.
Against Reductionism: Naturalistic methods in pragmatic cognitive sociology
ABSTRACT. We all need space for thought, for debating, for reading, for creating. Naturalistic studies of cognition are paramount in sociology (Schutz, 1973; Clark, 2008; Sennett, 2004; Muntanyola-Saura, 2014). However, the deeply influential culturalist branch of cognitive sociology (CCS) reduces cognition to a cognitivist psychological level (Lizardo & Strand, 2010). Individuals are supposed to react automatically, with no systematicity or reflexivity, to external stimuli from the social environment. The socialization process and the linguistic and conceptual content of thought become secondary. We pinpoint here the key ontological, epistemological, and methodological mistakes of CCS and provide some theoretical alternatives from pragmatic sources. We claim that it is possible for the social sciences to draw from a naturalistic paradigm on cognition without falling into reductionist or atomistic accounts. Following Kirsh (1995) and Searle (2010), the CCS cognitivist position is readily refutable. Decision-making as defined by CCS is an unconscious, rule-following, individual activity that is not directly shaped by social factors. However, cognition is not only a local psychological product; it is part of an institutional context. As Feyerabend (1987) puts it, cultural cognition is a historically legitimate practice of citizens who have been socially recognized as performers of a particular role. An individual steps out of the constrained functional role of a human trader or robot as defined by CCS by engaging in a conversation-friendly framework. Subjective cognition becomes legitimate in terms of judgement if it is publicly shared in argumentation. Moreover, conversation is an everyday tool for legitimation (Berger, 2016). Peirce's principle of habitualization (Talisse and Hester, 2004) builds everyday conversation and practice. The rules of the trade of real-life cognition play out in interactions where participants are trying to know more about the sensorial or conceptual object that is being apprehended (Lieberman, 2013). Cognitive actors are dependent on an institutional context that is always fleeting. So cognitive decision-making varies in space and time, and is based on reciprocity in a mutually determining relationship (DeNora, 2014). Ethnographic and ethnomethodological work on artists, experts and laypeople alike shows how cognition takes place in a circle of distributed attention. Judgement comes with a shared act of attention (Hennion, 2005). Dialogue is multimodal (Alac, 2017) and selective. Artists and experts develop a public language among themselves, and they do so by filtering and sharing their individual experience. In James' (1890: 139) words, consciousness is at all times a selecting agency. The rules of talking include evaluation and classification. Every conversation must either converge into consensus or diverge towards dissensus. Knowing is, in the first instance, knowing collectively rather than individually (Talisse and Hester, 2004). The inferences and/or judgements that we form progressively in interactions are transformed into structural accounts (Cicourel, 2002). In other words, the actuality of the exchange and the availability of judgement shape the sequentiality of cognition. Judgement happens in the present moment and does not necessarily preexist at the neuronal or individual level. In all, the pragmatic paradigm is able to capture the detail of cognition and break down its specific dimensions as a valid object for the social sciences.
Bibliography
Alac, M. (2017). We like to talk about smell: A worldly take on language, sensory experience, and the Internet, Semiotica, 215: 143–192
Cicourel, A. (2002). The Interaction of Discourse, Cognition and Culture. Discourse Studies, 8 (1), 25-29.
Clark, A. (2008). Supersizing the Mind. Oxford: Oxford University Press.
DeNora, T. (2014). Making Sense of Reality: Culture and Perception in Everyday Life. London: Sage.
Feyerabend, P. 1987. “Creativity - A Dangerous Myth”. Critical Inquiry 13 (4): 700-711.
James, W. (1890). Principles of Psychology. Harvard University Press.
Hennion, A. (2005). Pragmatics of Taste. In Jacobs, M. & Hanrahan, N. (eds.), The Blackwell Companion to the Sociology of Culture, Oxford: Blackwell, pp. 131-144.
Kirsh, D. (1995). The Intelligent Use of Space. Artificial Intelligence, 73: 31-68.
Lieberman,K. (2013). MORE Studies in Ethnomethodology: studies of the in vivo organization of sense. State University of New York Press.
Lizardo, O. & Strand, M. (2010). Skills, toolkits, contexts and institutions: Clarifying the relationship between different approaches to cognition in cultural sociology. Poetics, 38: 204–227.
Muntanyola-Saura, M. (2014). A cognitive account of expertise: Why Rational Choice Theory is (often) a Fiction. Theory & Psychology, 24:19-39.
Schütz, A. (1972). Collected Papers vol. I. The Netherlands: Springer.
Searle, J. R. (2010). Cognitivism. Unpublished Lecture Notes. Berkeley: University of California, Berkeley.
Sennett, R. (2004). Together. London: Penguin.
Talisse, R. B. & Hester, M. (2004). On James. Belmont, CA: Wadsworth-Thompson.
CANCELLED: Higher-order identity in the necessitism-contingentism debate in higher-order modal logic
ABSTRACT. Timothy Williamson (2013) defends necessitism, the thesis that everything exists necessarily: (NE) □∀x□∃y x = y, which is valid in the models with constant domains for standard quantified modal logic S5 (QML-S5). Still, several contingentist quantificational modal logics with variable domains (CQML) have been constructed, including Yang’s (2012) universally free modal logic (CQML-Y), which takes as its underlying system a free logic with specified syntactic stipulations on the occurrences of names and free variables in de re contexts. If we introduce a new predicate ‘Cx’ (‘x is chunky’, meaning that the object x is either abstract or concrete) and add to QML-S5 two axioms, (i) ∀x◊Cx (the ‘ontic-chunky constraint’) and (ii) ∀x□(Fx → Cx) (the ‘predicative-chunky constraint’), to get a necessitist quantificational modal logic (NQML), a mapping between CQML-Y and NQML can then be established, in that a sentence sN is a theorem of NQML if and only if its translation sc is a theorem of CQML-Y. Thus, quantified modal logic has no authority to decide between necessitism and contingentism.
Williamson further argues that higher-order modal logic favours necessitism over contingentism by comparing both in terms of their interaction with plausible candidate comprehension principles. In particular, an unrestricted modal comprehension principle would generate the higher-order analogue of the necessitist thesis, (NE2) □∀F□∃G(F ≈ G), where ≈ abbreviates some higher-order analogue of the first-order identity predicate. Intuitively, (NE2) involves higher-order identity of first-order predication. But Williamson takes it as an assertion about the necessity of an equivalence relation, characterized in terms of co-extensiveness. An unrestricted modal comprehension principle, (Comp-M) ∃X□∀x(Xx ↔ A), then ‘creates an awkward logical asymmetry between the first and higher orders for contingentism; typically, contingent objects have non-contingent haecceities’.
Nonetheless, Williamson’s defence is unconvincing in two respects. Firstly, one cannot claim that ‘F ≈ G’ expresses an equivalence relation, rather than identity, simply because the latter may bring in the sort of ontological commitment of higher-order quantification. To be fair to both sides, no ontological commitment beyond first order should be allowed. Also, Quine’s ontology-ideology distinction would no longer work. Williamson’s appeal to so-called metaphysical commitment seems ungrounded. I shall call ‘the predicative commitment’ the totality of whatever the higher-order variables range over, which would function in a way similar to the assignments for first-order variables in Tarski’s formal semantics. The second respect concerns unrestricted comprehension principles. Williamson argues that with these principles, identity holds even for non-chunky objects, which accordingly have haecceities. But this is unjustified. We may impose a further constraint, namely ∀X□∀x(Xx → Cx), a higher-order analogue of the predicative-chunky constraint. Equipped with this, a more promising comprehension principle would be: (Comp-MC) ∃X□∀x((Xx ∧ Cx) ↔ A), which claims that there must be some universal property that every chunky object has, and haecceity is one of them, i.e. ∀y□∃X□∀x((Xx ∧ Cx) ↔ x = y). The contingentists would accept this.
References
Williamson, T. 2013. Modal Logic as Metaphysics. Oxford: Oxford University Press.
Yang, Syraya C.-M. 2012. A universally-free modal logic. In Proceedings of the Eleventh Asian Logic Conference, T. Arai, Q. Feng, B. Kim, G. Wu, and Y. Yang (eds.). Singapore: World Scientific Publishing Co., pp. 159-180.
11:30
Jui-Lin Lee (Center for General Education and Dept. of CSIE, National Formosa University, Taiwan)
Model Existence in Modal Logics
ABSTRACT. According to [3], there are BCI-based propositional/predicate logics which satisfy the classical model existence property (every consistent set has a classical model). Such logics are weak: the usual deduction theorem (as a property) does not hold. In this talk we will study the model existence property in BCI-based (or BCIW-based) modal logics. Glivenko's Theorem with respect to the corresponding systems will also be investigated.
Reference:
[1] Jui-Lin Lee, Classical model existence theorem in propositional logics, in Perspectives on Universal Logic, edited by Jean-Yves Beziau and Alexandre Costa-Leite, pages 179-197, Polimetrica, Monza, Italy, 2007.
[2] Jui-Lin Lee, Classical model existence and left resolution, Logic and Logical Philosophy Vol. 16, No. 4, 2007, pages 333-352.
[3] Jui-Lin Lee, Classical Model Existence Theorem in Subclassical Predicate Logic. II, in Philosophical Logic: Current Trends in Asia, Proceedings of AWPL-TPLC 2016, edited by S.C.-M. Yang, K.Y. Lee and H. Ono, pages 197-212, Springer, 2017.
Andrey Pavlenko (Institute of Philosophy, Russian Academy of Sciences, Russia)
Frege semantics or why can we talk about deflation of false?
ABSTRACT. The followers of the deflationary model of truth [1;3;4] believe that, having easily coped with the task of eliminating "truth", they can finally solve the most difficult problem for them: the need to include truth as a value of propositions in scientific knowledge. Having done so, they are faced with the need to eliminate the "false". But this leads to a paradox. To justify this claim, let us recall Frege's statements:
(I). Every true sentence denotes the truth;
(II). Every false sentence denotes the false [2].
Now let us consider two types of sentences: (1) A "true sentence", in Frege's sense, is a sentence that itself has the sign ("to be true") which it designates as an object. The suggestion that the "true sentence", which "denotes the truth as an object", is itself not true leads us to absurdity. Now consider the second type of sentence, the "false" one, defined as the negation of a "true sentence" (p): (2) A "false sentence" is not a true sentence, that is, ¬p. A more complete definition follows:
(A) The sentence that «true sentences have the feature of "being true"» is a sentence in which some term p ("true sentence") itself has that feature X ("to be true") which it designates as an object.
Consequently, according to the definition of false sentences (2), it turns out that the sentence "every false sentence denotes the false" should look like this:
(B) "Any non-true sentence (¬p) denotes non-truth Y". We now ask two questions. Question (α): «Can the proposition "every true sentence denotes the truth" be true?» The answer is "yes, it can".
Question (β): «Can the proposition "every false sentence denotes the false" be false?» In this case we face a problem. To demonstrate it, let us take the quoted statement (II), "every false sentence denotes the false", as only the name of that false sentence which denotes the false. At the same time, the unquoted expression that every false sentence denotes the false will be regarded as the object judged by the (quoted) sentence-name. Let us now give our expression a stricter form in accordance with (2), "¬p".
We get: the sentence "every false sentence denotes the false" denotes every false sentence denoting the false if and only if "every false sentence denoting the false" is not a sentence denoting the truth.
Now, instead of the expression "denoting the truth", we substitute the definition of a sentence denoting the truth:
The sentence "every false sentence denotes the false" denotes every false sentence denoting the false if and only if "every false sentence denoting the false" is not every true sentence denoting the truth.
As a result, we arrive at a paradox: false sentences at the same time have the sign "to be false (not true)" (¬p) and do not have it (¬¬p); or, given that ¬¬p = p, in formal expression: ¬p ↔ p.
The final result is that it is impossible to agree with Frege on this matter: no such object as "the false" exists.
REFERENCES
[1] Ayer, A. J., 'The Criterion of Truth', Analysis, vol. 3, no. 3, January 1936, pp. 28-29.
[2] Frege, G., 'Sense and Reference', The Philosophical Review, vol. 57, no. 3, May 1948, pp. 209-230.
[3] Quine, W. V., Philosophy of Logic, Englewood Cliffs: Prentice Hall, 1970.
[4] Tarski, A., 'The Semantic Conception of Truth and the Foundations of Semantics', Philosophy and Phenomenological Research, vol. 4, no. 3, 1944, pp. 341-375.
11:30
Pavel Arazim (Czech Academy of Sciences, Instutute of Philosophy, Department of Logic, Czechia)
Are logical expressions ambiguous and why?
ABSTRACT. The relation between logical expressions of natural languages and those of formal logics has been an object of much study and many opinions. Frege already used an interesting simile, comparing formal logic to a microscope and everyday reasoning to the bare eye. New challenges, though, appear in the face of the plurality of logics. Are some logics constructed in such a way that their constants are more faithful to their counterparts in natural language? And is such faithfulness even a virtue, given Frege's metaphor? I will examine what might seem to be the most natural way of coping with this issue, namely the view that logical expressions of everyday language are ambiguous and the formal systems spell out their more definite and exact shapes (this view is present in the works of many authors, for example in Logical Pluralism by Beall and Restall). As this view has a lot of appeal, I want to highlight the ways in which it can be misleading and should at the very least be amended, if not abandoned.
The view I just sketched and which I want to criticize can be rendered more precise by saying that logical expressions of natural language are in fact disjunctions of various meanings. Thus the word 'not' is on some occasions used to mean classical negation, sometimes the intuitionistic one, and sometimes others yet. The fact, though, is that new logical systems keep being developed, and therefore, if this view were right, we could never be sure which possibilities there are in the mentioned disjunctions of meanings. Furthermore, questions arise about which of the disjuncts of the purported meaning one must know in order to be called a competent user of the given expression.
I therefore propose a different view of meaning in general, which will have interesting consequences for the meanings of logical expressions. I regard meaning as constituted by rules regulating the use of a given expression, and these rules are constituted by an interplay of normative attitudes of members of a given community of speakers (this understanding is originally due to Brandom and can also be found, for example, in Peregrin). These attitudes are continuous and never in fact accomplished. What we regard as right has to be retold anew in the contexts we happen to find ourselves in, and therefore the meanings of our expressions keep being developed. Regarding meaning as something static in a given state thus means distorting it in a significant way. Understanding a given expression essentially involves partaking in its further development, not just storing it in one's memory.
Applying this approach to logical expressions means seeing them rather as the very movement between the various shapes we know from formal logics, not as a conglomerate or disjunction of these shapes. Formal logics thus also attain a different role than the one usually ascribed to them. They are here to modify, rather than to capture our usage of logical expressions.
Beall, J. & Restall; Logical pluralism; Oxford University Press; 2006.
Robert Brandom; Making it explicit; Cambridge: Harvard University Press; 1994.
Jaroslav Peregrin; Inferentialism: Why rules matter, Palgrave Macmillan, 2014.
New Thoughts on Compositionality. Contrastive Approaches to Meaning: Fine’s Semantic Relationism vs. Tarski-Style Semantics
ABSTRACT. The paper is an assessment of compositionality from the vantage point of Kit Fine’s semantic relationist approach to meaning. This relationist view deepens our conception of how the meanings of propositions depend not only on the semantic features and roles of each separate meaningful unit in a complex but also on the relations that those units bear to each other. The telling feature of the formal apparatus of this Finean relationist syntax and semantics, viz. the coordination scheme, has some unexpected consequences that will emerge against the background of an analogy with the counterpart-theoretic semantics for modal languages.
The semantic-relationist program defends ‘referentialism’ in the philosophy of language; Fine holds that the semantic relations that have to be added to the assigned intrinsic values in our semantic theory, especially the relation which he calls ‘coordination’, can do much of the work of the (Fregean) senses. A relationist referentialism has certain important explanatory virtues which it shares with the Fregean position, but the former is better off ontologically than the latter, since it is not committed to the existence of senses.
The paper examines some philosophical presuppositions of semantic relationism and discusses in a critical manner why and how the other semantic systems (notably Tarski’s semantics) got the specification of the truth conditions and of the meaning conditions for the sentences of a language wrong.
References
Fine, Kit, Semantic Relationism, Wiley-Blackwell, 2009.
Lewis, David K., Philosophical Papers, Volume 1, Oxford University Press, 1983.
Tarski, Alfred, Logic, Semantics, Metamathematics, Hackett Pub Co Inc, 2nd Edition, 1983.
How to describe reality objectively: lessons from Einstein
ABSTRACT. I use Einstein’s theory of relativity to draw out some lessons about the defining features of an objective description of reality. I argue, in particular, against the idea that an objective description can be a description from the point of view of no-one in particular.
Philipp Lücke (Mathematisches Institut, Universität Bonn, Germany)
Definable bistationary sets
ABSTRACT. The results presented in this talk are motivated by the question whether sets that are constructed with the help of the Axiom of Choice can have simple definitions. In this talk, I want to focus on the definability of bistationary subsets of uncountable regular cardinals, i.e. stationary subsets of such cardinals whose complement is also stationary. I will first present results that show that the right interpretation of the above question is to ask whether canonical extensions of the axioms of $\mathsf{ZFC}$ imply that for certain uncountable regular cardinals $\kappa$, no bistationary subset of $\kappa$ is definable by a $\Sigma_1$-formula that only uses $\kappa$ and sets of hereditary cardinality less than $\kappa$ as parameters. Next, I will present results that show that extensions of $\mathsf{ZFC}$ through large cardinal assumptions or forcing axioms imply that no bistationary subset of the first uncountable cardinal $\omega_1$ is simply definable in this sense. Finally, I will present very recent work that can be used to establish equiconsistency results between the existence of infinitely many measurable cardinals and the non-existence of very simply definable bistationary subsets of successors of singular cardinals.
ABSTRACT. A traditional way to compare and relate logics (and mathematical theories) is through the definition of translations/interpretations/embeddings. In the late twenties and early thirties of the last century, several such results were obtained concerning some relations between classical logic (CL), intuitionistic logic (IL) and minimal logic (ML), and between classical arithmetic (PA) and intuitionistic arithmetic (HA). In 1925 Kolmogorov proved that classical propositional logic (CPL) could be translated into intuitionistic propositional logic (IPL). In 1929 Glivenko proved two important results relating CPL to IPL. Glivenko’s first result shows that A is a theorem of CPL iff ¬¬A is a theorem of IPL. His second result establishes that we cannot distinguish CPL from IPL with respect to theorems of the form ¬A. In 1933 Gödel defined an interpretation of PA into HA, and in the same year Gentzen defined a new interpretation of PA into HA. These interpretations/translations/embeddings were defined as functions from the language of PA into some fragment of the language of HA that preserve some important properties, like theoremhood. In 2015 Dag Prawitz (see [3]) proposed an ecumenical system, a codification where classical logic and intuitionistic logic could coexist “in peace”. The main idea behind this codification is that classical logic and intuitionistic logic share the constants for conjunction, negation and the universal quantifier, but each has its own disjunction, implication and existential quantifier. Similar ideas are present in Dowek [1] and Krauss [2], but without Prawitz’ philosophical motivations. The aims of the present paper are: (1) to investigate the proof theory and the semantics for Prawitz’ Ecumenical system (with a particular emphasis on the role of negation), (2) to compare Prawitz’ system with other ecumenical approaches, and (3) to propose new ecumenical systems.
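For the reader's convenience, the Gödel-Gentzen negative translation mentioned above can be written out in its standard textbook form (recalled here as background, not quoted from the cited sources):

```latex
P^{N} := \neg\neg P \ (P\ \text{atomic}), \qquad \bot^{N} := \bot,\\
(A \wedge B)^{N} := A^{N} \wedge B^{N}, \qquad (A \to B)^{N} := A^{N} \to B^{N},\\
(A \vee B)^{N} := \neg(\neg A^{N} \wedge \neg B^{N}), \qquad (\neg A)^{N} := \neg A^{N},\\
(\forall x\, A)^{N} := \forall x\, A^{N}, \qquad (\exists x\, A)^{N} := \neg\forall x\, \neg A^{N}.
```

Gödel's embedding result then takes the form: PA ⊢ A if and only if HA ⊢ A^N.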
References
1. Dowek, Gilles, On the definition of the classical connectives and quantifiers, Why is this a proof? (Edward Haeusler, Wagner Sanz and Bruno Lopes, editors), College Books, UK, 2015, pp. 228 - 238.
2. Krauss, Peter H., A constructive interpretation of classical mathematics, Mathematische Schriften Kassel, preprint No. 5/92 (1992)
3. Prawitz, Dag, Classical versus intuitionistic logic, Why is this a proof? (Edward Haeusler, Wagner Sanz and Bruno Lopes, editors), College Books, UK, 2015, pp. 15 - 32.
Modal Negative Translations as a Case Study in The Big Programme
ABSTRACT. This talk is about negative translations (Kolmogorov, Gödel-Gentzen, Kuroda, Glivenko and their variants) in propositional logics with a unary normal modality. More specifically, it addresses the question of whether negative translations as a rule faithfully embed a classical modal logic into its intuitionistic counterpart. As it turns out, even the Kolmogorov translation can go wrong with rather natural modal principles. Nevertheless, one can isolate sufficient syntactic criteria for axioms (“enveloped implications”) ensuring adequacy of well-behaved (or, in our terminology, “regular”) translations. Furthermore, a large class of computationally relevant modal logics, namely logics of type inhabitation for applicative functors (a.k.a. “idioms”), turns out to validate the modal counterpart of the Double Negation Shift, thus ensuring adequacy of even the Glivenko translation. All the positive results mentioned above can be proved purely syntactically, using the minimal natural deduction system of Bellin, de Paiva and Ritter extended with Sobociński-style additional axioms/combinators. Hence, mildly proof-theoretic methods can be surprisingly successfully used in “the Big Programme” (to borrow F. Wolter and M. Zakharyaschev's phrase from the "Handbook of Modal Logic").
Most of this presentation is based on results published with my former students, who provided a formalization in the Coq proof assistant. In the final part, however, I will discuss variants of a semantic approach based either on a suitable notion of subframe preservation or on a generalization of Wolter’s “describable operations”. An account of this semantic approach, and a comparison with the scope of the syntactic one, remains as yet unpublished.
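As a point of reference (the talk's exact formulations may differ), the Glivenko translation and the modal Double Negation Shift on which its adequacy turns can be written as follows.

\[
A \;\mapsto\; \neg\neg A \qquad \text{(Glivenko translation)}
\]
\[
\Box\neg\neg A \;\to\; \neg\neg\Box A \qquad \text{(modal Double Negation Shift)}
\]

Classically the two sides of the shift are equivalent; intuitionistically the shift is a genuine extra principle, which is why its validity in a modal logic is what makes even the crude double-negation translation adequate.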
ABSTRACT. Modern set theory presents us with the very curious case of a mathematical discipline whose practice has been decidedly pluralistic in flavor in recent decades. The typical work of a set theorist today consists of starting from a model of set theory and building up different models which not only can be, but usually are, incompatible with one another in the sense of fulfilling mutually inconsistent mathematical statements. This practice is so mathematically fruitful that nowadays there is a whole cosmos of set-theoretic models that are very well researched and worked on, but which contradict each other to the point that they seem to represent different “kinds of set theories”. Recent programs in the philosophy of set theory try to resolve this situation in the most diverse ways, from claiming a pluralistic platonism to trying to return to the picture of a single, “true” model of mathematics.
In this talk we want to explain how such a pluralistic practice evolved in the 1960s, why and how it has been not only tolerated but actively pursued until now, and to analyze various strategies that have been proposed to deal with the scientific pluralism implicit in this practice.
14:30
Jody Azzouni (Department of Philosophy, United States)
Informal Rigorous Mathematics and its Logic
ABSTRACT. Mathematical practice in all its forms, and despite formidable technicalities, is a natural-language practice. Thus, the logic(s) of that practice is implicit; and, in turn—like claims about the logic(s) of natural language—what logic or logics are operative in mathematics is an empirical question. There is a normative question in the neighborhood. Regardless of what the descriptive situation vis-à-vis the logic(s) of mathematics is discovered to be, we can nevertheless ask the further question: what logic(s) should mathematics be governed by? This further question requires clarity about the function(s) of mathematics. It’s important to realize that although mathematics is a natural-language practice, the answers to both the descriptive and the normative questions about mathematics and about natural languages generally needn’t be the same.
The gold standard for logic, I suggest, is Frege’s. If we compare some form of syllogistic logic, or a one-place predicate logic, with the standard predicate logic, we find an enormous advance in terms of formalizing the reasoning of mathematical proof: the project of characterizing classical mathematical reasoning seems solved by the use of this logic. This is because relations (polyadic predicates) can be represented in the standard predicate calculus, and they are crucial to mathematical reasoning. In making this claim, it needs to be shown that classical higher-order logics don’t provide characterizations of mathematical theorizing that go beyond what’s available in the first-order predicate calculus—but this can be shown. In particular, categoricity results in a first-order logical system are always available even without those results being true of such systems.
The important question is whether a similar advance is possible, taking us beyond the standard predicate calculus. A tempting possibility is afforded by the many examples in the history of mathematics where apparently inconsistent principles were employed. (Euler’s free and easy ways with infinite series are often cited; infinitesimal reasoning is another example.) I claim that there isn’t anything here that motivates characterizing mathematical practice according to non-classical logics that allow true contradictions, or the like.
Pluralist conceptions of the logic(s) of mathematics, however, can be motivated by considerations independent of the foregoing. This is because of the global reach of mathematics. One can simply study in mathematics any subject area subject to any logic one imagines: set theory in a multivalued logic, intuitionistic real analysis, and so on. Furthermore, although it’s not uncommon to study these subject areas from a “classical” viewpoint, that isn’t required. To speak somewhat picturesquely, one can use an intuitionistic metalanguage to derive results about intuitionistic set theory.
Applying mathematics to the empirical sciences—I claim—is a different matter. Here the mathematics must function under one logical umbrella: whichever logic is operative in the sciences. I claim that the holism typical of the sciences—that results from any area may be applied to any other—requires the over-arching logic to be one, and (as of now, at least) to be classical. Pluralism in mathematics itself is a truism; pluralism in applied mathematics is forbidden.
ABSTRACT. Interdisciplinarity is increasingly considered recommendable for pursuing the goals of science. It is encouraged and supported by science policy makers and research funding agencies, while the rest of the institutional structure of science is not perfectly prepared to accommodate it. Interdisciplinarity is often conceived and recommended in terms of programmatic ideals that depict it as symmetric and equal between disciplines. These relationships can be considered in many ways, such as in terms of symmetries in collaboration, understanding, appreciation, contribution, and benefit. These symmetries are often presented as virtues of genuine or otherwise advisable or successful interdisciplinarity. Like interdisciplinarity in general, these claims about symmetry mostly remain under-examined, and they are furthermore much of the time mistaken. Philosophy of science is in a position to provide some useful community service on the matter. I will make two claims and sketch arguments for them.
[1] Asymmetries abound between disciplines and research fields, and they are vastly diverse. Just a glance at actual scientific practice reveals major asymmetries. Considering the simple case of just two disciplines D1 and D2, the possible asymmetries between them range from instrumental asymmetries, wherein D1 provides D2 with techniques, principles, auxiliary theories, or evidence; to critical asymmetries, wherein D1 sets out, or is used, to criticize or revise the contents or ways of functioning of D2; to imperialistic asymmetries, wherein D1 dominates or invades or subsumes D2; to discriminatory asymmetries, wherein D1 dismisses D2 or discriminates against D2. Naturally, the boundaries between such asymmetries are not sharp; and they can be divided into further sub-types, depending on the precise relationship between D1 and D2.
[2] Each such asymmetry requires a distinct normative evaluation in terms of (ultimate) epistemic advantage. Another diversity complicates this task: that of epistemic advantage itself. Given that there are numerous kinds and criteria of epistemic advantage, and that they come in different degrees of (in)directness, no generalized evaluation of either symmetry or asymmetry between disciplines is available. Many asymmetries are not just tolerable but recommendable, while others are problematic. Such judgements are however not easy to make, as yet another complication encumbers the epistemic evaluation, namely the involvement of disciplinary emotions in interdisciplinary relations.
Examples come from inherently interdisciplinary disciplines such as archaeology and sustainability science wherein many kinds of asymmetry prevail between natural and social sciences; and from applications of rational choice theory and game theory across social sciences and humanities. These cases also illustrate the epistemic and emotional contestability of many claims about interdisciplinary asymmetries.
How scientists are brought back into science – The error of empiricism
ABSTRACT. This paper aims to contribute to the question whether human-made scientific knowledge, and the scientist’s role in developing it, will remain crucial, or whether data models automatically generated by machine-learning technologies can replace scientific knowledge produced by humans.
Influential opinion-makers claim that the human role in science will be taken over by machines. Chris Anderson’s (2008) provocative essay, The End of Theory: The Data Deluge Makes the Scientific Method Obsolete, will be taken as an exemplary expression of this opinion.
The claim that machines will replace human scientists can be investigated from several perspectives (e.g., ethical, ethical-epistemological, practical and technical). This chapter focuses on epistemological aspects concerning ideas and beliefs about scientific knowledge. The approach is to point out the epistemological views supporting the idea that machines can replace scientists, and to propose a plausible alternative that explains the role of scientists and human-made science, especially in view of the multitude of epistemic tasks in practical uses of knowledge. Whereas philosophical studies of machine learning often focus on reliability and trustworthiness, the focus of this chapter is on the usefulness of knowledge for epistemic tasks. This requires distinguishing between epistemic tasks for which machine learning is useful and those that require human scientists.
In analyzing Anderson’s claim, a double move is made. First, it is made plausible that the fundamental presuppositions of empiricist epistemologies give reason to believe that machines will ultimately make scientists superfluous. Next, it is argued that empiricist epistemologies are deficient because they neglect the multitude of epistemic tasks of and by humans, for which humans need knowledge that is comprehensible to them. The character of machine-learning technology is such that it does not provide such knowledge.
It will be concluded that machine learning is useful for specific types of epistemic tasks such as prediction, classification, and pattern recognition, but that for many other types of epistemic tasks —such as asking relevant questions, analysing problems, interpreting problems as being of a specific kind, designing interventions, and ‘seeing’ analogies that help to interpret a problem differently— the production and use of comprehensible scientific knowledge remains crucial.
References:
Abu-Mostafa, Y.S., Magdon-Ismail, M., and Lin, H-T. (2012). Learning from data. AMLbook.com
Alpaydin, E. (2010). Introduction to machine learning. The MIT Press: Cambridge.
Anderson, C. (2008). The End of Theory: The Data Deluge Makes the Scientific Method Obsolete. Wired Magazine June 23, 2008. Retrieved from: https://www.wired.com/2008/06/pb-theory/
Bogen, J., & Woodward, J. (1988). Saving the Phenomena. The Philosophical Review, 97(3), 303-352. doi:10.2307/2185445
Boon, M., & Knuuttila, T. (2009). Models as Epistemic Tools in Engineering Sciences: a Pragmatic Approach. In A. Meijers (Ed.), Philosophy of technology and engineering sciences. Handbook of the philosophy of science (Vol. 9, pp. 687-720): Elsevier/North-Holland
Humphreys, P. (2009). The philosophical novelty of computer simulation methods. Synthese, 169(3), 615-626. doi:10.1007/s11229-008-9435-2
Nersessian, N. J. (2009). Creating Scientific Concepts. Cambridge, MA: MIT Press.
Suppe, F. (1974). The Structure of Scientific Theories (second printing, 1979). Urbana: University of Illinois Press.
Suppe, F. (1989). The Semantic Conception of Theories and Scientific Realism. Urbana and Chicago: University of Illinois Press.
Suppes, P. (1960). A Comparison of the Meaning and Uses of Models in Mathematics and the Empirical Sciences. Synthese, 12, 287-301.
Van Fraassen, B. C. (1977). The pragmatics of explanation. American Philosophical Quarterly, 14, 143-150.
Van Fraassen, B. C. (1980). The Scientific Image. Oxford: Clarendon Press.
Modern connexive logic started in the 1960s with seminal papers by Richard B. Angell and Storrs McCall. Connexive logics are orthogonal to classical logic insofar as they validate certain non-theorems of classical logic, namely Aristotle's theses, ¬(A → ¬A) and ¬(¬A → A), and Boethius' theses, (A → B) → ¬(A → ¬B) and (A → ¬B) → ¬(A → B).
Systems of connexive logic have been motivated by considerations on a content connection between the antecedent and succedent of valid implications and by applications that range from Aristotle's syllogistic to Categorial Grammar and the study of causal implications. Surveys of connexive logic can be found in:
*Storrs McCall, "A History of Connexivity", in D.M. Gabbay et al. (eds.), Handbook of the History of Logic. Volume 11. Logic: A History of its Central Concepts, Amsterdam, Elsevier, 2012, pp. 415-449.
*Heinrich Wansing, "Connexive Logic", in Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Fall 2014 Edition).
There is also a special issue on connexive logics in the IfCoLog Journal of Logics and their Applications. The entire issue is available at: http://collegepublications.co.uk/ifcolog/?00007
As we are observing growing interest in topics related to connexive logics, attracting attention from researchers working in different areas within philosophical logic, the symposium aims at discussing directions for future research in connexive logics. More specifically, we will have talks related to modal logic, many-valued logic, probabilistic logic, relevant (or relevance) logic and conditional logic, among others. There will also be some connections to experimental philosophy and philosophy of logic.
Niki Pfeifer (Department of Philosophy, University of Regensburg, Germany)
Are connexive principles coherent?
ABSTRACT. While there is a growing interest in studying connexive principles in philosophical logic, little is known about their behaviour under uncertainty. In particular: can connexive principles be validated in probability logic? Among various approaches to probability, I advocate the coherence approach, which traces back to Bruno de Finetti. In my talk, I discuss selected connexive principles from the viewpoints of coherence-based probability logic (CPL) and the theory of conditional random quantities (CRQ). Both CPL and CRQ are characterised by transmitting uncertainty coherently from the premises to the conclusions. Roughly speaking, the theory of CRQ generalises the notion of conditional probability to deal with nested conditionals (i.e., conditionals in antecedents or consequents) and compound conditionals (like disjunctions and conjunctions of conditionals) without running into Lewis' triviality results. Within the frameworks of CPL and of CRQ, I investigate which connexive principles follow from the empty premise set. Specifically, I explain why Boethius' theses, Abelard's First Principle, and in particular Aristotle's theses do hold under coherence and why Aristotle’s Second Thesis does not hold under coherence. Although most of these connexive principles have a high intuitive appeal, they are neither valid in classical logic nor in many other logics (including those which were custom-made for conditionals). In the CPL and CRQ frameworks for uncertain conditionals, however, intuitively plausible connexive principles can be validated. The psychological plausibility of the proposed approach to connexivity is further endorsed by experimental psychological studies: an overview of the empirical evidence will conclude my talk.
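To give a flavour of the kind of result at issue (my gloss, not necessarily the speaker's exact formulation): read a conditional A → B as the conditional event B|A, read negation of a conditional as negating its consequent, and let p be a coherent conditional probability assessment. Then a version of Aristotle's thesis comes out as:

\[
p(\neg A \mid A) = 0 \quad\text{for every coherent } p, \qquad\text{hence}\qquad p(A \mid A) = 1 ,
\]

so the conditional "if A then not-A" is maximally disbelieved and its negation maximally believed, whatever the (possibly zero) probability of A. This is the sense in which such principles can hold under coherence even though they are not classical tautologies.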
14:30
Luis Estrada-González (Institute for Philosophical Research, National Autonomous University of Mexico (UNAM), Mexico) Claudia Tanús (Instituto de Investigaciones Filosóficas - UNAM, Mexico)
ABSTRACT. Connexive logics are motivated by certain ways in which the connection between antecedent and consequent in a true conditional is perceived, especially in one that is logically valid. According to Routley, if the connection that needs to hold between antecedent and consequent in a true conditional is understood as content connection, and if a connection in content is achieved when antecedent and consequent are mutually relevant, then “the general classes of connexive and relevant logics are one and the same”. However, it is well known that, in general, connexive and relevant logics are mutually incompatible. For instance, there are results showing that the logic resulting from adding Aristotle’s Thesis to B is contradictory, and that R plus Aristotle’s Thesis is trivial. Thus, even though there is a sense, probably a lenient one, of ‘connection between antecedent and consequent of a true (or logically valid) conditional’ that allows one to group connexive and relevance logics in the same family, what Sylvan at some point called ‘sociative logics’, the types of connection that each kind of logic demands are very different from one another, to the point of incompatibility.
A fruitful way to study logics in the relevant family has been by means of certain principles about the form that theorems should have, especially the purely implicative ones, or about the relevant characteristics that an acceptable logically valid proof should have. This paper is part of a broader ongoing project in which we investigate whether we can pin down the similarities and differences between connexivity and relevance through principles of this kind. The general hypothesis of the broader project is that we can indeed do so; in this paper, we will only show that some connexive logics imply principles that could be considered extremely strong versions of the Variable Sharing Property. Furthermore, we will show that there are conceptual connections worth considering between that type of principle and some characteristics of certain connexive logics, such as the number of appearances of a variable in an antecedent, in a consequent, or in a theorem in general.
This is important because relevance logics and connexive logics have some common goals, such as finding a closer bond between premises and conclusions, but at the same time, as we have said, there are strong differences between both families of logics.
ABSTRACT. Margaret Cavendish is insistent that matter and many of its qualities are 'all one thing', and that there are no accidents, forms or modes in nature. But it is very hard to see what she means by such claims, especially given the great variety of qualities they concern. In this paper, I discuss Cavendish’s arguments that various qualities are 'all one thing' with matter, and offer interpretations of this claim first, for all qualities but motion, and second, for motion. I consider passages that suggest that qualities are identical with matter that is distinct from the matter that bears them, but argue that ultimately, Cavendish thinks that all qualities besides motion are nothing over and above patterns in the matter that bears them. I close with an account of what Cavendish means when she writes that motion is identical with body. Cavendish’s positions and arguments, I show, are situated interestingly among others in the history of philosophy, but together they represent a unique and radical approach to reductionism about accidents.
Theory of Impetus and its Significance to the Development of Late Medieval Notions of Place
ABSTRACT. In the discourse around theories explaining scientific progress, the Natural philosophy of the Late Medieval period is often cast in the role of apologetics. For philosophers of science, with their repudiation of metaphysics, the task of providing a rational reconstruction of the way scientific progress has occurred is nigh impossible. Even explanations such as the Popperian and the Kuhnian strain under great difficulty and provide only partly satisfactory results. In his "Logik der Forschung" (1934), Karl Raimund Popper argues that metaphysics plays an accidental part in the emergence of new scientific ideas. Correspondingly, in "The Structure of Scientific Revolutions" (1962), Thomas Kuhn explains the development of Natural Science by carrying out theoretical interpretations and classifications of empirical facts without their metaphysical premises, but ignores changes of worldviews. As a result, he comes to the conclusion that Natural Science was formed under the influence of erroneous interpretations of Aristotelian Natural Philosophy made by Medieval Natural philosophers. These are some of the reasons why medievalists are still made to defend Late Medieval Natural philosophy against shallow convictions that, at medieval universities, nothing of any significance to contemporary science and philosophy took place.
Seeking to render a fragment of a rational and coherent reconstruction of the development of Natural philosophy, I investigate one idea of Late Medieval philosophy, the explanation of motion (impetus), and its relation to other ideas, such as the principle of parsimony, often associated with William of Ockham, and Nicole Oresme's notion of place. I base this inquiry on the presumption that two ancient Greek philosophical approaches toward the Natural world, the Aristotelian and the Platonic, were of crucial significance to the Medieval notion of Natural science. Not only do both programs have metaphysical underpinnings, they also have theological premises. The main statement of the paper holds that the ideas of Late Medieval Natural philosophy have a decisive significance for the development of modern Natural science, rather than an accidental or negative one. Thus, following the two aforementioned programs, the premises of Jean Buridan's theory of impetus are reconstructed. Then, the debates over the necessity of empty space are presented in the works of Nicole Oresme, William of Ockham and Jean Buridan, and finally, the pivotal role of these ideas in the modifications of Natural philosophy is ascertained.
ABSTRACT. In recent years, the contributions of cybernetics to the development of evolutionary developmental (evo-devo) biology have increasingly been recognised. The particular theories and models developed during the flourishing of cybernetics in the mid-20th century laid the foundation for the systems approach, which is nowadays widely and fruitfully employed in molecular biology, genetics, genomics, immunology, developmental biology, and ecology. Nevertheless, no philosopher or biologist seems to know what cybernetics is, and often what they think they know they dislike: cybernetics is often identified with a reductive ‘machine conception’ of the organism and an engineering view of biology. However, once we understand what cybernetics is really about, we see that such conceptions are mistaken and, moreover, that a cybernetic perspective can shed significant light on major discussions in current biology and its philosophy: in particular, on the fate of the Modern Synthesis in light of later developments in biology, the purpose and nature of evolutionary developmental biology, and disputes between those who emphasize a mechanistic conception of biology and ‘processualists’. Thus, my current research has two objectives: the first is to clarify the relationship between cybernetics and reductionism, and the second is to demonstrate the relevance of cybernetics to evo-devo. To accomplish the first objective, I will provide positive arguments for the thesis that, in contrast to the predominant view, cybernetic explanations within biology, when properly understood, are non-reductionistic and do not have, at their core, any heavyweight metaphysical commitment to the mechanistic nature of life. To accomplish the second objective, I will disentangle the nature of cybernetics and reappraise its history in order to show how it offers new tools for approaching well-known neo-Darwinian controversies that have emerged in recent years.
Bridging Across Philosophy of Science and Scientometrics: Towards an Epistemological Theory of Citations
ABSTRACT. Citations are a crucial aspect of contemporary science (Biagioli 2018). Citation-based indicators such as the Journal Impact Factor are commonly employed by scientists to choose the publication venue for their articles, whereas indicators such as the h-index are used (and frequently misused) by university administrators to monitor and evaluate the research performance of academics. The implementation of performance-based research evaluation systems in many European and extra-European countries has further speeded up the proliferation of metrics in which citations are often a crucial component. Thus, scientometrics, the discipline that investigates the quantitative dynamics of citations in science, has risen from relative obscurity to play a major, and often much criticized, role within the science system (De Bellis 2014).
Unfortunately, citations have mostly escaped the attention of philosophers of science, maybe because they are relegated to the “context of discovery” of science (Leydesdorff 1998). Philosophers of science have not yet joined the discussion around a comprehensive theory of citations. This paper aims at beginning to close the gap between scientometrics and philosophy of science by advancing an epistemological theory of citations as a bridge between the two fields.
Firstly, I will present the two main competing theories of citation developed in the sociology of science: the normative theory and the socio-constructivist theory, grounded respectively in the normative and in the social constructivist approach in the sociology of science.
Secondly, I will show that these theories share the same explanandum as a target: they both assume that the key aim of a theory of citation is to uncover the motivations that scientists have for citing.
Thirdly, I will propose to shift the focus from the behavior of the scientists to the epistemological function of citations within scientific documents (Petrovich 2018). I will argue that citations can be considered as information channels between the citing document and the cited texts. In this way, the focus is no more on the motivations of scientists for citing (sociological perspective), but on the dynamic of scientific information that is made visible by citations (epistemological perspective).
Lastly, I will claim that the transformation of scientific information into scientific knowledge can be studied by analyzing the dynamics of the citation network of scientific documents. Drawing on the Kuhnian distinction between pre-paradigmatic, normal, and revolutionary science, I will argue that different citation structures characterize each of these phases. Thus, I will conclude that citation analysis is an important tool to investigate the epistemological development of scientific fields from the pre-paradigmatic to the normal period.
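The kind of document-level analysis proposed in this last step can be prototyped with standard network tools; the sketch below (purely illustrative, with made-up records and the networkx library, not anything from the paper) builds a small citation digraph and tracks how citations accumulate on one document over time.

import networkx as nx

# Hypothetical records: (citing paper, cited paper, year of the citing paper).
citations = [
    ("B", "A", 2001), ("C", "A", 2003), ("C", "B", 2003),
    ("D", "A", 2005), ("D", "C", 2005), ("E", "C", 2007),
]

G = nx.DiGraph()
for citing, cited, year in citations:
    G.add_edge(citing, cited, year=year)

# In-degree = number of citations received by a document.
print(dict(G.in_degree()))

# Citations received by "A" per year, a crude proxy for the dynamics of its uptake.
per_year = {}
for citing, cited, data in G.edges(data=True):
    if cited == "A":
        per_year[data["year"]] = per_year.get(data["year"], 0) + 1
print(per_year)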
References:
Biagioli, Mario. 2018. ‘Quality to Impact, Text to Metadata: Publication and Evaluation in the Age of Metrics’. KNOW: A Journal on the Formation of Knowledge 2 (2): 249–75. https://doi.org/10.1086/699152.
De Bellis, Nicola. 2014. ‘History and Evolution of (Biblio)Metrics’. In Beyond Bibliometrics: Harnessing Multidimensional Indicators of Scholarly Impact, 23–44. London: MIT Press.
Petrovich, Eugenio. 2018. ‘Accumulation of Knowledge in Para-Scientific Areas: The Case of Analytic Philosophy’. Scientometrics 116 (2): 1123–51. https://doi.org/10.1007/s11192-018-2796-5.
14:30
Magdalena Malecka (Stanford University & University of Helsinki, United States)
CANCELLED: Why the behavioural turn in policy takes behavioural science wrong and what it means for its policy relevance
ABSTRACT. Insights from the behavioural sciences are currently reshaping much public policy around the world (Jones, Pykett & Whitehead 2013). Behavioural approaches are drawn upon in a variety of policy fields such as health and environmental policy, labour regulations, and consumer protection law. Behavioural policy units have been established worldwide, in the UK, the US, Germany, France, the Netherlands, Australia, Japan and Singapore, as well as at the World Bank, within different teams at the United Nations, at the OECD, and at the European Commission.
The application of behavioural research to policy is promoted as a way of making policies more effective, that is, of formulating policies which achieve policymakers’ aims (Thaler & Sunstein 2008; Shafir 2012). Proponents of the application of the behavioural sciences to policy believe that behavioural research provides the scientific evidence needed to design effective policies. In particular, they claim that the subset of the behavioural sciences they rely on (cognitive psychology and behavioural economics) offers an ‘adequate’, ‘accurate’, or ‘realistic’ account of behaviour and therefore should be the basis of policy design. They are wrong, however. There is no adequate or accurate account of human behaviour that any single approach within the behavioural sciences could provide.
In her most recent book (2013) Helen Longino presents an epistemological, ontological and social analysis of five approaches to studying aggressive and sexual behaviour, adopting a social epistemological methodology to understand the differences and similarities between them. Her work is an inquiry into the kind of knowledge that these sciences provide about human behaviour. Longino endeavours to understand what we can learn about the causal factors of behaviour from the accumulated knowledge produced by diverse approaches within the behavioural sciences. She argues that each approach represents the causal space differently and we cannot put them all together to achieve a complete causal explanation of a given behaviour. Each approach gives us only partial knowledge.
Longino’s analysis is important for understanding why the pluralism of behavioural findings is a challenge for practical applications of the behavioural sciences. The type of incommensurability that Longino demonstrates in her work (and which, as I will argue and show, characterises most behavioural research) calls into question the idea that there are well-justified epistemic reasons for treating one of the behavioural approaches as the ‘adequate’ or ‘accurate’ one. This means that we have to completely rethink the widespread view on the ways in which findings from the behavioural sciences could, and should, inform policy. I intend to suggest how this could be done.
References
Jones, R., Pykett, J., Whitehead, M. (2013), Changing behaviours. On the rise of the psychological state. Cheltenham: Edward Elgar
Longino H. (2013), Studying human behaviour. How scientists investigate aggression and sexuality, University of Chicago Press
Shafir, E. (2012). The behavioral foundations of public policy. Princeton University Press.
Thaler, R.H. and Sunstein, C.R. (2008). Nudge: Improving decisions about health, wealth, and happiness. New Haven: Yale University Press
Piotr Błaszczyk (Institute of mathematics, Pedagogical University of Cracow, Poland, Poland)
On how Descartes changed the meaning of the Pythagorean theorem
ABSTRACT. 1. Euclid's Elements, proposition I.47, contains the ancient Greek version of the Pythagorean theorem (PT), namely: "In a right-angled triangle, the square on the side subtending the right angle is equal to the [sum of the] squares on the sides surrounding the right angle". While the interpolation "sum of" characterizes modern translations, Greek mathematics referred not to a "sum of" figures but to the figures themselves, as in the phrase "the square [...] is equal to the squares" (see [2] and [3]).
The proof of proposition I.47 requires a theory of equal figures, designed to compare non-congruent figures. The theory starts with proposition I.35 and culminates in the construction of squaring a polygon, offered in proposition II.14. As for its foundations, it builds on the axioms named Common Notions and on the so-called geometrical algebra developed throughout Book II. In modern mathematics, all these principles are covered by the axioms for an ordered field. In [4] Hilbert shows that Euclid's theory of equal figures can be developed on a plane $\mathbb{F}\times\mathbb{F}$, where $\mathbb{F}$ is an ordered field (Archimedean or non-Archimedean) closed under the square root operation.
In sum, the Greek version of PT can be represented by the scheme $\Box,\Box=\Box$ (the two squares are equal to the square), while the underlying theory is that of equal figures.
2. In modern mathematics, PT is represented by the algebraic formula $a^2+b^2=c^2$, where $a, b, c$ stand for real numbers that are the measures (lengths) of the sides of a right-angled triangle. In this case, the underlying theory is the arithmetic of real numbers, accompanied by the tacit assumption that every segment is characterized by a real number; indeed, [4] provides hints on how to prove that claim rather than a complete proof.
3. We show that [1] contains a third version of PT, one that lies in between the ancient and the modern one. While Descartes, for the first time in history, applied the formula $a^2+b^2=c^2$ with $a, b, c$ representing the sides (line segments) of a right-angled triangle, the underlying theory is the arithmetic of segments rather than the arithmetic of real numbers.
To be sure, Descartes provides no explicit discussion of this new version of PT; the formula itself is applied implicitly, via references to diagrams.
We show that although Descartes' development is founded on Euclid's theory of similar figures (and thus implicitly on the Archimedean axiom), the resulting field of segments can be characterized as a real closed field (Archimedean or non-Archimedean).
4. We discuss the question of the unity of PT: on what grounds do these three versions, developed in three different mathematical contexts (the theory of equal figures, the arithmetic of segments, and the arithmetic of real numbers), represent one and the same theorem? We argue that the implicit unity rests on a diagram representing a right-angled triangle.
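For orientation, here is the standard construction behind an arithmetic of segments of the kind at issue (a textbook reconstruction, not a quotation from La Géométrie): once a unit segment is fixed, products and square roots of segments are obtained from proportions between similar triangles, so that $a^2+b^2=c^2$ can be read as an identity between segments.

\[
\text{Fix a unit } 1. \quad \text{Given segments } a, b: \qquad
\frac{1}{a}=\frac{b}{x} \;\Rightarrow\; x = ab,
\qquad
\frac{1}{y}=\frac{y}{a} \;\Rightarrow\; y = \sqrt{a},
\]

where the first proportion is realized by a pair of similar triangles and the second by the altitude of a right triangle inscribed in a semicircle over the segment of length $1+a$. With these operations the segments form an ordered field closed under square roots, which is the sense in which the formula can hold of segments directly.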
References
1. Descartes, R., La Géométrie, Leiden 1637.
2. Euclid, Elements; http://farside.ph.utexas.edu 2007.
3. Heiberg, J., Euclidis Elementa, Lipsiae 1883-1888.
4. Hilbert, D., Grundlagen der Geometrie, Leipzig 1903.
ABSTRACT. Ever since Euclid defined a point as "that which has no part" it has been widely assumed that points are necessarily unextended. It has also been assumed that, analytically speaking, this is equivalent to saying that points or, more properly speaking, degenerate segments—i.e. segments containing a single point—have length zero. In our talk we will challenge these assumptions. We will argue that neither degenerate segments having null lengths nor points satisfying the axioms of Euclidean geometry implies that points lack extension. To make our case, we will provide models of ordinary Euclidean geometry where the points are extended despite the fact that the corresponding degenerate segments have null lengths, as is required by the geometric axioms. The first model will be used to illustrate the fact that points can be quite large—indeed, as large as all of Newtonian space—and the other models will be used to draw attention to other philosophically pregnant mathematical facts that have heretofore been little appreciated, including some regarding the representation of physical space.
Among the mathematico-philosophical conclusions that will ensue from the above talk are the following three.
(i) Whereas the notions of length, area and volume measure were introduced to quantify our pre-analytic notions of 1-dimensional, 2-dimensional and 3-dimensional spatial extension, the relation between the standard geometrical notions and the pre-analytic, metageometric/metaphysical notions is not quite what has been assumed. Indeed, what our models illustrate is that it is merely the infinitesimalness of degenerate segments relative to their non-degenerate counterparts, rather than the absence of extension of points, that is implied both by the axioms of geometry and by these segments' null lengths.
(ii) As (i) suggests, the real number zero functions quite differently as a cardinal number than as a measure number in the system of reals. So, for example, whereas a set containing zero members has no member at all, an event having probability zero may very well transpire, and perhaps more surprisingly still, a segment having length zero may contain a point encompassing all of Newtonian space.
(iii) Physicists and philosophers alike need to be more cautious in the claims they often make about how empirical data determines the geometrical structure of space.
Although the mathematico-philosophical import of our paper can be fleshed out using any of the standard formulations of Elementary Euclidean Geometry, for our purposes it will be especially convenient to employ Tarski’s system P [1], in which only points are treated as individuals and the only predicates employed in the axioms are a three-place predicate B (where "Bxyz" is read: y lies between x and z, the case where y coincides with x or z not being excluded) and a four-place predicate D (where "Dxyzu" is read: x is as distant from y as z is from u).
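To fix ideas, here are a few routine defined notions in this primitive vocabulary (standard textbook material, not claims from the talk); the last one shows how "the segment xy is degenerate" is expressible without ever saying that points are unextended.

\[
\mathrm{Col}(x,y,z) \;:\equiv\; Bxyz \lor Byzx \lor Bzxy \qquad \text{(x, y, z are collinear)}
\]
\[
\mathrm{Mid}(m,x,z) \;:\equiv\; Bxmz \land Dmxmz \qquad \text{(m is the midpoint of x and z)}
\]
\[
x = y \;\leftrightarrow\; Dxyzz \quad (z \text{ arbitrary}) \qquad \text{(the degenerate segments are exactly those of the form } xx\text{)}
\]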
Reference
[1] Tarski, Alfred, "What is Elementary Geometry?", in The Axiomatic Method, with Special Reference to Geometry and Physics, edited by Leon Henkin, Patrick Suppes and Alfred Tarski, Amsterdam: North-Holland Publishing Company, 1959, pp. 16-29.
Young E Rhee (Kangwon National University, South Korea)
On Howson’s Bayesian approach to the old evidence problem
ABSTRACT. The old evidence problem has been regarded as a typically intractable problem in Bayesian confirmation theory, despite many attempts on it since 1980. Recently, Colin Howson (2017) has proposed a new solution to the problem, which can be considered part of a minimalist version of objective Bayesianism in that it focuses on priors and likelihood ratios.
Howson’s new solution has two points. (a) The classical objective Bayesian approach has no difficulty in accommodating the historical fact that Mercury’s anomalous advance of perihelion confirmed GTR, in a simple computation with Bayes’s Theorem. (b) The counterfactual strategy is an attempt to graft objective probability valuations onto a subjective stock and, apart from being rather obviously ad hoc, it does not work.
Howson’s new solution can work well in a Bayesian framework if we ignore the fact that he is a well-known subjective Bayesian who himself proposed the counterfactual solution to the old evidence problem. Once we take this into account, however, his new solution appears rather obscure. In this paper, I raise two problems for it. (c) The generality problem: the new solution may work for the old evidence problem, but it cannot be applied to other problems such as the ravens paradox, Duhem’s problem, and the thesis of the underdetermination of scientific theory by evidence. (d) The identity problem: it is not clear whether his position belongs to subjective Bayesianism, objective Bayesianism, or even likelihoodism. Howson’s new solution introduces an objective factor into subjective Bayesianism, which ultimately leads to a hybrid Bayesianism, not to subjective Bayesianism.
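For readers unfamiliar with the puzzle, the old evidence problem in its textbook form (my summary, not Howson's own formulation) is that Bayesian conditioning assigns no confirmational boost to evidence already known with certainty:

\[
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)} \;=\; P(H)
\qquad \text{whenever } P(E)=1 \text{ (and hence } P(E \mid H)=1\text{)},
\]

so the long-known perihelion data seem unable to confirm GTR, contrary to scientific judgement. The competing solutions differ over whether to replace the actual P(E)=1 with counterfactual priors or, as in Howson's recent proposal, to work with objectively constrained priors and likelihood ratios.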
References
Howson, C. (1991). “The Old Evidence Problem”. British Journal for the Philosophy of Science, 42, pp. 547–555.
Howson, C. (2017). “Putting on the Garber Style? Better Not”. Philosophy of Science, 84, pp. 659–676.
On a structuralist view of theory change: a study of some semantic properties in a formal model of belief revision
ABSTRACT. Formal models of belief revision throw light on our understanding of scientific theory change in general, and in particular on the distinction between accidental and radical (revolutionary) theory change. In this paper we present a structuralist framework for belief revision. It can be considered an extension of the AGM framework of belief revision with some semantic properties which constitute the structure: causal properties, causal mechanisms, causal processes, and interventions. In theories of belief revision based on classical AGM [4], beliefs are represented as a set of sentences closed under logical implication. Operations for belief change, such as expansion, contraction and revision, are defined only with respect to belief sets. As a result, the structural information given by an epistemic entrenchment ordering disappears after operations of belief change. Although the AGM model is mathematically elegant and simple, it merely serves as an idealized model of belief change. Moreover, some of the postulates, such as success (in the case of revision) and recovery (in the case of contraction), are found to be unsatisfactory as accounts of rational belief revision. The success postulate states that new information must be reflected in the revised belief state of an agent, no matter whether it is relevant or not; hence an agent has no way of rejecting undesirable irrelevant information. Some of the other postulates are also found to be controversial and counter-intuitive. Belief revision models based on AGM are geared towards one-step revisions of simple plain beliefs and fail to handle cases where the new information comes in iterations and in the form of conditional beliefs. In AGM, all conditional beliefs have the same informational value, as they are structurally similar, and hence there is no difference between laws and accidental generalizations.
Keeping the above issues in mind, the problem we address is the following: how does an agent, after discovering causal relations in the environment, revise her beliefs in such a way that beliefs close to the core are not lost? The problem lies in finding a mechanism with which one can give up conditional beliefs in the process of belief revision while preserving the core conditional beliefs, such as laws or causal statements.
Following the constructive approach to belief revision, we propose a new entrenchment ordering, called "causal epistemic entrenchment" (CEE), which is constrained by structural and semantic properties such as structure, intervention, causal properties, and causal mechanisms. We show how the resulting ordering of beliefs is influenced by these semantic factors. The resulting entrenchment ordering would help us in: 1. ordering conditional beliefs; 2. preserving structure in iterated revisions; 3. applying the framework to scientific theory change.
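To make the entrenchment idea concrete, here is a minimal toy sketch (my illustration, not the authors' CEE machinery): a finite belief base over a few propositional atoms, brute-force entailment by truth tables, entrenchment-ranked contraction, and revision via the Levi identity K * φ = (K − ¬φ) + φ. More entrenched beliefs (e.g., law-like ones) survive revision longer than less entrenched ones.

from itertools import product

ATOMS = ["p", "q", "r"]

def models(formula):
    """All truth assignments (dicts) satisfying a formula given as a Python predicate."""
    return [dict(zip(ATOMS, vals))
            for vals in product([False, True], repeat=len(ATOMS))
            if formula(dict(zip(ATOMS, vals)))]

def entails(base, formula):
    """Finite-base entailment: every model of all base formulas satisfies `formula`."""
    return all(formula(v) for v in models(lambda v: all(b(v) for b in base)))

def contract(base, entrenchment, formula):
    """Keep beliefs in decreasing entrenchment as long as `formula` is not entailed."""
    kept = []
    for belief in sorted(base, key=entrenchment, reverse=True):
        if not entails(kept + [belief], formula):
            kept.append(belief)
    return kept

def revise(base, entrenchment, new):
    """Levi identity: contract by the negation of the input, then add the input."""
    return contract(base, entrenchment, lambda v: not new(v)) + [new]

# Hypothetical beliefs: a "law" p -> q (highly entrenched) and plain beliefs p, q.
law    = lambda v: (not v["p"]) or v["q"]
fact_p = lambda v: v["p"]
fact_q = lambda v: v["q"]
rank = {law: 2, fact_p: 1, fact_q: 1}.get

K = [law, fact_p, fact_q]
K_new = revise(K, rank, lambda v: not v["q"])   # learn not-q
print(law in K_new, fact_p in K_new)            # the law survives; p is given up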
References:
1.Gärdenfors, Peter, and David Makinson. "Revisions of knowledge systems using epistemic entrenchment." Proceedings of the 2nd conference on Theoretical aspects of reasoning about knowledge. Morgan Kaufmann Publishers Inc., 1988.
2.Fermé, Eduardo, and Sven Ove Hansson. "AGM 25 years." Journal of Philosophical Logic 40.2 (2011): 295-331.
3. Stegmüller, Wolfgang. "Accidental (‘Non-Substantial’) Theory Change and Theory Dislodgment." Historical and Philosophical Dimensions of Logic, Methodology and Philosophy of Science. Springer, Dordrecht, 1977. 269-288.
4. Alchourrón, Carlos E., Peter Gärdenfors, and David Makinson. "On the logic of theory change: Partial meet contraction and revision functions." The journal of symbolic logic 50.2 (1985): 510-530.
5. Olsson, Erik J., and Sebastian Enqvist, eds. Belief revision meets philosophy of science. Vol. 21. Springer Science & Business Media, 2010.
ABSTRACT. We present a new extension of LCS4, a logic of change and necessity. LCS4 (formulated in [2]) is obtained from the sentential logic of change LC, in which changeability is expressed by a primitive operator ($C\ldots$, to be read: it changes that ...). LC was mentioned as a formal frame for a description of the so-called substantial changes analysed by Aristotle (the disappearing and becoming of substances) ([1]). Another interesting philosophical motivation for LC (and LCS4) comes from the philosophy of Leibniz; in this case the changes considered concern global states of compossible monads. Both LC and LCS4 are interpreted in a semantics of linear histories of dichotomic changes. In the case of LCS4, in addition to the concept of $C$-change, a second primitive notion is considered: unchangeability, represented by $\Box$. LC and LCS4 are complete with respect to the intended semantics (proofs in [1], [2]). Semantically speaking, the subsequent changes cause the rise and passage of linear time; this is a well-known conception in both Aristotle and Leibniz. The idea of linking LCS4 with the Leibnizian philosophy of change and time encourages us to introduce into its semantics the concept of a possible world, understood as a global description of compossible atomic states of monads. A possible global state of compossible atomic states $j, k, l, \dots$ of a monad $m$ may be represented by a conjunction of sentential constants: $\alpha^m_j \land \alpha^m_k \land \alpha^m_l \land \dots$. For every monad many different global states may be considered, but only one of them is actual. The possible worlds of $m$ which are not in contradiction with the actual world of $m$ are mutually accessible. Those actual atomic states of $m$ which occur in all possible worlds of $m$ accessible from the actual one are necessary in the sense of our new modal operator $N\Box$. If an actual state of $m$ occurs in at least one possible world of $m$ accessible from the actual one, we say that it is possible in the actual world in the sense of $N\Diamond$. Regardless of simultaneous competing global states of $m$, each of them may change in the sense of $C$ or may be unchangeable in the sense of $\Box$. Our new semantics contains many linear possible histories of $C$-changes and $\Box$-durations, which generate the flow of time. At the syntactical level we extend LCS4 by new axioms containing the primitive symbols $N\Box$, $C$, $\Box$. We prove a completeness theorem for our new logic. In this frame we can also explicate the specific concept of Leibnizian hypothetical necessity. It may be said that those sentences are hypothetically necessary which are general laws of possible worlds, ‘analogous to the laws of motion; what these laws are, is contingent, but that they are such laws is necessary’ [3, 69]. In our terms, a sentence $\alpha$ is hypothetically necessary in a possible world $w$ iff $\alpha$ is unchangeable in some world accessible from $w$, that is: $N\Diamond\Box\alpha$ is true in $w$.
[1] Świętorzecka, K., Czermak, J. (2012), “Some Calculus for a Logic of Change”, Journal of Applied Non-Classical Logics, 22(1): 1–8; [2] (2015), “A Logic of Change with Modalities”, Logique et Analyse, 232, 511–527; [3] Russell, B. (1900), A Critical Exposition of the Philosophy of Leibniz, Cambridge University Press.
Peter Simons (Department of Philosophy, Trinity College Dublin, Ireland, Ireland)
Leśniewski, Lambda, and the Problem of Defining Operators
ABSTRACT. The aim of this paper is to tackle a metalogical problem (uncharacteristically) left unsolved by Stanisław Leśniewski (1886–1939).
Leśniewski considered definitions to be object-language equivalences added to his nominalistically-conceived logical systems. He therefore took extreme care over the conditions governing proper definitions, and regarded his “definition of definitions” as his greatest achievement. His canons of definition for the two systems of protothetic ([1], 479–81) and ontology ([2], 613–5, 619–22) applied only to sentences, names, and functors whose syntactic categories are ultimately defined in terms of sentences and names. Among variable-binding operators, however, his “official” systems used only the universal quantifier, considered as syncategorematic. While in everyday use he also employed its dual, the particular quantifier, he found himself unable to formulate canons of definition for operators, and his offer of any degree to any student who could do so went unclaimed. The question then remains open whether adequate conditions for the introduction of new variable-binding operators in logic can be given in a Leśniewskian fashion.
Since the time of Ajdukiewicz ([3], 230 f.) and Church it has been realised that a general abstraction operator, in conjunction with suitable functors, can be deployed to define other operators. The system of Church’s version of the simple theory of types [4] is exemplary in this regard. Church’s promising technique cannot be applied directly to Leśniewski’s systems, however, for one thing because it omits polyadic functors, but more crucially because it presupposes a platonistic infinity of formulas and axioms, which is inimical to Leśniewski’s nominalistic stance.
In this paper we adapt Church’s typed lambda calculus to extend Leśniewski’s language with general abstraction operators, each assigned to a syntactic category, using them alone for binding variables, and combining them with functors to allow operator definitions. The idea is that Leśniewski’s canons of definition for functors together with the rules for lambda will allow defined operators to be introduced in a controlled way. The question of the conformity of this combination to the strict requirements of Leśniewski’s metalogic will then be considered.
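As a simple illustration of the Church-style technique being adapted (a standard textbook example, not a fragment of the paper's system): once a typed abstraction operator λ is available, a binder can be introduced as a functor applied to an abstract, so that defining a new operator reduces to defining a functor.

\[
\forall x.\,A \;:=\; \Pi(\lambda x.\,A),
\qquad
\exists x.\,A \;:=\; \Sigma(\lambda x.\,A) \;:=\; \neg\,\Pi(\lambda x.\,\neg A),
\]

where Π and Σ are constants mapping a propositional function to a proposition. On this approach the only genuine variable-binding device is λ; quantifiers and similar operators become ordinary, definable functors, which is the kind of reduction the paper seeks to reconcile with Leśniewski's canons.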
References
[1] S. Leśniewski, Fundamentals of a New System of the Foundations of Mathematics, in Collected Works, Vol. II, Dordrecht: Kluwer, 1992, 410–605.
[2] S. Leśniewski, On the Foundations of Ontology, in Collected Works, Vol. II, Dordrecht: Kluwer, 1992, 606–628.
[3] K. Ajdukiewicz, Syntactic Connexion. In: S. McCall, ed., Polish Logic 1920–1939. Oxford: Clarendon Press, 1967, 207–231.
[4] A. Church, A Formulation of the Simple Theory of Types. The Journal of Symbolic Logic 5 (1940), 56–68.
14:30
Mihai Hîncu (Faculty of Political Sciences, Letters and Communication, Valahia University of Târgoviște, Romania)
Intensionality, Reference, and Strategic Inference
ABSTRACT. In this paper I focus on the linguistic constructions in which intensional transitive verbs (ITVs) occur. In order to present the complex logical profile of the ITVs, I contrast them with the extensional transitive verbs and I inspect the semantic differences of their inferential behaviour with respect to the existential entailment, to the substitution salva veritate of the coreferential terms and to the semantic specificity. Insofar as the sentences containing ITVs are structurally ambiguous between an intensional, de dicto interpretation, and an extensional, de re interpretation, they threaten the coordination between agents. For this reason, if the speaker uses an ITV in one way and the hearer’s reading of it diverges from the speaker’s intended meaning, both agents have a coordination problem. In this regard, using the mathematical framework of games of partial information, I will show what conditions have to be satisfied in order for a rational speaker and a rational hearer to efficiently communicate with a sentence in which an ITV occurs, and to converge on the right interpretation when the sentence’s surface syntax is probabilistically silent about which of the de re or de dicto meaning the hearer has to choose in order to coordinate with the speaker. Furthermore, I will look at the semantic behaviour of the most emblematic verb of the semantic lexicon, „refers (to)”, through the prism of the semantic features considered definitory for the extensional idiom and I will show that there are strong theoretical reasons for considering that it belongs to the class of ITVs and that it generates structural ambiguity at each level of language in which it is involved. In this regard, I show how the conceptual framework of the games of partial information can successfully accommodate the semantic verb’s inherent ambiguity. In order to accomplish this task, I present a scenario of strategic communication involving semantic verbs and I model it as a two-agent coordination game. The cognitive dynamics peculiar to the agents’ interaction will be presented on the background of some reasonable assumptions introduced to guarantee the finding of a solution concept corresponding to the game. In this regard, after I let the utility functions be sensitive to the agents’ preferences for more economical expressions, I compute the expected utilities of the strategic profiles, I determine a set of Nash equilibria and I show that the game’s unique solution can be equated with that set’s member which passes the test of Paretian efficiency. The present model highlights the mutually recursive way in which each agent reasons about the other agent’s probabilistic reasoning, and it manages to integrate the uncertainty involved in the successful communication with the reasoning about reasoning process involved in the language production and interpretation. In the end, I present the picture of the interplay between the speaker’s reference and the semantic reference which emerges from the game-theoretical framework adopted here and I will show some of its methodological consequences related to the way in which the key concept of reference has to be theoretically framed.
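The following toy sketch (entirely hypothetical numbers and labels, not the paper's model) shows the kind of computation involved: a speaker who intends a de re or de dicto reading chooses between an ambiguous sentence and a costlier disambiguated one, the hearer picks an interpretation for the ambiguous form, and we enumerate expected utilities to find the equilibria of the resulting partial-information coordination game.

from itertools import product

P = {"de_re": 0.7, "de_dicto": 0.3}          # hypothetical prior over the speaker's intention
COST = {"ambiguous": 0.0, "explicit": 0.2}   # the longer, disambiguated form is costlier

# Speaker strategy: which form to use for each intended reading (form|de_re, form|de_dicto).
speaker_strategies = list(product(["ambiguous", "explicit"], repeat=2))
# Hearer strategy: how to interpret the ambiguous form (the explicit form is self-interpreting).
hearer_strategies = ["de_re", "de_dicto"]

def payoff(s, h):
    """Expected common payoff: 1 for successful coordination minus the cost of the form used."""
    total = 0.0
    for i, meaning in enumerate(["de_re", "de_dicto"]):
        form = s[i]
        understood = meaning if form == "explicit" else h
        success = 1.0 if understood == meaning else 0.0
        total += P[meaning] * (success - COST[form])
    return total

# Brute-force Nash equilibria of the common-interest game.
equilibria = []
for s in speaker_strategies:
    for h in hearer_strategies:
        u = payoff(s, h)
        if all(payoff(s2, h) <= u for s2 in speaker_strategies) and \
           all(payoff(s, h2) <= u for h2 in hearer_strategies):
            equilibria.append((s, h, round(u, 3)))
print(equilibria)

With these made-up numbers two equilibria survive, and the Pareto-superior one has the speaker leaving the more probable (de re) reading implicit; this mirrors, in miniature, the selection among Nash equilibria by Paretian efficiency described in the abstract.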
The HaPoC symposium "Philosophy of Big Data" is submitted on behalf of the DLMPST, History and Philosophy of Computing division
The symposium is devoted to a discussion of philosophical problems related to Big Data, an increasingly important topic within the philosophy of computing. Big Data are worth studying from an academic perspective for several reasons. First of all, ontological questions are central: what Big Data are, whether we can speak of them as a separate ontological entity, and what their mereological status is. Second, epistemological ones: what kind of knowledge do they induce, and what methods do they require for accessing valuable information?
These general questions also have very specific counterparts, raising a series of methodological questions. Should data accumulation and analysis follow the same general patterns for all sciences, or should they be relativized to particular domains? For instance, should medical doctors and businessmen focus on the same issues related to the gathering of information? Is the quality of information similarly important in all contexts? Can one community be inspired by the experience of another? To what extent do human factors influence the information that we extract from Big Data?
In addition to these theoretical academic issues, Big Data also represents a social phenomenon. “Big Data” is nowadays a fashionable business buzzword which, together with "AI" and "Machine Learning", shapes business projects and the R&D job market, with data analysts among the most attractive job titles. It is believed that "Big Data" analysis opens up unknown opportunities and generates additional profits. However, it is not clear what counts as Big Data in industry, and critical reflection about it seems necessary.
The proposed symposium gathers philosophers, scientists and experts in commercial Big Data analysis to reflect on these questions. We believe that the possibility to exchange ideas, methodologies and experiences gathered from different perspectives and with divergent objectives, will enrich not only academic philosophical reflection, but will also prove useful for practical - scientific or business - applications.
On the epistemology of data science – the rise of a new inductivism
ABSTRACT. Data science, here understood as the application of machine learning methods to large data sets, is an inductivist approach, which starts from the facts to infer predictions and general laws. This basic assessment is illustrated by a case study of successful scientific practice from the field of machine translation and also by a brief analysis of recent developments in statistics, in particular the shift from so-called data modeling to algorithmic modeling as described by the statistician Leo Breiman. The inductivist nature of data science is then explored by discussing a number of interrelated theses. First, data science leads to the increasing predictability of complex phenomena, especially to more reliable short-term predictions. This essentially follows from the improved ways of storing and processing data by means of modern information technology in combination with the inductive methodology provided by machine learning algorithms. Second, the nature of modeling changes from heavily theory-laden approaches with little data to simple models using a lot of data. This change in modeling can be observed in the mentioned shift from data to algorithmic models. The latter are in general not reducible to a relatively small number of theoretical assumptions and must therefore be developed or trained with a lot of data. Third, there are strong analogies between exploratory experimentation, as characterized by Friedrich Steinle and Richard Burian, and data science. Most importantly, a substantial theory-independence characterizes both scientific practices. They also share a common aim, namely to infer causal relationships by a method of variable variation as will be elaborated in more detail in the following theses. Fourth, causality is the central concept for understanding why data-intensive approaches can be scientifically relevant, in particular why they can establish reliable predictions or allow for effective interventions. This thesis states the complete opposite of the popular conception that with big data correlation replaces causation. In a nutshell, the argument for the fourth thesis is contained in Nancy Cartwright’s point that causation is needed to ground the distinction between effective strategies and ineffective ones. Because data science aims at effectively manipulating or reliably predicting phenomena, correlations are not sufficient but rather causal connections must be established. Sixth, the conceptual core of causality in data science consists in difference-making rather than constant conjunction. In other words, variations of circumstances are much more important than mere regularities of events. This is corroborated by an analysis of a wide range of machine learning algorithms, from random trees or forests to deep neural networks. Seventh, the fundamental epistemological problem of data science as defined above is the justification of inductivism. This is remarkable, since inductivism is by many considered a failed methodology. However, the epistemological argument against inductivism is in stark contrast to the various success stories of the inductivist practice of data science, so a reevaluation of inductivism may be needed in view of data science.
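The Breiman contrast invoked above (data modeling vs. algorithmic modeling) can be made concrete with a toy sketch; the code below is purely illustrative, uses synthetic data, and fits a fixed-form linear model and a random forest to the same nonlinear signal, showing how the algorithmic model is trained with a lot of data rather than derived from a small set of theoretical assumptions.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 1))            # plenty of data, one feature
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=2000) # noisy nonlinear signal

data_model = LinearRegression().fit(X, y)         # "data model": fixed, interpretable functional form
algo_model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)  # "algorithmic model"

X_test = np.linspace(-3, 3, 7).reshape(-1, 1)
print("linear forecast:", np.round(data_model.predict(X_test), 2))
print("forest forecast:", np.round(algo_model.predict(X_test), 2))
print("true signal    :", np.round(np.sin(X_test[:, 0]), 2))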
15:45
Domenico Napoletani (Chapman University, United States)
Marco Panza (Chapman University and CNRS, IHPST (CNRS and Univ. of Paris 1, Pantheon-Sorbonne), United States)
Daniele C. Struppa (Chapman University, United States)
Finding a Way Back: Philosophy of Data Science on Its Practice
ABSTRACT. Because of the bewildering proliferation of data science algorithms, it is difficult to assess the potential of individual techniques, beyond their obvious ability to solve the problems on which they have been tested, or to evaluate their relevance for specific datasets.
In response to these difficulties, an effective philosophy of data science should be able not only to describe and synthesize the methodological outline of this field, but also to project back on the practice of data science a discerning frame that can guide, as well as be guided by, the development of algorithmic methods. In this talk we attempt some first steps in this latter direction.
In particular, we will explore the appropriateness of data science methods for large classes of phenomena described by processes mirroring those found in developmental biology.
Our analysis will rely on our previous work [1,2,3] on the motifs of mathematization in data science: the principle of forcing, which emphasizes how large data sets allow mathematical structures to be used in solving problems, irrespective of any heuristic motivation for their usefulness; and Brandt's principle [3], which synthesizes the way forcing and local optimization methods can be used in general to build effective data-driven algorithms.
We will then show how this methodological frame can provide useful broad indications on key questions of stability and accuracy for two of the most successful methods in data science, deep learning and boosting.
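As a toy illustration of building a data-driven predictor by iterated, merely local optimization steps (the kind of procedure the principles above are meant to describe; the function name and data below are illustrative, assuming NumPy):

import numpy as np

def fit_by_local_steps(X, y, lr=0.05, steps=2000):
    # Fit a linear predictor by gradient descent on the empirical squared loss:
    # each step is only locally optimal, yet the iteration yields an effective model.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)   # local gradient of the loss at the current w
        w -= lr * grad                       # small locally optimal step
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=500)
print(fit_by_local_steps(X, y))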
[1] D. Napoletani, M. Panza, and D.C. Struppa, Agnostic science. Towards a philosophy of data analysis, Foundations of Science, 2011, 16, pp. 1--20.
[2] D. Napoletani, M. Panza, and D.C. Struppa, Is big data enough? A reflection on the changing role of mathematics in applications. Notices of the American Mathematical Society, 61, 5, pp. 485--490, 2014.
[3] D. Napoletani, M. Panza, and D.C. Struppa, Forcing Optimality and Brandt's Principle, in J. Lenhard and M. Carrier (ed.), Mathematics as a Tool, Boston Studies in the Philosophy and History of Science 327, Springer, 2017.
Can we add kappa-dominating reals without adding kappa-Cohen reals?
ABSTRACT. I will discuss the question under which circumstances forcings which add a kappa-dominating real (i.e., an element of the generalized Baire space kappa^kappa that is eventually above all ground model elements) also add a kappa-Cohen real.
Using an infinite game of length (only) omega, we show that this is indeed the case for a large class of forcing notions, among them all Laver type forcings on kappa^(<kappa).
In any case, the results show that the situation on kappa is very different from the classical setting on omega where it is easy to add (omega-)dominating reals without adding (omega-)Cohen reals, e.g., by ordinary Laver forcing, or by ordinary Mathias forcing, etc.
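For reference, the notions involved can be spelled out as follows (standard formulations, with kappa regular): an element x of the generalized Baire space kappa^kappa added by a forcing is kappa-dominating iff for every ground model y in kappa^kappa there is alpha < kappa such that x(beta) > y(beta) for all beta >= alpha; and a kappa-Cohen real is an element of kappa^kappa generic for kappa-Cohen forcing, in one common formulation the partial order kappa^(<kappa) ordered by end-extension.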
Higher Metrisability in Higher Descriptive Set Theory
ABSTRACT. The problem of generalising the real line continuum has a long-standing history in mathematics. Mathematicians have proposed different generalisations of the real line for many different purposes, see, e.g., [2]. Particularly interesting in this context is the work of Sikorski, see, e.g., [5]. His idea was that of generalising the real line by using a version of the classical Dedekind construction of the real line in which he replaced the natural numbers with sufficiently closed ordinal numbers. In particular, for an infinite cardinal $\kappa$ Sikorski's construction allows one to define a real closed field extension of the real line which we will call the ordinal real numbers over $\kappa$.
Fixing a regular cardinal $\kappa$ and substituting in the classical definition of metric the real line with the ordinal real numbers over $\kappa$, one obtains a very well-behaved theory which naturally generalises the classical theory of metric spaces. We will call this theory the theory of $\kappa$-metric spaces. Using the theory of $\kappa$-metric spaces one can naturally generalise many notions and results from the classical theory of metric spaces, see, e.g., [6].
Descriptive set theory is one of the main branches of set theory, whose main goal is the study of particularly well-behaved subsets of the real line. One of the main tools of descriptive set theory is Baire space, i.e., the space of countable sequences of natural numbers.
In the last few years set theorists have started a systematic study of generalisations of Baire space to uncountable cardinals, see, e.g., [4]. One of the first steps in this process is of course that of generalising classical notions and results from descriptive set theory. Particularly important in this context is the notion of Polish space, i.e., a completely metrisable separable space.
In this talk we will show preliminary results from [3] on how the theory of $\kappa$-metric spaces can be used in generalising the notion of Polish space. To do so we will first introduce the theory of $\kappa$-metric spaces. Then we will give the definition of $\kappa$-Polish space and present some basic results about these spaces. Finally we will consider the game theoretical generalisation of Polish space introduced by Coskey and Schlicht in [1] and we will show that, contrary to the classical case, the game theoretical and the metric notions do not coincide in general.
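Following the description above, a $\kappa$-metric space is, roughly, a set $X$ together with a distance function $d$ taking values in Sikorski's ordinal real numbers over $\kappa$ and satisfying the usual axioms:

$d(x, y) \geq 0$, with $d(x, y) = 0$ iff $x = y$; $d(x, y) = d(y, x)$; $d(x, z) \leq d(x, y) + d(y, z)$,

with convergence, completeness and related notions defined from $d$ exactly as in the classical case, but with $\varepsilon$ ranging over the positive elements of the ordinal real numbers over $\kappa$.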
[1] S. Coskey and P. Schlicht. Generalized Choquet spaces. Fundamenta Mathematicae, 232(3):227–248, 2016.
[2] P. Ehrlich. Real Numbers, Generalizations of the Reals, and Theories of Continua, volume 242 of Synthese Library. Springer-Verlag, 1994.
[3] L. Galeotti. The theory of the generalised real numbers and other topics in logic. PhD thesis, Universität Hamburg, 2019.
[4] Y. Khomskii, G. Laguzzi, B. Löwe, and I. Sharankou. Questions on generalised Baire spaces. Mathematical Logic Quarterly, 62(4-5):439–456, 2016.
[5] R. Sikorski. On an ordered algebraic field. Comptes Rendus des Séances de la Société des Sciences et des Lettres de Varsovie, Classe III: Sciences Mathématiques et Physiques, 41:69–96, 1948.
[6] R. Sikorski. Remarks on some topological spaces of high power. Fundamenta Mathematicae, 37(1):125–136, 1950.
On the constructive content of proofs in abstract analysis
ABSTRACT. Can a proof in analysis that does not refer to a particular constructive model of the real numbers have computational content? We show that this is the case by considering a formulation of the Archimedean property as an induction principle: for any property P of real numbers, if for all x, (x > 0 -> P(x-1)) -> P(x), then for all x, P(x). This principle is constructively valid and has as computational content the least fixed point combinator, even though the real numbers are considered abstract, that is, only specified by the axioms of a real closed field. We give several applications of this principle connected with concurrent computation.
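Displayed as a formula, the induction principle in question reads (a direct transcription of the statement above, with P ranging over properties of real numbers):

∀P [ ∀x ( (x > 0 → P(x−1)) → P(x) ) → ∀x P(x) ]

Roughly, a realizer of the conclusion may have to call itself on the instance for x−1 whenever x > 0, which is where the least fixed point combinator enters.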
ABSTRACT. In earlier work it has been shown that the well-known DPLL SAT solving algorithm can be extracted from a soundness and completeness proof of the corresponding proof system. We carry this work further by showing that program optimisation techniques such as clause learning can also be obtained by a transformation on the proof level.
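For readers unfamiliar with the algorithm referred to, the following is a minimal Python sketch of the DPLL procedure (unit propagation plus splitting, without clause learning or pure-literal elimination); it is an illustration only, not the extracted program discussed above.

def dpll(clauses, assignment=None):
    # clauses: list of clauses; each clause is a list of nonzero integers,
    # where a negative integer denotes a negated propositional variable.
    if assignment is None:
        assignment = {}

    def simplify(cls, lit):
        # Remove clauses satisfied by lit and delete -lit from the rest.
        out = []
        for c in cls:
            if lit in c:
                continue
            out.append([l for l in c if l != -lit])
        return out

    # Unit propagation: repeatedly assign literals forced by unit clauses.
    changed = True
    while changed:
        changed = False
        for c in clauses:
            if len(c) == 0:
                return None                      # empty clause: conflict
            if len(c) == 1:
                lit = c[0]
                assignment[abs(lit)] = lit > 0
                clauses = simplify(clauses, lit)
                changed = True
                break
    if not clauses:
        return assignment                        # all clauses satisfied
    # Split on the first literal of the first remaining clause.
    lit = clauses[0][0]
    for choice in (lit, -lit):
        result = dpll(simplify(clauses, choice),
                      {**assignment, abs(choice): choice > 0})
        if result is not None:
            return result
    return None

# Example: (x1 or x2) and (not x1 or x2) and (not x2 or x3)
print(dpll([[1, 2], [-1, 2], [-2, 3]]))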
Handling of defectiveness in a content-guided manner
ABSTRACT. The following position will be described and defended. It implicitly answers most questions from the symposium CFP. Cases of straightforward falsification will be neglected in order to concentrate on some more difficult matters.
Two types of substantial defects may be distinguished in theories: conceptual ones and empirical ones. Concepts may be defective because they are ambiguous or vague, or because they have false or unsuitable presuppositions. Thus the concept presupposing that, at each point in time, an object is at rest in a spatial position is defective and leads to Zeno's paradoxes. Empirical defects arise when the empirical criteria involved are ambiguous or vague, or when there are multiple criteria for the same predicate.
All substantial defects of theories surface as inconsistencies, a feature which may mask the nature of the defect. There is no general methodology to eliminate defects and it is possible that, in a given historical situation and even at the proverbial end of time, the best theories are defective, even unavoidably so.
Although the sources of the defects may be wildly diverse, an approach in terms of adaptive logics [1] facilitates locating the potential defects as described in the previous paragraphs. The approach considers the different potential formal logical defects: gluts or gaps in logical symbols, or ambiguities in non-logical symbols. It was shown [2] that there is a recursive method for obtaining those formal logical defects, which are then minimized. The result is a minimally abnormal 'interpretation' of the defective theory. Each of these interpretations connects a substantial defect to certain linguistic entities. The described method is recursive, but only after certain choices have been made. Such choices may be justified by former experience with removing defects and by properties of the theory involved.
While the described process may be considered a general method, further steps towards removing defects require substantial and material investigation: choosing between the interpretations, determining whether the defect is conceptual or empirical, and modifying concepts or empirical criteria. So the methodology is content-guided in that we 'learn how to learn' [3].
The content-guided character will be underpinned by features of actual adaptive (viz. minimally abnormal) theories. These are mathematical theories, but the conclusion to be drawn is more general as well as perfectly transparent.
Pluralism enters the picture in that every defect has several potential solutions, each of which may result in a more or less viable theory. So the approach furthers a pluralism of alternatives. This is different from the epistemological pluralism that is typical for problem solving processes.
References
[1] D. Batens. A universal logic approach to adaptive logics. Logica Universalis, 1:221-242, 2007.
[2] D. Batens. Spoiled for choice? Journal of Logic and Computation, 26(1):65-95, 2016. E-published 2013: doi:10.1093/logcom/ext019.
[3] D. Shapere. Logic and the philosophical interpretation of science. In P. Weingartner (ed.), Alternative Logics. Do sciences need them?, pages 41-54. Springer, Berlin, Heidelberg, 2004.
CANCELLED: Chunk and Permeate: Reasoning faute de mieux
ABSTRACT. The standard way to use a logical theory to model reasoning is to identify premises and a consequence relation, and show that the reasoning done can be modeled as a correct inference (or a collection of correct inferences) from those premises, made in accord with the proposed consequence relation. In many contexts such models of reasoning work quite well. But in some, trouble arises.
Newton’s calculus is one such; calculating the differential for a function at a given value of the function’s argument seems to involve division by 0. The mathematical methods available to Newton and his supporters could not countenance this practice, but neither could they avoid it: “Allez en avant, et la foi vous viendra” (“Go forward, and faith will come to you”), declared d’Alembert. Old quantum theory is another; Bohr’s account of the hydrogen atom includes an electron that doesn’t radiate energy as it orbits the nucleus, but only when it shifts from one allowed energy state to another (and then at a fixed frequency determined by Planck’s constant and the energy difference between the two states).
In both cases, there is a ‘transition point’ in the reasoning: a previously relied-on commitment (that dx ≠ 0, or that electrodynamics does not apply) is dropped, and reasoning continues in a new context where dx = 0, and the radiation emitted by hydrogen atoms as they go from higher to lower energy ‘stationary states’ is described by appeal to classical electrodynamics. The premises relied on in these examples are inconsistent. But the sentences applied on each side of these transitions are (or at least appear to be) consistent. This observation was the motive for a general approach to modelling this kind of reasoning called chunk and permeate (C&P). Chunk and permeate models restrict application of premises by confining their use to separate chunks. Specified types of consequence derived in each chunk are allowed to permeate into certain other chunks. The consequences of a C&P structure are the sentences appearing in the designated chunk, in the limit of a series of steps at each of which all chunks are closed under consequence and any contents of each chunk that are allowed to permeate to other chunks do so.
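As a rough illustration of the limit construction just described, the following Python sketch computes C&P consequences for a finitely presented structure; the consequence operation and the permeation filters are supplied by the user, and all names are purely illustrative.

def cnp_consequences(chunks, permeation, close, designated, max_rounds=1000):
    # chunks: dict mapping chunk names to sets of sentences
    # permeation: dict mapping (source, target) pairs to a predicate selecting
    #             which sentences are allowed to flow from source to target
    # close: function taking a set of sentences to its closure under the
    #        chosen consequence relation
    # designated: name of the chunk whose contents count as C&P consequences
    current = {name: set(sents) for name, sents in chunks.items()}
    for _ in range(max_rounds):
        closed = {name: close(sents) for name, sents in current.items()}
        new = {name: set(sents) for name, sents in closed.items()}
        for (src, tgt), allowed in permeation.items():
            new[tgt] |= {s for s in closed[src] if allowed(s)}
        if new == current:      # fixed point: no chunk changes any more
            return current[designated]
        current = new
    return current[designated]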
There are many reasons for tolerating inconsistency. When we can’t consistently explain how a successful pattern of reasoning works, as in the case of the old calculus, we do it faute de mieux, until something like the concept of a limit, or quantum mechanics comes along.
References:
Brown, Bryson, “Paraconsistency, Pluralistic Models and Reasoning in Climate Science”, Humana Mente, Vol. 32 (special volume, Beyond Toleration? Inconsistency and Pluralism in the Empirical Sciences, Maria Martinez-Ordaz and Luis Estrada-Gonzalez, eds.), 179-194.
Brown, M. Bryson and Priest, Graham, “Chunk and Permeate II: Bohr’s Hydrogen Atom”, European Journal for Philosophy of Science, 5(3), 297-314, October 2015.
Brown, Bryson and Priest, Graham, “Chunk and Permeate”, Journal of Philosophical Logic, 33(4), 379-388, August 2004.
Laprise, René, 2008, “Regional climate modelling”, Journal of Computational Physics, 227, 3641-3666.
Andreas Kapsner (Ludwig Maximilian University of Munich, Germany)
Connexivity and Conditional Logic
ABSTRACT. In this talk, I will comment on the relationship between connexive logics and conditional logics in the Lewis/Stalnaker theory. In particular, I will be interested in the philosophical underpinnings of these two large projects and in how much these underpinnings intersect. Though it has always been clear that there seems to be some connection here, it has, I believe, not yet been established what that connection is, precisely. I will propose a view of connexivity (drawing on earlier work) that not only fits well with the philosophical discussion about conditional logics, but is also able to shed new light on topics in that discussion, such as the dispute about the Law of Conditional Non-Contradiction.
Towards a bridge over two approaches in connexive logic
ABSTRACT. One of the approaches in connexive logic suggested by Heinrich Wansing captures connexivity through a nonstandard falsity condition for the conditional. More specifically, Wansing suggests to take the condition of the form “if A is true then B is false” rather than the condition of the form “A is true and B is false” as the falsity condition for the conditional of the form “if A then B”, where truth and falsity are not necessarily exclusive. This simple idea was first formulated as a variant of Nelson’s logic N4. Some later developments observed that the central idea of Wansing does not rely on N4. Indeed, Wansing’s idea works in the context of a four-valued logic, a three-valued logic, and even in the context of weak relevant logics, namely the basic relevant logic BD of Graham Priest and Richard Sylvan, as well as in the context of conditional logics.
It should be noted that, as a byproduct, connexive logics formulated à la Wansing will include the converse direction of Boethius’ theses as valid/derivable theses. Of course, these formulas are not required for connexive logics in general. In fact, these formulas are sometimes criticized. However, as Priest claims, Wansing’s system is most likely to be “one of the simplest and most natural.”
Another approach to connexivity in the literature is the one through experimental philosophy. Niki Pfeifer marked the first contribution towards this direction with a more general aim to “extend the domain of experimental philosophy to conditionals”. The particular focus is on Aristotle’s theses, and Pfeifer proposes an interpretation of Aristotle’s theses based on coherence based probability logic and offers a justification for Aristotle’s theses.
The present note focuses on another paper, by Paul Egre and Guy Politzer, who carried out an experiment related to the negation of indicative conditionals. In particular, they consider weak conjunctive and conditional formulas of the form “A and possibly not B” and “if A then possibly not B” respectively, besides the more widely discussed strong conjunctive and conditional formulas of the form “A and not B” and “if A then not B” respectively, as formulas equivalent to “not (if A then B)”. Many of the debates on the negation of conditionals have focused on the strong forms and discussed whether the conjunctive formula or the conditional formula is appropriate. However, Egre and Politzer challenge the debate by suggesting that we should also take into account the weak forms, not only the strong forms.
Against this background, the general aim behind this talk is to see whether we can bridge the above approaches to connexive logic. The more specific aim is to observe that the formulas considered by Egre and Politzer can be formalized in a rather natural manner by following Wansing’s idea of considering falsity conditions for the conditional. To this end, we make use of modal logics that expand Nelson’s logic N4, developed by Sergei Odintsov and Heinrich Wansing, and offer some observations.
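For reference, the theses and the falsity condition at issue can be displayed as follows (standard formulations; the labels are conventional):

Aristotle's theses: ~(A → ~A), ~(~A → A)
Boethius' theses: (A → B) → ~(A → ~B), (A → ~B) → ~(A → B)
Converse Boethius' theses: ~(A → ~B) → (A → B), ~(A → B) → (A → ~B)
Wansing-style falsity condition: "A → B" is false iff (if A is true, then B is false), in place of the usual "A is true and B is false".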
The year 2019 is the International Year of the Periodic Table (IYPT), celebrating the 150th anniversary of its year of discovery, and the International Union for History and Philosophy of Science and Technology (IUHPST) is one of the supporting institutions of IYPT.
With this event at CLMPST 2019, we aim to offer all participants of the congress, independent of whether they are working in philosophy of chemistry or not, an insight into the relevance and importance of the Periodic Table. The event consists of talks for a general academic audience, with a non-technical historical introduction by Hasok Chang, two personal reflections by current or recent graduate students in philosophy of chemistry, and a local point of view by an expert from Prague. The session will be chaired by Gisela Boeck.
Why should philosophers care about the periodic table?
ABSTRACT. The periodic table of chemical elements is one of the most recognizable icons in the entire history of science. Its ubiquitous presence in all kinds of scientific environments and even popular culture worldwide is a clear testimony to its usefulness and informativeness. But what makes the periodic table so special? Mendeleev was by no means the only chemist to attempt to make a convenient and informative ordering of chemical elements, and I will present a brief overview of the history of such attempts in the 19th century. I will also present some debates concerning the epistemic merits of Mendeleev's system, and show how the history of the periodic table can be used to make effective illustrations of epistemic values in action, focusing especially on explanation and prediction.
Mendeleev’s dedicated supporter and friend. The Czech chemist Bohuslav Brauner and the worldwide reception of the periodic system
ABSTRACT. Bohuslav Brauner (1855-1935), a pupil of Robert Bunsen in Heidelberg and Henry Roscoe in Manchester, was appointed extraordinary professor of inorganic chemistry in 1890 and full professor in 1897 at the Czech Charles-Ferdinand University in Prague. As early as the 1870s, when he was still a student at the Prague University, Brauner became an enthusiastic promoter of the Periodic System. His contacts with Mendeleev started at Brauner’s initiative in 1881, when he sent the Russian chemist a letter with the reprint of a paper published jointly with his English colleague John I. Watts. In this article Brauner referred to Mendeleev’s “Osnovy khimii”, and “expressed regret that this excellent treatise was quite unknown in western countries”. Mendeleev answered with a long letter and sent Brauner his photograph. This was the beginning of their correspondence, cooperation, and personal encounters that lasted until Mendeleev’s death in 1907. Brauner devoted his life-long research to the exemplification and perfection of Mendeleev’s Periodic Law, especially to the placement of the rare earths in the Periodic Table, the estimation and revision of atomic weights of some elements, and the (unsuccessful) search for new elements predicted by the Periodic Table. His publicizing of the Periodic System and his international scientific authority led to the almost unconditional acceptance of Mendeleev’s system in the Czech chemical community as early as the 1880s, and supported the process of its dissemination in other European countries and even in Japan. Brauner’s motivations were not only scientific; they also had political and social connotations due to the rising anti-German nationalism in Czech society, where the prevalent Russophilia played an important role in the reception of Mendeleev’s teaching.
Yukinori Onishi (The Graduate University for Advanced Studies, Japan)
Does research with deep neural networks provide a new insight into the aim of science debate?
ABSTRACT. Neural networks, or so-called ‘deep neural networks’ (DNNs) in particular, form one of the most rapidly growing research areas, and they are now applied in many fields of science, including physics, biology, chemistry, medicine, and economics. One of the great features of DNNs is that they can automatically learn features in the data set that are relevant to the given task (such as classification or prediction). On the other hand, it is not necessarily easy for humans to interpret the ‘reasoning’ DNNs are performing in order to make correct predictions/classifications, a feature that Hooker and Hooker (2018) call ‘naked prediction.’
As such, the increasing use of this technology in various branches of science seems to pose an interesting question in the debate about ‘the aim of science.’ This debate originated from Bas van Fraassen’s (1980) famous characterization of the scientific realism debate as opposing hypotheses about the aim of science. According to it, realists hold that science aims to obtain the true description of the world, while anti-realists (or constructive empiricists) hold that science aims at obtaining an empirically adequate theory, i.e. a theory that correctly describes the observable part of the world. Importantly, what is at stake here is not actual scientists’ intentions but the possibility of a rational reconstruction of scientific practice as aiming at these goals. In other words, this is a debate about the accountability of scientific practice from the realists’ and constructive empiricists’ perspectives.
Given the ‘naked’ nature of DNNs, their use in scientific research seems to support constructive empiricism, for it seems inexplicable from the realists’ perspective. This is a remarkable implication because, when Van Fraassen (1980) demonstrated how various scientific practices and the reasoning behind them can be reconstructed from the constructive empiricists’ point of view, his aim seemed to be at most to create underdetermination about the aim of science. But now, we seem to have evidence (i.e. a scientific practice) that supports constructive empiricism over realism. In this talk, however, I will point out the possibility that a certain type of research with DNNs can provide evidence exclusively for scientific realism.
References
Hooker, G. and C. Hooker. (2018). “Machine Learning and the Future of Realism.” Spontaneous Generations: A Journal for the History and Philosophy of Science, Vol. 9, No. 1: 174-182.
Van Fraassen, B.C. (1980). The Scientific Image. Oxford University Press.
Process, not just product: the case of network motifs analysis
ABSTRACT. It has become ubiquitous for scientists to study complex systems by representing these systems as networks and analyzing properties of these networks. Since this network approach allows scientists to focus on all relevant elements in a system and their interactions, in contrast to individual elements in isolation, networks are particularly useful in understanding complex systems, for example, cellular networks studied in systems biology and social networks studied in sociology.
The network approach has drawn attention from philosophers. For example, recent papers have considered the question of how network explanations relate to mechanistic explanations. Some philosophers like Levy and Bechtel (2013) and Craver (2016) view network explanation as a desirable extension of mechanistic explanation, while others like Huneman (2010) and Woodward (2013) reject this view, emphasizing novel features presented in network explanations. However, these discussions are all about the products, namely, explanations generated by the network approach.
I will argue that a focus on explanations is insufficient, especially for understanding how the network approach deals with complex networks. My main example is the network motifs analysis, which has been a focus of some recent discussions (Levy and Bechtel 2013; Craver 2016; Brigandt, Green and O’Malley 2017).
Networks are webs of nodes and edges. When a few-node pattern recurs in different parts of a network much more frequently than expected, this pattern is called a network motif in this network. The general occurrence of network motifs in real networks has been taken to indicate some deep mechanisms governing the evolution of real networks, for example natural selection, and the ultimate task of the network motifs analysis is to reveal these mechanisms.
By analyzing the process of the network motifs analysis, I argue:
1. What lies at the heart of the network motifs analysis is the fact that a pattern's being identified as a network motif depends not only on its own properties, but also on its relationships to the target network and the random networks.
2. This dependence on the target network and random networks is not reflected in the explanations, since these explanations are results of modelling network motif patterns independently of the target networks and the random networks.
As a result, an explicit focus on the process is indispensable for understanding how exactly the network approach deals with complex networks, which cannot be captured by focusing on explanations themselves.
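As a concrete illustration of the dependence described in point 1, the following sketch (assuming the networkx library; the function name is illustrative) scores a candidate pattern, here the triangle, against degree-preserving randomizations of the target network; whether the pattern counts as a motif depends on the target network and on the chosen random ensemble, not on the pattern alone.

import networkx as nx
import statistics

def triangle_motif_zscore(G, n_random=100):
    # Count triangles in G and compare with degree-preserving random networks
    # obtained by repeated double edge swaps; a large z-score is the kind of
    # evidence used to call the triangle a "motif" of G.
    observed = sum(nx.triangles(G).values()) // 3
    counts = []
    for _ in range(n_random):
        R = G.copy()
        nx.double_edge_swap(R, nswap=4 * R.number_of_edges(), max_tries=10**6)
        counts.append(sum(nx.triangles(R).values()) // 3)
    mu, sigma = statistics.mean(counts), statistics.stdev(counts)
    return (observed - mu) / sigma if sigma > 0 else float("inf")

# Example: a small social-style network
G = nx.karate_club_graph()
print(triangle_motif_zscore(G))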
References (selected):
Brigandt, I., Green, S., & O'Malley, M. (2017). Systems biology and mechanistic explanation. In Glennan, S., & Illari, P. M. (Eds.), The Routledge Handbook of Mechanisms and Mechanical Philosophy. Taylor & Francis.
Craver, C. F. (2016). The explanatory power of network models. Philosophy of Science, 83(5), 698-709.
Huneman, P. (2010). Topological explanations and robustness in biological sciences. Synthese, 177(2), 213-245.
Levy, A., & Bechtel, W. (2013). Abstraction and the Organization of Mechanisms. Philosophy of Science, 80(2), 241-261. https://doi.org/10.1086/670300
Woodward, J. (2013). II Mechanistic explanation: Its scope and limits. In Aristotelian Society Supplementary Volume (Vol. 87, No. 1, pp. 39-65). Oxford, UK: Blackwell Publishing Ltd.
Multi-modal Mu-calculus with Postfix Modal Operator Abstracting Actions
ABSTRACT. With respect to Hennessy-Milner Logic ([2]), which is classical for abstract state concepts and takes the meaning of a formula to be a set of states, my talk at CLMPS 2015 ([3]) was concerned with the denotation of actions, where actions (as functions) are implemented at states and state transitions are affected by the applied actions. The purpose was to look for the meanings of actions accompanied by state transitions.
In my works (Logic Colloquium 2016 and 2017), a multi-modal mu-calculus is formulated, in which formulas (denoting state sets in the transition system of the calculus, extending modal logic interpretations) may be organized from a prefix modality for communication, a postfix modality for terms or propositions (as action constructions of functional or logic programming), a negation denoting incapability of interaction (virtually with humans), truth and propositions, the classical negation, and a least fixed point operator "mu". It is for a representation of human-machine interaction that a postfix modal operator, as well as the prefix one, is defined consistently with the fixed point operator (mu-operator). Intuitively, the communication (from the human) is demonstrated in the prefix modality, while the action (of the corresponding machinery) is represented in the postfix modality. As a next step towards abstracting actions, the postfix operator is to contain a class of propositional formulas (whose form is a "set" containing a conjunction of literals preceding both an implication and a succeeding literal). This form may sometimes be of use in programming. Its model is closely related to refutation procedures as program implementations, even in "3-valued logic".
In this talk, from an algebraic view, Heyting algebra expressions over three elements (the bottom, medium and top of a 3-valued domain) are re-examined to model the propositional formulas of the above form. As regards the postfix modality for actions (which are to be programmed), algebraic and constructive views are necessary for its abstraction: the sequence of actions involves intelligence and the selection is relevant to judgement, such that a concatenation for the sequence, and an alternation for the selection, may be essential in state machinery with respect to the algebraic structure of a "semiring". This suggests that algebraic views on the postfix modality in the calculus need to be made clearer.
However, a difficulty arises in the sense that the mapping associated with the above set (the one containing conjunctions of literals where each conjunction is followed by an implication and a literal) might not always be monotonic, and thus a fixed point denotation of the associated mapping is not so easily obtained. The approach of van Gelder et al. ([1]) may be effective, not only classically but also for any given expression set in this case where the interpretation of negation is not Boolean. Then actions in the postfix modal operator are modelled into the transition system of the multi-modal mu-calculus (as an extension of modal mu-calculus), where the evaluation of algebraic expressions for the postfix modality may be the key problem.
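For orientation, the standard modal mu-calculus that the above system extends has, in one common presentation, the syntax and fixed point semantics:

φ ::= p | ¬p | X | φ ∧ φ | φ ∨ φ | ⟨a⟩φ | [a]φ | μX.φ | νX.φ

⟦μX.φ⟧_V = ⋂ { S ⊆ States : ⟦φ⟧_{V[X↦S]} ⊆ S },  ⟦νX.φ⟧_V = ⋃ { S ⊆ States : S ⊆ ⟦φ⟧_{V[X↦S]} }

so that μ (resp. ν) denotes the least (resp. greatest) fixed point of the monotone map S ↦ ⟦φ⟧_{V[X↦S]}; monotonicity is exactly what, as noted above, may fail for the mapping associated with the postfix construction.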
References:
[1] A. van Gelder, K. A. Ross and J. S. Schlipf, The well-founded semantics for general logic programs, Journal of the ACM, vol. 38, 1991, no. 3, pp. 619-649.
[2] M. Hennessy and R. Milner, Algebraic laws for nondeterminism and concurrency, Journal of the ACM, vol. 32, 1985, no.1, pp. 137-161.
[3] S. Yamasaki, State constraint system applicable to adjusting, CLMPS and Logic Book of Abstracts, 2015, pp. 409-410.
ABSTRACT. In this paper we will sketch the basic system of Dialogical Justification Logic (DJL) (Artemov, 2008) (Artemov & Fitting, 2016) (Studer, 2012), which is a logical framework for reasoning with justified assertions in dialogues. Following the dialogical perspective, dialogues are disputes in which two parties argue over a central claim (Keiff, 2011) (Rahman & Keiff, 2005) (Rahman, McConaughey, Klev, & Clerbout, 2018). We show multiple and simple examples of dialogues around some thesis, where explicit justifications are given. In particular, we argue that 1) in DJL we achieve an interesting and clear interpretation of the constant specification structures of JL; and 2) DJL is useful to clarify traditional paradoxical situations (McNamara, 2014). We provide examples for two deontic cases: Forrester's and Miner's Paradoxes (Forrester, 1984) (Kolodny & MacFarlane, 2010) (Klev, 2016). This provides a clear motivation of the logical derivations of the system and makes it clear why and how we need formal and material notions in different classes of CS for formal and material dialogues.
References
Artemov, S. (2008). The logic of justification. The Review of Symbolic Logic, 1(4), 477-513. doi: 10.1017/S1755020308090060
Artemov, S., & Fitting, M. (2016). Justification logic. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2016 ed.). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/win2016/entries/logic-justification/.
Forrester, J. W. (1984). Gentle murder, or the adverbial Samaritan. The Journal of Philosophy, 81(4), 193-197.
Keiff, L. (2011). Dialogical logic. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Summer 2011 ed.). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/sum2011/entries/logic-dialogical/.
Klev, A. (2016). A proof-theoretic account of the Miners Paradox. Theoria, 82(4), 351-369.
Kolodny, N., & MacFarlane, J. (2010). Ifs and oughts. The Journal of Philosophy, 107(3), 115-143.
McNamara, P. (2014). Deontic logic. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2014 ed.). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/win2014/entries/logic-deontic/.
Rahman, S., & Keiff, L. (2005). On how to be a dialogician. In Logic, Thought and Action (pp. 359-408). Springer.
Rahman, S., McConaughey, Z., Klev, A., & Clerbout, N. (2018). Immanent Reasoning or Equality in Action: A Plaidoyer for the Play Level (Vol. 18). Springer.
Studer, T. (2012). Lectures on justification logic. Lecture notes, November, 1-98.
The aim of our symposium is twofold. Firstly, we provide a unified approach to a number of contemporary logico-philosophical results and propose to see them as being about the commitments of various prominent foundational theories. Secondly, we give an overview of formal results obtained over the past few years which shed new light on commitments of both arithmetical theories and theories of sets.
The rough intuition is that commitments of a theory are all the restrictions on the ways the world might be, which are imposed on us given that we accept all the basic principles of the theory. For clarification, during the symposium we focus on the following two types of commitments of a given foundational theory Th:
1. Epistemic commitments are all the statements in the language of Th (or possibly, in the language of Th extended with the truth predicate) that we should accept given that we accept Th.
2. Semantic commitments are all the restrictions on the class of possible interpretations of Th generated by the acceptance of a theory of truth over Th.
In the context of epistemic commitments, several authors have claimed that a proper characterisation of a set of commitments of Th should take the form of an appropriate theory of truth built over Th (see, for example, [Feferman 91], [Ketland 2005] and [Nicolai,Piazza 18]). During the symposium we give an overview of the latest results concerning the Tarski Boundary - the line demarcating the truth theories which generate new implicit commitments of Peano Arithmetic (PA) from the ones which do not. Moreover, we investigate the role of a special kind of reducibility, feasible reducibility, in this context and prove some prominent theories of compositional truth to be feasibly reducible to their base theories.
A different approach to characterize the epistemic commitments of a foundational theory Th was given in [Cieśliński 2017]. Its basic philosophical motivation is to determine the scope of implicit commitments via an epistemic notion of believability. One of the symposium talks will be devoted to presenting this framework.
While investigating the epistemic commitments of Th, we look at the consequences of truth theories in the base truth-free language. Within this approach, a truth theory Th_1 is at least as committing as Th_2 if Th_1 proves all the theorems of Th_2 in the base language. In the semantic approach, one tries to understand every possible condition which truth theories impose on the class of models of Th, instead of looking only at the conditions which are expressible in the base language. A theory Th_1 is at least as semantically committing as Th_2 if for every condition which Th_2 can impose on models of PA, the same condition is imposed already by Th_1. During the symposium we present and compare the latest formal results concerning the semantical commitments of various truth theories extending two of the most distinguished foundational theories: PA and Zermelo-Fraenkel set theory (ZF). During the talks we discuss the philosophical meaning of these developments.
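In symbols, with L_base the truth-free base language: a truth theory Th over PA is conservative iff for every arithmetical sentence φ, Th ⊢ φ implies PA ⊢ φ; and Th_1 is at least as (epistemically) committing as Th_2 iff for every sentence φ of L_base, Th_2 ⊢ φ implies Th_1 ⊢ φ.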
References:
[Cieśliński 2017] The Epistemic Lightness of Truth, Cambridge University Press.
[Feferman 1991] Reflecting on Incompleteness, Journal of Symbolic Logic, 56(1), 1-49.
[Ketland 2005] Deflationism and the Gödel Phenomena: reply to Tennant, Mind, 114(453), 75-88.
[Nicolai, Piazza 2018] The Implicit Commitments of Arithmetical Theories and its Semantic Core, Erkenntnis.
Commitments of foundational theories: Introduction
ABSTRACT. Our objective is to provide a conceptual analysis of the notion of commitments of a foundational theory, i.e., the commitments of a theory that provides a logical platform for the development of significant portions of mathematics. These commitments are, loosely speaking, all the restrictions on the ways the world might be, which we should accept given that we accept all the basic principles of the theory.
The notion of commitments lies at the core of many contentious debates in contemporary formal philosophy (for example, it has been involved in the debate over deflationism or over the role of classical logic in mathematical reasoning). Here we restrict ourselves to discussing two specific types of commitments:
• Epistemic commitments. Given that we accept the axioms and the inference rules of the theory Th, we should also accept some other statements in the language of Th (or possibly, statements in the language of Th extended with the truth predicate). These additional statements are the epistemic commitments of Th.
• Semantic commitments. Given that we accept the axioms and the deductive machinery of the theory Th, some conclusions about possible interpretations of Th should also be accepted. These conclusions are the semantic commitments of Th.
The most direct examples of epistemic commitments are simply the theorems of Th. However, there is also an interesting category of implicit commitments discussed in the literature. It has been claimed (notably, by Feferman 1991) that given your acceptance of a mathematical theory Th, you should also accept some essentially new sentences, unprovable in Th. A typical example of such a sentence is the consistency statement for Th. Another example is the statement that all theorems of Th are true.
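In symbols, the two examples just mentioned are, for a theory Th with provability predicate Pr_Th and a truth predicate T:

Con(Th): ¬Pr_Th(⌜0 = 1⌝)
Global reflection: ∀φ (Sent_Th(φ) → (Pr_Th(φ) → T(φ)))

The former is unprovable in any consistent, sufficiently strong Th by Gödel's second incompleteness theorem; the latter is not even expressible without the truth predicate.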
Semantic commitments differ from the epistemic ones in that we do not require that they can be described in the language of Th (even after adding the truth predicate to this language). A typical description of such commitments involves explaining how our specific choice of axioms restricts the class of possible models of Th.
A classical paper on the commitments of axiomatic theories is (Feferman 1991), where a method is described permitting us to generate “larger and larger systems whose acceptability is implicit in acceptance of the starting theory”. In this context, several authors emphasised the role of truth, claiming that a proper characterisation of a set of commitments of Th should take the form of a theory of truth built over Th (cf. (Ketland 2005)). We will propose a different strategy, initiated in (Cieśliński 2017), which explains epistemic commitments in purely epistemic terms. In particular, the non-epistemic notion of truth will not play any essential part in the proposed explanation.
References
1. Cezary Cieśliński. The Epistemic Lightness of Truth. Deflationism and its Logic, Cambridge University Press, 2017.
2. Solomon Feferman. Reflecting on incompleteness. The Journal of Symbolic Logic, 56: 1–49, 1991.
3. Jeffrey Ketland. Deflationism and the Gödel phenomena: reply to Tennant. Mind, 114: 75–88, 2005.
ABSTRACT. The aim of this talk is to present some recently discovered connections between various sets of axioms for the truth predicate for the language of arithmetic and the arithmetical consequences of the resulting theories of truth. We focus exclusively on truth theories built over Peano Arithmetic (PA). One of the most important theoretical concepts in this study is the Tarski Boundary: the "line" demarcating the conservative extensions of PA from the non-conservative ones. (Recall that a theory of truth Th is a conservative extension of PA if its set of arithmetical consequences coincides with that of PA.) By definition, theories of truth below this line are those which generate no new epistemic commitments of PA.
The first part of our talk will be devoted to results concerning extensions of the basic compositional theory of truth for PA, CT^-(PA), whose axioms are simply Tarski's inductive truth conditions written in the arithmetical language extended with the truth predicate. (The theory is also known as CT↾, see [Halbach 2011]; both the "-" and the "↾" signs indicate the lack of induction axioms for formulae with the truth predicate.) Most importantly, we state the Many Faces Theorem, which shows that (surprisingly) many natural extensions of CT^-(PA), including extensions with reflection principles of various kinds as well as extensions with purely compositional principles, turn out to be equivalent to the theory CT_0, the extension of CT^-(PA) with induction for Delta_0 formulae with the truth predicate (see [Łełyk 2017]). This theory is in fact quite strong: its set of arithmetical consequences can be axiomatized by ω-many iterations of the uniform reflection scheme over PA, i.e. by REF^ω(PA) = ⋃_{n∈ω} REF^n(PA).
In the second part we turn to typed compositional theories of truth which do not prove that the truth predicate commutes with the negation sign. In particular, in a model of such a theory there might be sentences which are neither true nor false, or which are both true and false. We show that the Many Faces Theorem may fail in this context. More concretely, we show that principles which were equivalent over CT^-(PA) give, over the compositional theories of positive truth PT^-(PA) and WPT^-(PA) (the first theory is also known as PT↾, see [Halbach 2011]; the second is its variant based on the weak Kleene logic), not only different theories, but theories differing in their arithmetical consequences.
Having obtained some fairly natural finite axiomatizations of different arithmetical theories extending PA, we ask the complementary question: which arithmetical theories extending PA can be finitely axiomatized by a natural axiomatic theory of truth? The answer turns out to be particularly simple: we prove that every r.e. subtheory of REF^ω(PA) can be axiomatized by an extension of CT^-(PA) with a sentence of the form
"Every sentence from δ is true",
where δ(x) is a Delta_1 formula which defines an axiomatization of PA that is proof-theoretically reducible to the standard axiomatization of PA with the induction scheme. We discuss the applications of this result to the discussion of implicit commitments in [Dean 2015] and [Nicolai, Piazza 2018].
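In the notation above, one standard way of spelling out the iterated reflection hierarchy is:

REF(T) = T + { ∀x (Pr_T(⌜φ(ẋ)⌝) → φ(x)) : φ(v) a formula of the language of T }
REF^0(PA) = PA,  REF^{n+1}(PA) = REF(REF^n(PA)),  REF^ω(PA) = ⋃_{n∈ω} REF^n(PA)

where Pr_T is the standard provability predicate of T and ẋ the numeral of x; this is the sense in which the arithmetical consequences of CT_0 are said above to be axiomatized by ω-many iterations of uniform reflection.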
References:
[Dean 2015] Arithmetical reflection and the provability of soundness, Philosophia Mathematica, 23, 31-64.
[Halbach 2011] Axiomatic theories of truth, Cambridge University Press.
[Łełyk 2017] Axiomatic theories of truth, bounded induction and reflection principles, PhD thesis, available at https://depotuw.ceon.pl/bitstream/handle/item/2266/3501-DR-FF-151176.pdf?sequence=1
[Nicolai, Piazza 2018] The implicit commitment of arithmetical theories and its semantic core, Erkenntnis.
John Huss (The University of Akron, United States)
Tool-driven science
ABSTRACT. Research on the human microbiome has generated a staggering amount of sequence data, revealing variation in microbial diversity at the community, species, and genomic levels. In order to make this complexity more manageable and easier to interpret, new units—the metagenome, core microbiome, and enterotype—have been introduced in the scientific literature (Arumugam et al. 2011). Here, I argue that analytical tools and exploratory statistical methods, coupled with a translational imperative, are the primary drivers of this new ontology. By reducing the dimensionality of variation in the human microbiome, these new units render it more tractable and easier to interpret, and hence serve an important heuristic role. Nonetheless, there are several reasons to be cautious about these new categories prematurely "hardening" into natural units: a lack of constraints on what can be sequenced metagenomically, freedom of choice in taxonomic level in defining a "core microbiome," typological framing of some of the concepts, and possible reification of statistical constructs (Huss 2014). Slight differences in seemingly innocuous methodological decisions, such as which distance metric or which taxonomic rank to use, can result in radically different outcomes (Koren et al. 2013). In addition, the existence of tools to study microbiota at the community level through metagenomic and other "holistic" approaches has led to a presumption that causal explanation is also to be found at the community level (O'Malley and Skillings 2018). The general phenomenon is that within microbiome research, and perhaps more generally, the tools of investigation leave an imprint on the resulting knowledge products that can be mistaken as stemming from the features of the natural system under study (Juengst and Huss 2009). As a partial corrective to this, I argue for a return to the robustness analysis of William Wimsatt (1981) in which failures of robustness are used to localize the factors that natural invariances depend upon. A further methodological improvement would be a return to hypothesis-driven research.
Arumugam, M., Raes, J., Pelletier, E., Le Paslier, D., Yamada, T., Mende, D.R., Fernandes, G.R., Tap, J., Bruls, T., Batto, J.M. and Bertalan, M., 2011. Enterotypes of the human gut microbiome. Nature 473(7346): 174.
Huss, J., 2014. Methodology and ontology in microbiome research. Biological theory 9(4): 392-400.
Juengst, E. and Huss, J., 2009. From metagenomics to the metagenome: Conceptual change and the rhetoric of translational genomic research. Genomics, Society and Policy, 5(3): 1-19.
Koren O, Knights D, Gonzalez A et al., 2013. A guide to enterotypes across the human body: meta-analysis of microbial community structures in human microbiome datasets. PLoS Computational Biology 9:e1002863. doi:10.1371/journal.pcbi.1002863
O’Malley, M.A. and Skillings, D.J., 2018. Methodological Strategies in Microbiome Research and their Explanatory Implications. Perspectives on Science 26(2): 239-265.
Wimsatt WC (1981) Robustness, reliability and overdetermination. In: Brewer MB, Collins BE (eds) Scientific inquiry in the social sciences. Jossey-Bass, San Francisco, pp 123–162.
ABSTRACT. According to the traditional view, science and technology are definitely different from each other: while the former deals with facts, the latter deals with artifacts. Because of the radical changes in recent forms of technological and scientific practice, the validity of this traditional position has become uncertain, and a new view has emerged: to rethink science-technology relations using the concept of technoscience.
The term “technoscience” became an essential component of discussions on the science-technology-society complex following the appearance of Latour’s Science in Action (Latour 1987). In the last decades a number of studies have been published on the history, philosophy and sociology of technoscience, e.g. by Latour, Ihde, Barnes, Klein, Pickstone, Nordmann, Bensaude-Vincent and others (see the references below).
In this lecture I try to contribute to this discussion by introducing the concept of philoscience as an alternative to the concept of technoscience. While the concept of technoscience expresses the entanglement of the traditional forms of science and technology in a given socio-historical environment, the concept of philoscience expresses the entanglement of science and philosophy in a given socio-historical environment.
On the account proposed in this lecture, all science is technoscience in any of its historical forms; there is no science without technological components. On the other hand, all science is at the same time philoscience in any of its historical forms as well; there is no science which does not include philosophical components.
When we speak about “science”, unqualified, this inner structure of scientific knowledge remains obscured. It is always a fusion of technological and philosophical components that results in the formation of a “scientific matter”, i.e., a concrete socio-historical form of science. The relative weight of technological and philosophical components in the mixture, and the level of their integration are challenges to be taken up by the history and philosophy of science and technology, and by further studies on the interrelatedness of technology, science, and philosophy.
References
Barnes, B.: Elusive Memories of Technoscience, Perspectives on Science 13, 142–165, 2005
Bensaude-Vincent, B. – Loeve, S. – Nordmann, A. – Schwarz, A.: Matters of Interest: The Objects of Research in Science and Technoscience, Journal of General Philosophy of Science 42, 365–383, 2011
Carrier, M. – Nordmann, A. (eds.): Science in the Context of Application. Boston Studies in the Philosophy of Science 274, Dordrecht: Springer, 2011
Ihde, D. and Selinger, E. (eds.): Chasing Technoscience. Matrix for Materiality. Bloomington & Indianapolis: Indiana U. P. 2003
Klein, U.: Technoscience avant la lettre, Perspectives on Science 13, 226–266, 2005
Latour, B.: Science in Action. How to Follow Scientists and Engineers Through Society. Milton Keynes: Open University Press, 1987
Nordmann, A. – Radder, H. – Schiemann, G. (eds.): Science Transformed? Debating Claims of an Epochal Break. Pittsburgh: University of Pittsburgh Press, 2011
Pickstone, J.: Ways of Knowing. A New History of Science, Technology and Medicine, Chicago: University of Chicago Press, 2001
Weber, J.: Making worlds: epistemological, ontological and political dimensions of technoscience. Poiesis & Praxis 7, 17–36, 2010
Mirko Engler (Humboldt-Universität zu Berlin, Germany)
Generalized Interpretability and Conceptual Reduction of Theories
ABSTRACT. As a general notion in the reduction of formal theories, we investigate d-dimensional relative interpretations. In particular, we ask for a necessary condition under which these interpretations can be called “conceptual reductions”. Throughout the talk, we establish a condition on the subdomains of f-definable models of models of interpreting theories. Due to this condition and a theorem of Swierczkowski (1990), it can be shown that there is no d-dimensional relative interpretation of d-dimensional (d > 1) Euclidean geometry into RCF which satisfies the property of being a conceptual reduction in our sense. A similar result can be obtained with Pillay (1988) for interpretations of ACF_0 into RCF.
At first, we give a characterization of d-dimensional relative interpretability in terms of models, which extends results of Montague (1965) and Hájek (1966). This characterization tells us that, for a d-dimensional relative translation f from L[S] to L[T], if f is a d-dimensional relative interpretation of S in T, then the f-definable models of T-models are all S-models. But more importantly, if these f-definable T-models form a subset of Mod(S), then f is a d-dimensional relative interpretation of S in T. From here it seems natural to ask which interpretations f we can single out if we require their f-definable T-models to form a particular subset of Mod(S). Especially, it will be interesting to think of conditions which restrict the class of interpretations to those which could be called “conceptual reductions”.
Already by definition, an f-definable T-model A can share objects of its domain with the domain of a T-model B only if T allows B to include d-tuples of elements of A, which is not given in general. But for the question of conceptual change, this seems to be an important feature, since by some predominant semantic intuitions, the meaning of concepts will be determined (in one way or another) by denotations. With this in mind, we suggest a condition for the domain of f-definable models which enables us in a model A to talk about the same objects as in the model from which A was defined.
We are aware that this condition seems unusual and deserves more justification, since in first-order model theory we are used to talking about models of theories solely as models in a certain isomorphism class, for well-known reasons. Certainly, for any f-definable model A of B with a domain different from that of B, we can always build an isomorphic copy i(A) of A having a subdomain of B. Nevertheless, we will see that our situation here is slightly different, since we are not considering models of first-order theories in general, but f-definable models of models of first-order theories. This changes the situation to the extent that, as we prove, an isomorphic copy i(A) of A might not itself be an f-definable model of B and, even further, might not be an f-definable model of B in Mod(T) for any f that is an interpretation in T.
Finally, we discuss the consequences of our condition for the famous reduction of Euclidean geometry to RCF and related results. We argue that our formal conclusions are supported by intuitions relevant in mathematical practice.
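For readers outside the area, a rough sketch of the standard notions involved (one-dimensional case for readability; the d-dimensional case replaces single variables by d-tuples, and equality may also be translated by a formula): a relative translation f from L[S] to L[T] consists of a domain formula δ(x) and, for each n-ary relation symbol R of L[S], a formula R^f(x_1, …, x_n) of L[T]; the translation commutes with the connectives and relativizes quantifiers to δ. Then:

f is a relative interpretation of S in T iff T ⊢ ∃x δ(x) and T ⊢ A^f for every axiom A of S;
for B ⊨ T, the f-definable model of B is the L[S]-structure with domain {b ∈ B : B ⊨ δ(b)} and with relations given by the formulas R^f evaluated in B.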
References:
Hájek, P. (1966) Generalized interpretability in terms of models. Časopis pro pěstování matematiky, 91(3) : 352-357.
Montague, R. (1965). Interpretability in terms of models. Indag. Math. XXVII : 467-476.
Pillay, A. (1988). On groups and fields definable in o-minimal structures. Journal of Pure and Applied Algebra, 53 : 239 - 255.
Swierczkowski, S. (1990). Interpretations of Euclidian Geometry. Transactions of the Amer. Math. Soc., 322(1) : 315 - 328.
15:45
Mateusz Radzki (The Maria Grzegorzewska Pedagogical University, Poland)
The Tarski equipollence of axiom systems
ABSTRACT. In the present paper, we examine the notion of the equipollence of axiom systems provided by Alfred Tarski.
According to Tarski, two axiom systems are equipollent, if ‘each term of the first system can be defined by means of the terms of the second together with terms taken from preceding theories, and vice versa’ [6, p. 122]. Consequently, such axiom systems are interchangeable, i.e., as Tarski writes, one can replace the first axiom system with the second one, and vice versa [6, p. 121].
However, we demonstrate that two axiom systems for the classical propositional calculus which are equipollent in the sense of Tarski do not share the same meta-logical properties, and, consequently, are not interchangeable.
We take into consideration the well-known axiom system for the classical propositional calculus attributed to Jan Łukasiewicz. The Łukasiewicz axiom system (hereinafter AL) contains, together with the rule of detachment, the following axiom schemata: 1. A→(B→A), 2. (A→(B→C))→((A→B)→(A→C)), 3. (~A→~B)→(B→A) (cf. [2, p. 461; 5, p. 6]). It is proved that AL is semantically complete [1, pp. 109-110; 3, pp. 96-116; 4, p. 42].
Then, we examine an axiom system (hereinafter AL’) that is constructed out of negation, implication and disjunction, and which is, by the mutual definitions of the propositional connectives, equipollent in the sense of Tarski to AL. AL’ contains, together with the rule of detachment, the following axiom schemata: 1’. A→(~B∨A), 2’. (~A∨~B∨C)→(~(~A∨B)∨~A∨C), 3’. (~~A∨~B)→(~B∨A).
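The schemata 1’-3’ appear to arise from 1-3 by keeping the outermost implication of each axiom and translating its antecedent and consequent via the definition A→B := ~A∨B, applied recursively:

1. A→(B→A) becomes A→(~B∨A)
2. (A→(B→C))→((A→B)→(A→C)) becomes (~A∨(~B∨C))→(~(~A∨B)∨(~A∨C)), i.e. (~A∨~B∨C)→(~(~A∨B)∨~A∨C)
3. (~A→~B)→(B→A) becomes (~~A∨~B)→(~B∨A)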
Hence, if AL is complete, and if AL is interchangeable with the introduced AL’, then AL’ is supposed to be complete as well. However, we prove, both by means of a proof-theoretic and a model-theoretic independence proof (cf. [3, pp. 122-125]), that there is a tautological schema which is independent from {1’, 2’, 3’}. Therefore, we demonstrate that not all formulae of the form of the examined schema are provable in AL’. Consequently, AL’ is incomplete.
Therefore, either AL, contrary to the common point of view, is incomplete, or two axiom systems equipollent in the sense of Tarski do not share the same meta-logical properties (the first axiom system is complete and the second one is incomplete) and, consequently, are not interchangeable.
REFERENCES
[1] A. CHURCH, Introduction to Mathematical Logic, Princeton University Press, Princeton, New Jersey, 1996.
[2] J. L. HEIN, Discrete Structures, Logic and Computability, Jones & Bartlett Learning, Burlington, 2017.
[3] G. HUNTER, Metalogic. An Introduction to the Metatheory of Standard First Order Logic, University of California Press, Berkeley, Los Angeles, 1971.
[4] E. MENDELSON, Introduction to Mathematical Logic, Chapman & Hall, London, 1997.
[5] Y. NIEVERGELT, Logic, Mathematics and Computer Science. Modern Foundations with Practical Applications, Springer, New York, Heidelberg, Dordrecht, London, 2015.
[6] A. TARSKI, Introduction to Logic and the Methodology of Deductive Sciences, Oxford University Press, 1994.
A Notion of Semantic Uniqueness for Logical Constants
ABSTRACT. The demarcation problem for the logical constants is the problem of deciding which expressions of a language to count as part of the logical lexicon, and for what reason. Inferentialist approaches to meaning hold that the inferential behaviour of an expression is meaning-constitutive, and that logical expressions are special in that (among other things) the rules that govern their behaviour uniquely determine their meaning. Things become more complicated when properly semantic (model-theoretic) considerations enter the picture, yet the notion of a consequence relation or a set of rules uniquely determining the meaning of a constant is gaining currency among semantic approaches to the question of logical constanthood as well. In this talk we would like to explore the issues a proponent of a semantic criterion of logicality will encounter when adopting the inferentialist conception of uniqueness, and what a properly semantic notion of uniqueness for logicality could look like.
The notion of uniqueness gained importance in the inferentialist approach to the meaning of the logical constants as a natural complement to the constraint of conservativity, which rules out Prior's defective connective tonk (Prior 1960, Belnap 1962), and as cohering with the 'rules-as-definitions' approach pioneered by Gentzen (Gentzen 1934). Identifying the meaning of a logical constant with its inferential role, the demand of uniqueness, in its simplest form, amounted to the requirement that, for constants c, c' obeying identical collections of rules, only synonymous compounds can be formed, i.e. that in a language containing both c and c' we have (UC): c(A_1, ..., A_n) ⊣⊢ c'(A_1, ..., A_n) for all A_1, ..., A_n.
Shifting perspective to a truth-theoretic framework, in which the meaning of a logical constant is given by a model-theoretic object, gives rise to interesting issues in the implementation of this kind of notion of uniqueness. For not only are semantic values underdetermined by (single-conclusion) proof-rules (a set of results collectively termed Carnap's Problem; cf. (Carnap 1943)), it is moreover not immediately clear in what way (UC) is to be 'semanticized'. Different ways of conceiving of such a semantic analogue to (UC) can be found in the literature (cf. (Bonnay & Westerstahl 2016), (Feferman 2015), (Zucker 1978)), but a comprehensive comparison and assessment of their relative merits is still outstanding. This is somewhat surprising given the central role the notion of unique determinability by rules plays in approaches to the nature of the logical constants (cf. (Hacking 1979), (Hodes 2004), (Peacocke 1987)). This talk aims to investigate and compare some of the different ways in which (UC) could be adapted to a model-theoretic framework, and to assess the adequacy of these different implementations for the demarcation problem of the logical constants.
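As a toy, purely truth-functional illustration of unique determination (our own simplification, not the framework discussed in the talk): among all sixteen binary Boolean connectives, only classical conjunction satisfies the standard and-rules read semantically, namely that c(A,B) entails A, c(A,B) entails B, and A together with B entails c(A,B). A brute-force check in Python:

from itertools import product

# Enumerate all 16 binary Boolean truth-functions as dicts {(a, b): value}.
def all_binary_connectives():
    inputs = list(product((0, 1), repeat=2))
    for outputs in product((0, 1), repeat=4):
        yield dict(zip(inputs, outputs))

def satisfies_conj_rules(c):
    """Semantic reading of the and-rules: c(A,B)=1 forces A=1 and B=1; A=1, B=1 forces c(A,B)=1."""
    elim = all(not c[(a, b)] or (a and b) for a, b in c)
    intro = c[(1, 1)] == 1
    return elim and intro

matching = [c for c in all_binary_connectives() if satisfies_conj_rules(c)]
print(len(matching))                                          # 1: the rules pin down a unique truth-function
print(matching[0] == {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1})   # True: classical conjunction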
Belnap, N., "Tonk, Plonk and Plink." Analysis 22 (1962): 130-134.
Bonnay, D. and D. Westerstahl. "Compositionality Solves Carnap's Problem." Erkenntnis 81 (2016): 721-739.
Carnap, R., Formalization of Logic. Harvard University Press, 1943.
Feferman, S. "Which Quantifiers Are Logical? A Combined Semantical and Inferential Criterion." Quantifiers, Quantifiers, and Quantifierers: Themes in Logic, Metaphysics and Language. Ed. Alessandro Torza. Springer, 2015. 19-31.
Hacking, I., "What is Logic?." Journal of Philosophy 76 (1979): 285-319.
Hodes, H.T., "On The Sense and Reference of A Logical Constant." Philosophical Quarterly 54 (2004): 134-165.
Humberstone, L., The Connectives. MIT Press, 2011.
Peacocke, C., "Understanding Logical Constants: A Realist's Account." Studies in the Philosophy of Logic and Knowledge. Ed. T. J. Smiley and T. Baldwin. Oxford University Press, 2004. 163-209.
Prior, A.N., "The Runabout Inference-Ticket." Analysis 21 (1960): 38-39.
Zucker, J.I., "The Adequacy Problem for Classical Logic." Journal of Philosophical Logic 7 (1978): 517-535.
ABSTRACT. In this paper we put forward a proposal to develop a logic of fictions within the game-theoretical approach of dialogical pragmatism. Starting from one of the main criticisms directed at classical logic, the structural schizophrenia of its semantics (Lambert, 2004: 142-143; 160), we review the ontological commitments of the two main traditions of logic (Aristotle and Frege) in order to highlight their limits concerning the analysis of fictional discourse, and how these limits can be overcome from a pragmatic game perspective.
In the specialized literature, we can often find objections against the presumed explanatory power of logic and formal languages in relation to fictional discourse. Generally, the target of such criticisms is the role that the notion of reference plays in the logical analysis of fiction; in our view, a more pragmatic account would be preferable. We respond to this objection by affirming that, if we elaborate an adequate context of analysis, a properly pragmatic treatment of fiction in logic is possible without the restrictions imposed by the notion of reference. Dialogical logic, which treats arguments as an interactive chain of questions and answers, offers an ideal analytical context for such a pragmatic approach.
In this sense, we believe in the richness of the perspective of dialogical logic, which captures existence through the interactional concept of choice and whose semantics, based on the concept of use, can be called pragmatic semantics.
References:
Aristóteles (1982), “Categorías”, Tratados de lógica (Órganon) I. Madrid: Gredos.
Aristóteles (1995), “Sobre la interpretación”, Tratados de lógica (Órganon) II. Madrid: Gredos.
Frege, Gottlob (1884), Die Grundlagen der Arithmetik. Eine logisch mathematische Untersuchung über den Begriff der Zahl. Breslau: Verlag von Wilhelm Koebner.
Frege, Gottlob (1948), “Sense and Reference”, The Philosophical Review, Vol. 57, No. 3 (May, 1948); pp. 209-230.
Frege, Gottlob (1983), Nachgelassene Schriften, edited by H. Hermes, F. Kambartel and F. Kaulbach. Hamburg: Felix Meiner Verlag.
Frege, Gottlob (1879), Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens, Halle a. S.: Louis Nebert.
Lambert, Karel (1960), “The Definition of E! in Free Logic”, Abstracts: The International Congress for Logic, Methodology and Philosophy of Science. Stanford: Stanford University Press.
Lambert, Karel (2004), Free Logic. Selected Essays. Cambridge: Cambridge University Press.
Lambert, Karel (1997), Free Logics: Their Foundations, Character, and Some Applications Thereof. ProPhil: Projekte zur Philosophie, Bd. 1. Sankt Augustin, Germany: Academia.
Priest, Graham (2005), Towards Non-Being. The logic and Metaphysics of Intentionality. Oxford: Clarendon Press.
Rahman, S. (2001), “On Frege’s Nightmare. A Combination of Intuitionistic, Free and Paraconsistent Logics”, in H. Wansing, ed., Essays on Non-Classical Logic, River Edge, New Jersey: World Scientific; pp. 61-85.
Rahman, S. & Fontaine, M. (2010), “Fiction, Creation and Fictionality: An Overview”, Revue Methodos (CNRS, UMR 8163, STL). Forthcoming.
Rahman, S. & Keiff, L. (2004), “On how to be a dialogician”, in D. Vanderveken, ed., Logic, Thought and Action, Dordrecht: Springer; pp. 359-408.
Reicher, M. (2014), “Nonexistent Objects”, Stanford Encyclopedia of Philosophy: https://plato.stanford.edu/entries/nonexistent-objects/. Accessed: December 2016.
Semantic interoperability: The oldest challenge and newest frontier of Big Data
ABSTRACT. A key task for contemporary data science is to develop classification systems through which diverse types of Big Data can be aligned to provide common ground for data mining and discovery. These systems determine how data are mined and incorporated into machine learning algorithms; which claims – and about what – data are taken as evidence for; whose knowledge is legitimised or excluded by data infrastructures and related algorithms; and whose perspective is incorporated within data-driven knowledge systems. They thus inform three key aspects of data science: the choice of expertise and domains regarded as relevant to shaping data mining procedures and their results; the development and technical specifications of data infrastructures, including what is viewed as the essential knowledge base for data mining; and the governance of data dissemination and re-use through such infrastructures. The challenge of creating semantically interoperable data systems is well known and has long plagued the biological, biomedical, social and environmental sciences, where the methods and vocabulary used to classify data are often finely tailored to the target systems at hand and thus tend to vary across groups working on different organisms and ecosystems. A well-established approach to this challenge is to identify and develop one centralised system, which may serve as a common standard regardless of the specific type of data, mining tools, learning algorithms, research goals and target systems in question. However, this has repeatedly proved problematic for two main reasons: (1) agreement on widely applicable standards unavoidably involves loss of system-specific information that often turns out to be of crucial importance to data interpretation; and (2) the variety of stakeholders, data sources and locations at play inevitably results in a proliferation of classification systems and increasing tensions among different interest groups around which system to adopt and impose on others. Taking these lessons into account, this paper takes some steps towards developing a conceptual framework through which different data types and related infrastructures can be linked globally and reliably for a variety of purposes, while preserving as much as possible the domain- and system-specific properties of the data and related metadata. This enterprise is a test case for the scientific benefits of epistemic pluralism, as advocated by philosophers such as John Dupré, Hasok Chang, Ken Waters and Helen Longino. I argue that “intelligent data linkage” consists of finding ways to mine diverse perspectives and methods of inquiry, rather than of overcoming and controlling such diversity.
ABSTRACT. In this talk, I am going to discuss differences between the ways in which data for Big Data analysis are gathered in a business context and in the Life Sciences, especially in medical biology projects. Since both the size and the complexity of experimental projects in the life sciences vary, I would like to focus on big interdisciplinary projects that usually combine different testing methods.
In business, the process usually starts with collecting as much information as possible. Only then do people try to determine what can be inferred from the data, forming the assumptions upon which the subsequent analysis is carried out. In the Life Sciences the operating model is different: it starts with planning what information a scientist needs to collect in order to answer the scientific question. Moreover, scientists usually have a limited budget for their broad experimental projects, and collecting each and every piece of information costs. For that reason the scope of the collected information, as well as the type and size of the study group, should be carefully planned and described. Furthermore, in the medical sciences cooperation between a number of medical and scientific units is crucial. Because of that, one often has to deal with data collected by different teams, using various methods and different storage formats (not all of them digital). Thus, data in the life sciences are not only big, varied and valuable, but also tend to occupy a lot of space in laboratories and archives.
Only recently have scientists gained access to high-throughput genomic technologies that enable the analysis of whole genomes or transcriptomes originating from multiple samples. Now they are able to correlate these data with phenotypic data such as biochemical markers, imaging, medical histories, etc.
Some of the challenges in this endeavor are choosing the best measurement methods that can be used by different people or teams and collecting the most reliable data. Later comes the problem of digitizing the results of measurements and combining them with the other data. Furthermore, genomic experiments tend to produce huge files of raw data that need to be analyzed using specific algorithms. It is not obvious what should be done with these raw data after the analysis. Should they be saved, because there is a chance of a better analysis algorithm in the future? Should they be deleted, to make room for future data? Should they be shared in some commonly accessible databases?
Life Science is developing rapidly, bringing about spectacular discoveries. Yet scientists are often afraid of Big Data, even though they deal with it very often. In my opinion there is a need for a discussion resulting in the development of guidelines and standards for collecting diverse types of scientific data and for combining and analyzing them in a way that maximizes the reliability of results.
17:45
Jens Ulrik Hansen (Department of People and Technology, Roskilde University, Denmark)
Philosophizing on Big Data, Data Science, and AI
ABSTRACT. To whom does the concept of “Big Data” belong? Is Big Data a scientific discipline, or does it refer to datasets of a certain size? Does Big Data best refer to a collection of information technologies, or is it a revolution within modern businesses? Certainly, “Big Data” is a buzzword used by many different people, businesses, and organizations to mean many different things.
Similar considerations can be made about the concepts of “Data Science” and “AI”. Within academia, Data Science has, on several occasions, been used to refer to a “new” science that mixes statistics and computer science. Another use of the term is for what “Data Scientists” (mainly in industry) are doing (which might differ). Likewise, the term “AI” has been used to refer to the study of Artificial Intelligence as a scientific discipline. However, AI is also the new buzzword within industry, though here AI might better be translated as “Automated Intelligence”. Within industry, AI is essentially the same as what “Big Data” used to refer to; however, the focus has moved towards how models can be embedded in applications that automatically make decisions, instead of just deriving insights from data.
Why are the different usages of these concepts relevant? On the one hand, if we want our science and philosophy to matter and to have relevance beyond academia, it does matter how the concepts are used outside academia, in the mainstream public and the business world. On the other hand, there is a much stronger sense in which the usage and different meanings of the concepts matter: they matter to our philosophy. For instance, if we want “Philosophy of Big Data” to be about the ethics of automatic profiling and fraud detection used in welfare, health and insurance decisions, the dataset sizes and information technologies used do not really matter. What matters instead is how data about individuals are collected and shared, how biases in data transfer to biases in the predictions of machine learning models, how predictive models are embedded in services and applications, and how these technologies are implemented in private and public organizations. Furthermore, if by “Philosophy of Big Data” we are interested in the epistemological consequences of Big Data, it is again other aspects that are central.
In this talk I will therefore argue for abandoning the usage of terms like “Philosophy of Big Data”, “Philosophy of Data Science”, “Philosophy of AI”, etc. Instead I suggest that we, as philosophers, paint a much more nuanced picture of a wide family of related concepts and technologies connected to Big Data, Data Science, AI, and their cousins such as “Cognitive Computing”, “Robotics”, “Digitalization”, and “IoT”.
Ideals, idealization, and a hybrid concept of entailment relation
ABSTRACT. The inescapable necessity of higher-type ideal objects, which more often than not are “brought into being” by one of the infamously elegant combinations of classical logic and maximality (granted by principles such as those going back to Kuratowski and Zorn), is, it may justly be argued, a self-fulfilling prophecy. Present-day classical mathematics thus finds itself at times clouded by strong ontological commitments. But what is at stake here is pretense, and techniques as multifarious as the ideal objects they are meant to eliminate have long borne witness to the fact that unveiling computational content is anything but a futile endeavor.
Abstract entailment relations have come to play an important role, most notably the ones introduced by Scott [6], which have subsequently been brought into action in commutative algebra and lattice theory by Cederquist and Coquand [3]. The utter versatility of entailment relations notwithstanding, some potential applications, e.g., with regard to injectivity criteria like Baer's, seem to call for yet another concept, one that allows for arbitrary sets of succedents (rather than the usual finite ones) but maintains the conventional concept's simplicity.
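For orientation, the conventional finitary notion alluded to above is commonly presented (a standard rendering, not a quotation from [3] or [6]) as a relation $\vdash$ between finite subsets $U, V$ of the underlying set, governed by reflexivity, monotonicity and transitivity:
\[
\frac{}{U \vdash V}\;(U \cap V \neq \emptyset)
\qquad
\frac{U \vdash V}{U, U' \vdash V, V'}
\qquad
\frac{U \vdash V, a \quad\ U, a \vdash V}{U \vdash V}
\]
The development sketched in the talk keeps finite antecedents but allows arbitrary succedents, with transitivity adjusted accordingly.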
In this talk, we discuss a possible development according to which an entailment relation is understood (within Aczel's constructive set theory) as a class relation between finite and arbitrary subsets of the underlying set, with the governing rules, e.g., transitivity, suitably adjusted. At the heart of our approach we find van den Berg's finitary non-deterministic inductive definitions [2], on top of which we consider inference steps so as to give an account of the inductive generation procedure and cut elimination [5]. Carrying over the strategy of Coquand and Zhang [4] to our setting, we associate set-generated frames [1] to inductively generated entailment relations, and relate completeness of the latter with the former's having enough points.
Once the foundational issues have been cleared, it remains to give evidence why all this might be a road worth taking in the first place, and we will do so by sketching several case studies, thereby revisiting the “extension-as-conservation” maxim, which in the past successfully guided the quest for constructivization in order theory, point-free topology, and algebra.
The intended practical purpose will be at least twofold: infinitary entailment relations might complement the approach taken in dynamical algebra and, sharing its aims, may ultimately contribute to the revised Hilbert programme in abstract algebra.
[1] P. Aczel. Aspects of general topology in constructive set theory. Ann. Pure Appl. Logic, 137(1–3):3–29, 2006.
[2] B. van den Berg. Non-deterministic inductive definitions. Arch. Math. Logic, 52(1–2):113–135, 2013.
[3] J. Cederquist and T. Coquand. Entailment relations and distributive lattices. In S. R. Buss, P. Hájek, and P. Pudlák, editors, Logic Colloquium ’98. Proceedings of the Annual European Summer Meeting of the Association for Symbolic Logic, Prague, Czech Republic, August 9–15, 1998, volume 13 of Lect. Notes Logic, pp. 127–139. A. K. Peters, Natick, MA, 2000.
[4] T. Coquand and G.-Q. Zhang. Sequents, frames, and completeness. In P. G. Clote and H. Schwichtenberg, editors, Computer Science Logic (Fischbachau, 2000), volume 1862 of Lecture Notes in Comput. Sci., pp. 277–291. Springer, Berlin, 2000.
[5] D. Rinaldi and D. Wessel. Cut elimination for entailment relations. Arch. Math. Logic, in press.
[6] D. Scott. Completeness and axiomatizability in many-valued logic. In L. Henkin, J. Addison, C.C. Chang, W. Craig, D. Scott, and R. Vaught, editors, Proceedings of the Tarski Symposium (Proc. Sympos. Pure Math., Vol. XXV, Univ. California, Berkeley, Calif., 1971), pp. 411–435. Amer. Math. Soc., Providence, RI, 1974.
ABSTRACT. Alongside the analogy between maximal ideals and complete theories, the Jacobson radical carries over from ideals of commutative rings to theories of propositional calculi. This prompts a variant of Lindenbaum's Lemma that relates classical validity and intuitionistic provability, and the syntactical counterpart of which is Glivenko's Theorem. Apart from perhaps shedding some more light on intermediate logics, this eventually prompts a non-trivial interpretation in logic of Rinaldi, Schuster and Wessel's conservation criterion for Scott-style entailment relations (BSL 2017 & Indag. Math. 2018).
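For reference, Glivenko's Theorem for the propositional calculus (quoted here only as standard background) states that
\[
\vdash_{\mathrm{CPC}} \varphi \quad\Longleftrightarrow\quad \vdash_{\mathrm{IPC}} \neg\neg\varphi ,
\]
i.e. classical provability of a propositional formula coincides with intuitionistic provability of its double negation.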
Michèle Friend (George Washington University, United States)
Disturbing Truth
ABSTRACT. The title is deliberately ambiguous. I wish to disturb what I take to be common conceptions about the role of truth held by philosophers of science and taxonomically inclined scientists. ‘Common’ is meant in the statistical sense: the majority of institutionally recognised philosophers of science and scientists.
The common view is that the various pronouncements of science are usually true, and a few are false. We eventually discover falsehoods and expel them from science. The purpose of scientific research is to discover new truths. The success of science consists in discovering truths because it is through them that we are able to make true predictions and, through technology, control and manipulate nature. Fundamental science consists in finding fundamental truths in the form of laws. These are important because they allow us to derive and calculate further truths. On this view, there are obvious problems with the use of defective information in science. We can address these problems in a piecemeal way. And the fact that defective information does not prevent scientists from carrying out their scientific enquiries suggests the piecemeal approach.
I shall present a highly idiosyncratic view which bypasses the problems of defective knowledge. The view is informed by what we might call “uncommon” scientists. According to the uncommon view, all "information" (formulas or purported truths of a theory) in science is false or defective. The purpose of their research is to understand the phenomena of science. Provisionally, we can think of ‘understanding’ as truth at a higher level than the level of the propositions of a theory. The success of science is measured by opening new questions and finding new answers, sometimes in the form of deliberately incorrect or counter-scientific information. Fundamental science does not stop with laws, but digs deeper into the logic, mathematics, metaphysics, epistemology or language of the science.
ABSTRACT. Quasi-truth is a mathematical approach to the concept of truth from a pragmatic perspective. It is said that quasi-truth is a notion of truth more suitable to current science as actually practiced; intuitively, it attempts to provide a mathematically rigorous approach to a pragmatic aspect of truth in science, where there is not always complete information about a domain of research and where it is not always clear that we operate with consistent information (see da Costa and French 2003 for the classical defense of those claims). In a nutshell, the formalism is similar to the well-known model-theoretic notion of truth, where truth is defined for elementary languages in set-theoretic structures. In the case of quasi-truth, the notion is defined in partial structures, that is, set-theoretic structures whose relations are partial relations. Partial relations, in their turn, are relations that are not defined for every n-tuple of objects of the domain of investigation. A sentence is quasi-true iff there is an extension of those partial relations to complete relations such that we find ourselves dealing again with typical Tarskian structures (and also with Tarskian truth; see Coniglio and Silvestrini (2014) for an alternative approach that dispenses with extensions, but not with partial relations). In this talk, we shall first present some criticism of the philosophical expectations that were placed on quasi-truth and argue that the notion does not deal as expected with defective situations in science: it fails to accommodate both incomplete and inconsistent information. Indeed, it mixes the two kinds of situations in a single approach, so that one ends up not distinguishing properly between cases where there is lack of information and cases where there is conflicting information. Secondly, we advance a more pragmatic interpretation of the formalism that suits it better. On our interpretation, however, there are no longer contradictions and no need to hold that the information is incomplete. Quasi-truth becomes much less revolutionary, but much more pragmatic.
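A minimal computational sketch of the notion just described (the domain, partial relation and test sentence are illustrative assumptions of ours, not taken from the references): a sentence is quasi-true in a partial structure when some completion of the partial relation makes it true in the ordinary Tarskian sense, which for a finite domain can be checked by brute force over all completions of the undetermined tuples.

from itertools import product

DOMAIN = [0, 1, 2]
R_PLUS  = {(0, 1)}          # tuples known to stand in R
R_MINUS = {(1, 1), (2, 2)}  # tuples known not to stand in R
UNKNOWN = [t for t in product(DOMAIN, repeat=2)
           if t not in R_PLUS and t not in R_MINUS]

def completions():
    """All total relations extending the partial relation (R_PLUS, R_MINUS)."""
    for bits in product((False, True), repeat=len(UNKNOWN)):
        yield R_PLUS | {t for t, b in zip(UNKNOWN, bits) if b}

def sentence(R):
    """Tarskian truth of the test sentence 'for every x there is a y with R(x, y)'."""
    return all(any((x, y) in R for y in DOMAIN) for x in DOMAIN)

quasi_true = any(sentence(R) for R in completions())
print(quasi_true)   # True: some completion satisfies the sentence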
References
Coniglio, M., and Silvestrini, L. H. (2014) An alternative approach for quasi-truth. Logic Journal of the IGPL 22(2) pp. 387-410.
da Costa, N. C. A. and French, S., (2003) Science and Partial Truth: A Unitary Approach to Models and Scientific Reasoning. Oxford: Oxford University Press.
ABSTRACT. While the presence of defective (inconsistent, conflicting, partial, ambiguous and vague) information in science tends to be naturally seen as part of the dynamics of scientific development, it is a fact that the greater the amount of defective information scientists have to deal with, the less justified they are in trusting such information. Nowadays scientific practice tends to use datasets whose size is beyond the ability of typical database software tools to capture, analyze, store, and manage (Manyika et al., 2011). Although much current scientific practice makes use of big data, and scientists have struggled to explain precisely how big data and machine learning algorithms actually work, they still rationally trust some significant chunks of what these datasets contain. The main question we address is: in the era of big data, how can we make sense of the continued trust placed by scientists in defective information in the sciences, consistently with ascribing rationality to them?
In order to respond to this question, we focus on the particular case of astrophysics as an exemplar of the use of defective information. In astrophysics, information of different types (such as images, redshifts, time series data, and simulation data, among others) is received in real time in order to be captured, cleaned, transferred, stored and analyzed (Garofalo et al. 2016). The variety of the sources and formats in which such information is received raises the problem of how to compute it efficiently, as well as the problem of high-dimensional data visualization, that is, how to integrate data that have hundreds of different relevant features. Since such datasets tend to increase in volume, velocity and variety (Garofalo et al. 2016), it becomes even harder to achieve any deep and exhaustive understanding of what they contain. However, this has not prevented astronomers from trusting important chunks of the information contained in such datasets.
We maintain that such trust is not irrational. First, we argue that, since astrophysics is an empirical science, the empirical adequacy of astronomical chunks of information plays an important role in their rational acceptance. Second, we contend that, despite their defectiveness, the chunks of information that astronomers trust are empirically adequate. In order to defend this, we appeal to a particular formulation of empirical adequacy (first introduced in Bueno, 1997) that relies on the resources of the partial structures framework to accommodate inconsistent, partial, ambiguous and vague information in the current scientific practice of astrophysics.
References
Bueno, O. (1997): “Empirical Adequacy: A Partial Structures Approach”, Studies in History and Philosophy of Science 28, pp. 585-610.
Garofalo, M., A. Botta and G. Ventre (2016): “Astrophysics and Big Data: Challenges, Methods, and Tools”, Astroinformatics (AstroInfo16), Proceedings IAU Symposium No. 325, pp. 1-4.
Manyika, J., Chui, M., Brown, B., et al. (2011): Big Data: The Next Frontier for Innovation, Competition, and Productivity. McKinsey Global Institute.
ABSTRACT. Many researchers claim that the role of argumentation is central in mathematics. Mathematicians do much more than simply prove theorems, and most of their proving activity might be understood as a kind of argumentation. Lakatos’ Proofs and Refutations is an enduring classic that highlights the role of dialogue between agents (a teacher and some students) through attempts at proofs and critiques of these attempts. The comparison between an argumentation supporting an assumption or a purported proof, on the one hand, and its proof, on the other, rests on the observation that a proof can be regarded as a specific kind of argumentation in mathematics.
Thus, argumentation theory can be used to explore certain aspects of the development of discovery proof-events in time. The concept of proof-event was introduced by Joseph Goguen [2001], who understood mathematical proof not as a purely syntactic object, but as a social event that takes place at a specific place and time and involves agents or communities of agents. Proof-events are sufficiently general concepts that can be used to study, besides “traditional” formal proofs, other proving activities, such as incomplete proofs, purported proofs or attempts to verify a conjecture.
Since argumentation is inseparable from the process of searching for a mathematical proof, we suggest a modified model of the proof-events calculus [Stefaneas and Vandoulakis 2015] that was used to represent discovery proof-events and their sequences, based on the versions of argumentation theories advanced by Pollock [1992], Toulmin [1993] and Kakas and Loizos [2016].
We claim that the exchange of arguments and counterarguments aimed at clarifying possible gaps or implicit assumptions that occur during a proof can be formally represented within this integrated framework. We illustrate our approach with the historical case of the sequence of proof-events leading to the proof of Fermat’s Last Theorem.
References
Goguen, Joseph, (2001), “What is a proof”, http://cseweb.ucsd.edu/~goguen/papers/proof.html.
Kakas Antonis, Loizos Michael, (2016), “Cognitive Systems: Argument and Cognition”. IEEE Intelligent Informatics Bulletin, 17(1): 15-16.
Pollock J. L. (1992), “How to reason defeasibly”. Artif. Intell., 57(1):1–42.
Stefaneas, P. & Vandoulakis, I. (2015), “On Mathematical Proving”. Computational Creativity, Concept Invention, and General Intelligence Issue. Journal of General AI, 6(1): 130-149.
Toulmin S. E. (1993). The use of arguments. Cambridge, Cambridge University Press.
17:15
Peter Vojtas (Charles University Prague, Czechia) Michal Vojtas (Salesian Pontifical University, Italy)
Problem Reduction as a general epistemic reasoning method
ABSTRACT. We introduce a general (epistemic) reasoning method based on problem reduction and show its use and discuss its justifiability in several disciplines.
To introduce our concept we rephrase and extend the question-answer approach of A. Blass [B]. We consider a problem (problem domain) P = (I, S, A) consisting of a set of problem instances I, a set of potential solutions S, and an acceptability relation A, where A(i,s) means that a particular solution s in S is acceptable for a problem instance i in I.
Problem reduction occurs, e.g., when a problem solution is delegated to an expert, when a client computation asks a server to do part of a job, or when an agent with limited resources asks an agent with richer knowledge.
Assume we have two problem domains P1 = (I1, S1, A1) (sometimes called the target domain) and P2 = (I2, S2, A2) (sometimes called the source domain). We consider a typical scenario: assume we are not able (or it is very complex or very expensive) to solve problem instances from I1. Moreover, assume there is a problem domain P2 where we have (efficient, cheaper) methods to find acceptable solutions. If we manage to efficiently reduce problem instances from I1 to problem instances of I2 in such a way that acceptable solutions in S2 can be transformed into acceptable solutions to the original problem instance, we are done. There is a wide space for what acceptability can mean: it can be, e.g., correct, reasonable, reliable, etc.
Problem reduction (PR). A reduction of a problem P1 to a problem P2 consists of a pair of mappings (r, t): r maps problem instances i1 in I1 of P1 to problem instances r(i1) in I2 of P2, and t maps solutions s2 in S2 to solutions t(s2) in S1, in such a way that an acceptable (in the sense of relation A2) solution s2 to the instance r(i1) is transferred to a solution t(s2) which is A1-acceptable for the original problem instance i1. Formally we require: for all i1 in I1 and s2 in S2,
A2(r(i1), s2) implies that A1(i1, t(s2)) holds true. (PRi)
Motivated by [Hr], we combine decision and search problems and assume that every set of solutions contains an extra element “nas” = “no_acceptable_solution”; we require the above implication to be valid also for s2 = nas2, with t(nas2) = nas1. This helps us avoid empty fulfillment of the implication (PRi) and preserves the category-theoretical character of problem reductions.
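As a toy formal check of condition (PRi) in Python (the concrete problems and mappings are our illustrative choice, not taken from the abstract): reduce P1 = "decide whether a decimal string denotes a multiple of 3" to P2 = "decide whether a natural number is a multiple of 3" via r = digit sum and t = identity, and verify the implication A2(r(i1), s2) implies A1(i1, t(s2)) by brute force over a finite sample of instances.

from itertools import product

# Illustrative toy reduction: divisibility by 3 of a decimal string via its digit sum.
I1 = ["0", "17", "123", "4002", "98"]          # finite sample of P1-instances
SOLUTIONS = ["yes", "no", "nas"]               # "nas" = no_acceptable_solution

def A1(i, s):   # acceptable answer for "is the decimal string i a multiple of 3?"
    return s == ("yes" if int(i) % 3 == 0 else "no")

def A2(i, s):   # acceptable answer for "is the number i a multiple of 3?"
    return s == ("yes" if i % 3 == 0 else "no")

def r(i1):      # reduce a P1-instance to a P2-instance: take the digit sum
    return sum(int(d) for d in i1)

def t(s2):      # transfer a P2-solution back to P1 (here: unchanged)
    return s2

# (PRi): A2(r(i1), s2) implies A1(i1, t(s2)), for all sampled i1 and all s2.
assert all(not A2(r(i1), s2) or A1(i1, t(s2))
           for i1, s2 in product(I1, SOLUTIONS))
print("(PRi) holds on the sample")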
Our approach generalizes analogical reasoning [A] in that we show that, besides similarity, it also works in some quite complementary situations.
Following [He], we can read the following question and answer:
SPIEGEL: And what now takes the place of Philosophy?
Heidegger: Cybernetics.
We will show that our general (epistemic) reasoning method based on problem reduction can be used to understand cybernetics as the modeling of dynamic processes with feedback. This may shed light on Heidegger's answer.
Another application comes from modeling and abstraction in the Systems Sciences. Inspired by Peter Senge [S], we propose correlating the depth of organizational analysis with the different types of possible actions (solutions). Reducing problem instances first from event analysis to the analysis of patterns of behavior, we finally reach systemic structure analysis (on the problem-instance side). Finding an acceptable generative action and transforming it back along the solution side to a responsive action and finally to a reactive action closes the use of our back-and-forth reduction and translation.
We also consider the use of the method in the management-by-objectives model in organizational envisioning and narration. These applications have been justified empirically, with a certain degree of acceptance of the solutions. Our reasoning also works under uncertainty.
Problem reduction itself, as a reasoning method, can be quite challenging (similarly to finding mathematical proofs). Nevertheless, we believe that the advantages of finding P2, r, t and proving the implication (PRi) for solving P1 are worth these difficulties.
[A] Paul Bartha, "Analogy and Analogical Reasoning", The Stanford Encyclopedia of Philosophy (winter 2016 Edition), Edward Zalta (ed.)
[B] Andreas Blass. Questions and Answers - A Category Arising in Linear Logic, Complexity Theory, and Set Theory. Advances in Linear Logic, eds. J.-Y. Girard et al. London Math. Soc. Lecture Notes 222(1995)61-81
[He] Martin Heidegger - The 1966 interview published in 1976 after Heidegger's death as "Only a God Can Save Us". Translated by William J. Richardson. Der Spiegel. 1976-05-31. pp. 193–219.
[Hr] Juraj Hromkovic. Why the Concept of Computational Complexity is Hard for Verifiable Mathematics. Electronic Colloquium on Computational Complexity, TR15-159
[S] Peter Michael Senge. The Fifth Discipline. Doubleday, New York 1990
Tableaux procedures for logics of consequential implication
ABSTRACT. Logics of Consequential Implication belong to the family of connexive logics, but Consequential Implication (CI) differs from standard connexive implication in some remarkable properties:
1) The Factor Law A CI B --> (A & C CI B & C), which is commonly accepted in connexive logic, does not hold or holds only in a weakened form
2) Boethius' Thesis belongs to the set of valid formulas in the weak form
(A CI B) --> ¬(A CI ¬ B) but not in the strong form (A CI B) CI ¬(A CI ¬ B)
3) A clear distinction is drawn between analytic and synthetic (i.e. context-dependent) conditionals, a distinction which is normally neglected in connexive logic
4) The operators for analytical consequential implication are intertranslatable with operators of monadic modal logic.
Point 4) makes it clear that, if a system X of analytical consequential implication is one-one translatable into a normal modal system X*, then X is decidable iff X* is decidable. If X* is tableaux-decidable, X is also tableaux-decidable thanks to the mentioned translation. A noteworthy fact is that monadic modal operators may be translatable into two or more operators of consequential implication. The two main definitions which may be provided are the following:
1) A --> B =df [](A -> B) & ([]B -> []A) & (<>B -> <>A)
2) A => B = df [](A -> B) & (<>B -> <>A)
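A small syntactic sketch in Python of the two translations just given (the tuple-based formula representation is our own; "box", "dia", "impl", "and" stand for [], <>, -> and &):

# Formulas as nested tuples: ("atom", "A"), ("impl", X, Y), ("and", X, Y),
# ("box", X), ("dia", X), ("ci", X, Y) for -->, ("ci2", X, Y) for =>.

def translate(f):
    """Rewrite consequential implications into the modal language, per definitions 1) and 2)."""
    tag = f[0]
    if tag == "atom":
        return f
    if tag in ("box", "dia"):
        return (tag, translate(f[1]))
    if tag in ("impl", "and"):
        return (tag, translate(f[1]), translate(f[2]))
    A, B = translate(f[1]), translate(f[2])
    strict = ("box", ("impl", A, B))
    nec    = ("impl", ("box", B), ("box", A))
    poss   = ("impl", ("dia", B), ("dia", A))
    if tag == "ci":    # A --> B  =df  [](A->B) & ([]B->[]A) & (<>B-><>A)
        return ("and", ("and", strict, nec), poss)
    if tag == "ci2":   # A => B   =df  [](A->B) & (<>B-><>A)
        return ("and", strict, poss)
    raise ValueError(tag)

print(translate(("ci", ("atom", "A"), ("atom", "B"))))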
Consequently, the decision procedures for -->-systems and =>-systems diverge in the reduction of the wffs to the modal language, even if they may coincide in the tableaux technique. The more interesting problem concerns the tableaux procedure for systems of synthetic consequential implication (a special case of which is counterfactual implication). Such an implication may be defined in terms of the analytic one in various ways, e.g.:
A > B =df *A => B
A >> B =df *A => B & ([]B ->[]A)
The operator * for "ceteris paribus" is subject to independent axioms and is in turn translatable into a simpler operator in this way:
*A =df w(A) & A ,
where w is a modal operator subject to the simple axiom
(w<>) <>A -> <>(w(A)& A)
The paper outlines the axiomatization of systems for analytic and synthetic CI-implication and shows how to define a tableaux procedure suitable for wffs containing > and >>.
The final result shows how the tableaux procedure for the axiomatic system of CI-implication can be converted into a completeness proof for it.
Understanding the chemical element: a reflection on the importance of the periodic table
ABSTRACT. This year we celebrate the 150th anniversary of the periodic table. This classification plays an important part in my PhD research, which studies the evolution of the concept of element during the century leading up to its publication. Seeing as our current understanding of the chemical element is often traced back to Mendeleev's work, my goal is to understand how his view of this concept could have been influenced by his peers and predecessors. The periodic table links my research to its present-day interests: the tremendous success of this classification accentuates the need for an understanding of the nature of that which is classified in it. When doing historical research, I use today's periodic table as a tool to understand the chemists' observations and reasoning from a modern chemical point of view. Often, I am astonished by how much of the knowledge presented in the table was already available in early 19th-century chemistry, and this has sparked a personal reflection on the collective nature of scientific knowledge. Rather than as a break with his predecessors, Mendeleev's table could be seen as the matured product of a period of collaborative research on the elements. In my opinion, the collective nature of this discovery only gives us more reason to celebrate it.
ABSTRACT. As a philosopher, I am especially interested in the role of values in science. By values, I broadly mean features that people find good, useful, or desirable. Although values are more familiar from political and moral discussions, I suggest that they are very much present in science as well. For example, scientists might find something valuable in the design of an experiment or in the goals of research, or they might value some features of theories so much that these features act as reasons for choosing between competing theories. For someone interested in values, the priority dispute over the discovery of the periodic system provides especially fertile ground. If we compare the tables of J.A.R. Newlands, Julius Lothar Meyer, and Dmitrii Mendeleev, they appear simultaneously similar and different. In my research, I argue that at least some of these differences are explained by the chemists emphasising different values during the development of their periodic systems. This raises a number of questions: did the values influence the differing reception of the systems? Did they guide the further uses of the systems? In short, investigating the periodic systems through the lens of values allows us a telling snapshot of the many ways in which values may guide science.
Parity claims in biology and a dilemma for informational parity
ABSTRACT. The causal parity thesis states that there is a “fundamental symmetry” between genetic and non-genetic factors: “There is much to be said about the different roles of particular resources. But there is nothing that divides the resources into two fundamental kinds. The role of genes is no more unique than the role of many other factors.” (Griffiths & Gray 1994: 277). Arguments for causal parity are typically intertwined with considerations regarding the received view of genes according to which they are information carriers, and are thus connected to what I will call “informational parity”: “in any sense in which genes carry developmental information, nongenetic developmental factors carry information too.” (Sterelny & Griffiths 1999: 101). The idea of an informational parity between genes and non-genetic factors is brought up in the literature in two related but distinct contexts:
1) In addressing the informational talk in biology: as a consequence of appealing to information theory, as opposed to some other approach to information.
2) In supporting the causal parity thesis: as a case, specification, or instantiation of the causal parity thesis.
While the current literature revolves around defending and refuting the causal parity thesis (e.g. Weber forthcoming), the aim of this paper is to examine informational parity in particular. I scrutinize the sort of reasoning leading to informational parity in both cases (1) and (2), and argue that it is grounded in a conflation between information and causation, in the sense that the two concepts are assumed to be undisputedly related. This underlying conflation has a twofold origin: (a) an implicit Millean view of causation, and (b) a misreading of information theory.
In relation to (a), I argue that if a more fine-grained causal analysis can be used to refute the causal parity thesis, then this drags down any idea of informational parity that relies on causal considerations. Regarding (b), I argue that information theory does not entail any causal notion of information per se. The upshot is that, on the one hand, no talk of informational parity as a “case” of causal parity has been sufficiently motivated to date, and, on the other, parity following from a non-causal reading of information theory is uninteresting.
Informational parity, then, faces a serious dilemma:
• Providing a satisfying defense of informational parity on causal grounds (i.e., based on the endorsement of the causal parity thesis) requires first determining what causal contribution is shared by DNA and other factors (where causation is understood in more fine-grained terms), and then pinning down in what sense this has anything to do with information (and what is meant by this polysemantic term). There is no obvious way to achieve this.
• Arguing for informational parity on non-causal grounds, in turn, is problematic for two reasons. First, because some of the available non-information-theoretic accounts of information in biology do rely on causal considerations, which brings us back to the problem above. Secondly (and more importantly), because many of them are precisely shaped to reject parity claims, which not only defies informational parity directly, but also opens up the issue of ad hoc conceptualizations.
Griffiths, P. & Gray, R. (1994). “Developmental Systems and Evolutionary Explanation”. Journal of Philosophy, 91, pp. 277-304.
Sterelny, K. & Griffiths, P. (1999). Sex and death. An introduction to philosophy of biology. Chicago: The University of Chicago Press.
Weber, M. (forthcoming). “Causal Selection versus Causal Parity in Biology: Relevant Counterfactuals and Biologically Normal Interventions”. In: K. Waters, M. Travisano and J. Woodward (eds.), Philosophical Perspectives on Causal Reasoning in Biology. Minneapolis: University of Minnesota Press.
Feasible reducibility and interpretability of truth theories
ABSTRACT. Let P[B] be an axiomatic theory of truth over a base theory B; e.g., P can be some flavor of typed or untyped truth theory, and B can be PA (Peano arithmetic) or ZF (the Zermelo-Fraenkel theory of sets). P[B] is said to be *feasibly reducible* to B if there is a polynomial-time computable function f such that: if p is a P[B]-proof of a sentence b formulated in the language of B, then f(p) is a B-proof of b. This notion is known to be stronger than the conservativity of P[B] over B for finitely axiomatizable P[B]. A closely related notion is that of *feasible interpretability* of P[B] in B, which corresponds to the existence of a polynomial-time deterministic Turing machine that verifies (via an appropriate translation) that P[B] is interpretable in B.
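In symbols (our paraphrase of the definition just given, with Proof_{P[B]} and Proof_B the respective proof predicates), feasible reducibility asks for a polynomial-time computable f such that
\[
\mathrm{Proof}_{P[B]}(p, b) \;\wedge\; b \in \mathcal{L}_B \;\longrightarrow\; \mathrm{Proof}_{B}(f(p), b)
\]
holds for all proofs p and all sentences b.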
In this talk I will overview recent joint work with Mateusz Łełyk and Bartosz Wcisło on feasible reducibility of P[PA] to PA, and feasible interpretability of P[PA] in PA, for three canonical truth theories: P = CT^- (Compositional truth without extra induction), FS^- (Friedman-Sheard truth without extra induction), and KF^- (Kripke-Feferman truth without extra induction).
Our results show that not only the "logical distance'', but also the "computational distance'' between PA and the aforementioned canonical truth theories is as short as one could hope for, thereby demonstrating that, despite their considerable expressive power, these truth theories turn out to be rather light additions to PA.
17:15
Bartosz Wcisło (Institute of Mathematics, University of Warsaw, Poland)
Models of Truth Theories
ABSTRACT. The commitments of a theory can be roughly understood as what the theory ''really says.'' In our research, we are specifically interested in truth theories. There are several ways found in literature to define the situation in which one theory of truth Th_1 ''says more'' than another theory Th_2. In particular, requiring that Th_2 is simply contained in Th_1 (i.e., proves no more theorems than Th_1) may often be viewed as an overly crude measure.
In our talk, we will discuss the notion of the semantic strength of truth theories. Let Th_1, Th_2 be two theories of truth over Peano Arithmetic PA (i.e., theories obtained by adding to PA a unary predicate T(x) which is assumed to satisfy some axioms that one would expect to hold for the truth predicate, like compositionality). We say that Th_1 is semantically stronger than Th_2 if for every model M of PA, if there exists an expansion (M,T) satisfying Th_1, then there also exists an expansion (M,T') satisfying Th_2. In other words, if in a model M we can distinguish a set of sentences which satisfies the axioms of Th_1, then there is a set of sentences (possibly a different one) satisfying the axioms of Th_2.
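In symbols (our paraphrase of the definition just given), Th_1 is semantically stronger than Th_2 when
\[
\forall M \models \mathrm{PA}\ \bigl(\exists T\, (M, T) \models \mathrm{Th}_1 \;\longrightarrow\; \exists T'\, (M, T') \models \mathrm{Th}_2\bigr).
\]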
There are some obvious links between semantic strength and other methods of comparing truth theories. First of all, if in Th_1 we can define some other truth predicate satisfying the axioms of Th_2, leaving the definitions of arithmetical operations unchanged (in which case we say that Th_2 is relatively definable in Th_1), then Th_1 is semantically stronger than Th_2. Furthermore, if Th_1 is semantically stronger than Th_2, then every consequence of Th_2 in the language of arithmetic (or, in general, in the language of the base theory) can also be proved in Th_1 (so Th_1 is stronger than Th_2 in the sense of arithmetical consequences).
In our research, we tried to fine-tune the comparison of strength between theories of the same arithmetical consequences by investigating their models. It turns out that all known natural theories whose semantic strength we managed to understand are actually comparable with one another, even if it is not initially clear from the description of the theory.
Probably the most striking example of this comparability phenomenon is that CT^-, the theory of purely compositional truth without induction for the formulae containing the truth predicate is semantically stronger than UTB, a theory of Tarski biconditionals for arithmetical formulae with full induction for the truth predicate.
Theories of truth which are semantically conservative are another aspect of philosophical interest. A theory Th is semantically conservative over its base theory (in our case, PA) if for every model M of PA we can find a subset T of its domain such that (M,T) satisfies Th. This means that the theory Th does not put any restrictions whatsoever on the shape and structural properties of the underlying model of the base theory: it does not ''say'' more than the base theory in the sense of semantic strength. We will discuss some examples of well-behaved theories which are semantically conservative over PA, as well as some striking classical non-examples, including CT^-.
Some semantic properties of typed axiomatic truth theories built over theory of sets
ABSTRACT. The study of axiomatic truth theories over set-theoretic base theories was pioneered by [Krajewski(1976)], who proved the conservativity of CT^-[ZF] over ZF. Many years later, his conservativity result was independently refined by [Enayat and Visser(2018)] and [Fujimoto(2012)] so as to yield the conservativity of the much stronger theory $CT^-[ZF]+Sep^+$ over ZF, where Sep^+ is the natural extension of the separation scheme to formulae with the truth predicate.
In our talk, we will focus on the semantic (model-theoretic) properties of theories of the truth predicate with set theory ZF or ZFC taken as the base theory.
The model-theoretic study of truth theories was initiated in the classical papers [Krajewski(1976)] (over arbitrary base theories that include PA and ZF) and [Kotlarski et al.(1981)] (over PA as the base theory). Soon thereafter, in a remarkable paper [Lachlan(1981)], it was shown that if a nonstandard model M of PA is expandable to a model of CT^-[PA], then M is recursively saturated. It can be proved that the same result holds for $\omega$-nonstandard models of ZF, so Lachlan's theorem implies that not every model of PA (resp. ZF) is expandable to a model of the compositional truth theory $CT^-[PA]$ (resp. $CT^-[ZF]$). Together, the above imply that if $M \models ZFC$ is a countable $\omega$-nonstandard model, then the following are equivalent:
\begin{enumerate}
\item $M$ admits an expansion to a model $(M, Tr) \models CT^{-}[ZF]$.
\item $M$ is recursively saturated.
\end{enumerate}
This characterisation, taken together with the remarkable construction of [Gitman and Hamkins(2010)], shows that for countable $\omega$-nonstandard models of set theory, admitting a compositional truth predicate is equivalent to belonging to the so-called natural model of the Multiverse Axioms. During the talk we intend to present the abovementioned results in more detail and explore their philosophical dimensions.
Last but not least, we will also show some properties, relevant to the topic of the symposium, of models of the so-called disquotational theories of truth (such as the locally disquotational theory TB) over set theories, which have some philosophical implications for the debate on deflationism with respect to the concept of mathematical truth.
References
[Enayat and Visser(2018)] Ali Enayat and Albert Visser. Full satisfaction classes in a general setting. Unpublished manuscript, 2018.
[Fujimoto(2012)] Kentaro Fujimoto. Classes and truths in set theory. Annals of Pure and Applied Logic, 163:1484-1523, 2012.
[Gitman and Hamkins(2010)] Victoria Gitman and Joel David Hamkins. A natural model of the multiverse axioms. Notre Dame Journal of Formal Logic, 51, 2010.
[Kotlarski et al.(1981)] Henryk Kotlarski, Stanislaw Krajewski, and Alistair Lachlan. Construction of satisfaction classes for nonstandard models. Canadian Mathematical Bulletin, 24(3):283-293, 1981.
[Krajewski(1976)] Stanislaw Krajewski. Non-standard satisfaction classes. In W. Marek et al., editor, Set Theory and Hierarchy Theory. A Memorial Tribute to Andrzej Mostowski, pages 121-144. Springer, 1976.
[Lachlan(1981)] Alistair H. Lachlan. Full satisfaction classes and recursive saturation. Canadian Mathematical Bulletin, 24:295-297, 1981.
ABSTRACT. If Oswald had not killed Kennedy, someone else would have. What do we learn when we learn such a subjunctive conditional? To evaluate the conditional, Stalnaker (1968) proposes, we move to the most similar possible world, relative to our actual beliefs, in which Oswald did not kill Kennedy, and check whether someone else killed Kennedy there. Based on Stalnaker's semantics, Günther (2018) has developed a method of how a rational agent learns indicative conditional information. Roughly, an agent learns 'if A then B' by imaging on the proposition expressed by the corresponding Stalnaker conditional. He goes on to generalize Lewis's (1976) updating method, called imaging, to Jeffrey imaging. This makes the method applicable to the learning of uncertain conditional information. For example, the method generates the seemingly correct predictions for Van Fraassen's (1981) Judy Benjamin Problem.
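A minimal Python sketch of imaging on a toy model (the three worlds, prior probabilities and selection function below are illustrative assumptions of ours): imaging on A moves each world's probability mass to its most similar A-world as given by a selection function, and the probability of the conditional is then the imaged probability of the consequent.

# Toy worlds: "o" = Oswald killed Kennedy, "s" = someone else did, "n" = nobody did.
PRIOR = {"o": 0.9, "s": 0.06, "n": 0.04}        # assumed prior over the three worlds

A = {"s", "n"}                                  # antecedent: Oswald did not kill Kennedy
sel = {"o": "s", "s": "s", "n": "n"}            # assumed selection: closest A-world to each world

def image(p, antecedent, sel):
    """Lewis-style imaging: shift each world's mass to its selected antecedent-world."""
    imaged = {w: 0.0 for w in p}
    for w, mass in p.items():
        target = w if w in antecedent else sel[w]
        imaged[target] += mass
    return imaged

p_imaged = image(PRIOR, A, sel)
consequent = {"s"}                              # someone else killed Kennedy
print(sum(p_imaged[w] for w in consequent))     # probability of the conditional after imaging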
To the best of our knowledge, there is no theory for the learning of subjunctive conditional information. Psychologists of reasoning and philosophers alike have almost exclusively tackled the learning of indicative conditionals. (See, for example, Evans and Over (2004), Oaksford and Chater (2007) and Douven (2015).) Here, we aim to extend Günther's method to cover the learning of information as encoded by subjunctive conditionals.
At first sight, Günther's method of learning indicative conditional information seems to be applicable to the learning of subjunctive conditionals. From ''If Oswald had not killed Kennedy, someone else would have'' you learn that the most similar world in which Oswald did not kill Kennedy is a world in which someone else did. However, it is widely agreed that the meaning of this subjunctive is different from that of its corresponding indicative conditional ''If Oswald did not kill Kennedy, someone else did''. You can reject the former while accepting the latter.
The pair of Oswald-Kennedy conditionals suggests that we might learn different propositions. More specifically, the propositional content of a subjunctive might differ from that of its corresponding indicative conditional. To account for the difference, we aim to amend Stalnaker's possible-worlds semantics for conditionals. The idea is that the mood of the conditional may influence which world is judged to be the most similar antecedent world. In the case of indicative conditionals, the world we move to is just the most similar antecedent world to the actual world. In the case of subjunctive conditionals, the world we move to is the most similar antecedent world to the actual world as it was immediately before the time the antecedent refers to. The evaluation of subjunctives thus involves mental time travel, while the evaluation of indicatives does not. (The idea to fix the past up until the time to which the antecedent refers can be traced back to Lewis (1973). We will address the accounts of antecedent reference due to Bennett (1974), Davis (1979) and, more recently, Khoo (2017).) As a consequence, when you move to the most similar antecedent world in the subjunctive case, you are not restricted by the facts of the actual world between the reference time of the antecedent and the now. We show how this simple amendment to Stalnaker's semantics allows us to extend Günther's method to the learning of subjunctive conditionals.
Bennett, J. (1974). Counterfactuals and Possible Worlds. Canadian Journal of Philosophy 4(December): 381–402.
Davis, W. A. (1979). Indicative and Subjunctive Conditionals. Philosophical Review 88(4): 544–564.
Douven, I. (2015). The Epistemology of Indicative Conditionals: Formal and Empirical Approaches. Cambridge University Press.
Evans, J. S. B. T. and Over, D. E. (2004). If. Oxford: Oxford University Press.
Günther, M. (2018). Learning Conditional Information by Jeffrey Imaging on Stalnaker Conditionals. Journal of Philosophical Logic 47(5): 851–876.
Khoo, J. (2017). Backtracking Counterfactuals Revisited. Mind 126(503): 841–910.
Lewis, D. (1973). Causation. Journal of Philosophy 70(17): 556–567.
Lewis, D. (1976). Probabilities of Conditionals and Conditional Probabilities. The Philosophical Review 85(3): 297–315.
Oaksford, M. and Chater, N. (2007). Bayesian Rationality: The Probabilistic Approach to Human Reasoning. Oxford Cognitive Science Series, Oxford, New York: Oxford University Press.
Stalnaker, R. (1968). A Theory of Conditionals. In: Studies in Logical Theory (American Philosophical Quarterly Monograph Series), edited by N. Rescher, no. 2, Oxford: Blackwell, pp. 98–112.
Van Fraassen, B. (1981). A Problem for Relative Information Minimizers in Probability Kinematics. The British Journal for the Philosophy of Science 32(4): 375–379.
17:15
Sergey Ospichev (Sobolev Institute of Mathematics, Novosibirsk State University, Russia) Denis Ponomaryov (A.P. Ershov Institute of Informatics Systems, Sobolev Institute of Mathematics, Novosibirsk State University, Russia)
On the complexity of formulas in semantic programming
ABSTRACT. We study the algorithmic complexity of hereditarily finite list extensions of structures \cite{bib0}. The generalized computability theory based on $\Sigma$-definability, developed by Yuri Ershov \cite{Ershov1996} and Jon Barwise \cite{Barwise1975}, considers hereditarily finite extensions consisting of hereditarily finite sets. In the papers by Yuri Ershov, Sergei Goncharov, and Dmitry Sviridenko \cite{bib1,bib4} a theory of hereditarily finite extensions has been developed which rests on the concept of Semantic Programming. In the paradigm of Semantic Programming, a program is specified by a $\Sigma$-formula in a suitable superstructure of finite lists. Two different types of implementation of logic programs on the basis of $\Sigma$-definability have been considered \cite{bib7}. The first is based on deciding the truth of the $\Sigma$-formulas corresponding to the program in the constructed model. The second is based on an axiomatic definition of the theory of the list superstructure. Both approaches raise the natural question of how fast one can compute a program represented by $\Sigma$-formulas. In the recent papers \cite{bib7,bib8} Sergey Goncharov and Dmitry Sviridenko constructed a conservative enrichment of the language of bounded quantifiers by conditional and recursive terms and put forward the hypothesis that, if the base model $\mathcal{M}$ is polynomially computable, then deciding the truth of a given $\Delta_0$-formula of this enriched language in a hereditarily finite list extension of $\mathcal{M}$ has polynomial complexity. Here we confirm this hypothesis and consider the complexity of the problem under a number of natural restrictions on $\Delta_0$-formulas.
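As a purely illustrative sketch of the underlying truth-evaluation task (not the construction from the cited papers; the formula encoding and function names below are hypothetical), deciding a bounded-quantifier formula over concrete finite lists amounts to a simple recursion whose cost is governed by the lengths of the lists bounding the quantifiers:
\begin{verbatim}
# Naive truth evaluation of bounded-quantifier formulas over finite lists.
# Hypothetical formula encoding: nested tuples such as
#   ('in', t1, t2), ('eq', t1, t2), ('and', p, q), ('or', p, q),
#   ('not', p), ('all', x, t, p), ('ex', x, t, p).
# Terms are variable names (strings) or Python lists of terms.

def ev_term(t, env):
    # A variable is looked up; a list term is evaluated elementwise.
    if isinstance(t, str):
        return env[t]
    return [ev_term(s, env) for s in t]

def ev(phi, env):
    op = phi[0]
    if op == 'in':                    # membership in a finite list
        return ev_term(phi[1], env) in ev_term(phi[2], env)
    if op == 'eq':
        return ev_term(phi[1], env) == ev_term(phi[2], env)
    if op == 'and':
        return ev(phi[1], env) and ev(phi[2], env)
    if op == 'or':
        return ev(phi[1], env) or ev(phi[2], env)
    if op == 'not':
        return not ev(phi[1], env)
    if op in ('all', 'ex'):           # bounded quantifiers over a list term
        x, bound, body = phi[1], phi[2], phi[3]
        vals = (ev(body, {**env, x: e}) for e in ev_term(bound, env))
        return all(vals) if op == 'all' else any(vals)
    raise ValueError('unknown connective: %s' % op)

# Example: every element of L occurs in M.
print(ev(('all', 'x', 'L', ('in', 'x', 'M')),
         {'L': ['a', 'b'], 'M': ['a', 'b', 'c']}))   # prints True
\end{verbatim}
Each quantifier multiplies the work by the length of its bounding list, so nested quantification in this naive recursion already hints at why the precise complexity analysis, and the restrictions on $\Delta_0$-formulas, matter.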
\begin{thebibliography}{99}
\bibitem{bib0}
\textit{Ospichev S. and Ponomarev D.}, On the complexity of formulas in semantic programming, Sib. Electr. Math. Reports, vol. 15, 987--995 (2018)
\bibitem{Ershov1996}
\textit{Ershov Yu. L.}, Definability and computability. Consultants Bureau, New York (1996)
\bibitem{Barwise1975}
\textit{Barwise, J.}, Admissible sets and structures. Springer, Berlin (1975)
\bibitem{bib1}
\textit{Goncharov S. S. and Sviridenko D. I.}, $\Sigma$-programming, Transl. II. Ser., Amer. Math. Soc., no. 142, 101--121 (1989).
\bibitem{bib4}
\textit{Ershov Yu. L., Goncharov S. S., and Sviridenko D. I.},
Semantic Programming, in: Information processing 86: Proc. IFIP 10th World Comput. Congress. Vol. 10, Elsevier Sci., Dublin, 1093--1100 (1986).
\bibitem{bib7}
\textit{Goncharov S. S.}, Conditional Terms in Semantic Programming, Siberian Mathematical Journal, vol. 58, no. 5, 794--800 (2017).
\bibitem{bib8}
\textit{Goncharov S. S. and Sviridenko D. I.}, Recursive Terms in Semantic Programming, Siberian Mathematical Journal, vol. 59, no. 6, 1279--1290 (2018).
\end{thebibliography}
17:45
Marie Duzi (VSB-Technical University Ostrava, Czechia)
Hyperintensions as abstract procedures
ABSTRACT. Hyperintensions are defined in my background theory, Transparent Intensional Logic (TIL), as abstract procedures that are encoded by natural-language terms. For a rigorous definition of a hyperintensional context it is crucial to distinguish two basic modes in which a given procedure can occur, namely displayed and executed. When a procedure C occurs displayed, C itself figures as an object on which other procedures operate; whereas if C occurs executed, the product of C figures as the object to operate on. Procedures are structured wholes consisting of unambiguously determined parts, which are those sub-procedures that occur in execution mode. They are not mere set-theoretic aggregates of their parts, because the constituents of a molecular procedure interact in the process of producing an object. Furthermore, this part-whole relation is a partial order. On the other hand, the mereology of abstract procedures is non-classical, because the principles of extensionality and idempotence do not hold.
A hyperintensional context is the context of a displayed procedure, and it is easy to block various invalid inferences, because different procedures can produce one and the same function-in-extension, be it a mathematical mapping or a PWS-intension. But blocking invalid inferences in hyperintensional contexts is just the starting point. The other side of the coin is the positive question of which inferences should be validated and how these valid inferences should be proved. The problem is this: a displayed procedure is a closed object that is not amenable to logical operations. To solve this technical difficulty, we have developed a substitution method that makes it possible to operate on displayed procedures within a hyperintensional context. Having defined the substitution method, we are in a position to specify beta-conversion by ‘value’. I am going to prove that, unlike beta-conversion by name, conversion by value is validly applicable, so that the redex and the contractum are logically equivalent. However, though TIL is a typed lambda-calculus, the Church-Rosser theorem is not valid for beta-reduction by name, due to hyperintensional contexts. Hence, I am going to specify a fragment of TIL for which the theorem is valid. On the other hand, the Church-Rosser theorem is valid in TIL for beta-reduction by value. Yet in this case the problem is merely postponed to the evaluation phase and concerns the choice of a proper evaluation strategy. To this end we have implemented an algorithm of context recognition that makes it possible to evaluate beta-conversion by value properly, so that the Church-Rosser theorem is valid in all kinds of context. There are still other open issues concerning the metatheory of TIL; one of them is the problem of completeness. Being a hyperintensional lambda-calculus based on the ramified theory of types, TIL is of course an incomplete system in Gödel’s sense. Yet I am going to specify a fragment of TIL that is complete in a Henkin-like way.
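Schematically, and in generic lambda-notation rather than TIL's official syntax (so only as a rough illustration of the distinction), the two conversions differ in what gets substituted:
\[
[\lambda x\, C]\,A \;\triangleright_{\text{name}}\; C[A/x],
\qquad\qquad
[\lambda x\, C]\,A \;\triangleright_{\text{value}}\; C[v/x], \text{ where } v \text{ is the product of } A.
\]
When $A$ is improper, i.e. fails to produce anything, the by-value redex is itself improper, while the by-name contractum may behave differently depending on how $x$ occurs in $C$; this is, roughly, why only conversion by value preserves logical equivalence in the presence of partiality and hyperintensional contexts.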
A theorem of ordinary mathematics equivalent to $\mathsf{ADS}$
ABSTRACT. In 1980 Ivan Rival and Bill Sands \cite{Rival} proved that for each infinite poset $P$ with finite width (i.e. such that there is a finite bound on the size of each antichain in $P$) there is an infinite chain $C$ such that each vertex of $P$ is comparable to none or to infinitely many vertices of $C$. We are interested in analysing the strength of this statement, $\mathsf{RS}$-$\mathsf{po}$, restricted to countable posets, from the point of view of reverse mathematics.
We proved that
\begin{theorem}
$\mathsf{RS}$-$\mathsf{po}$ restricted to partial orders of width three is equivalent to $\mathsf{ADS}$ over the base theory $\mathsf{RCA}_0$.\end{theorem}
We think that this result is interesting for at least two reasons:
\begin{enumerate}
\item as far as we know, $\mathsf{RS}$-$\mathsf{po}$, restricted to partial orders of width three, is the first theorem of ordinary mathematics proved to be equivalent to $\mathsf{ADS}$. In reverse mathematics $\mathsf{ADS}$ has received attention as an easy consequence of Ramsey's theorem that is nonetheless strictly weaker than Ramsey's theorem for pairs, yet neither computably true nor provable from $\mathsf{WKL}_0$. $\mathsf{ADS}$ shares this behaviour with many other statements, which are quite close to, yet not equivalent to, one another. This behaviour contrasts with that of the so-called Big Five of reverse mathematics, which are characterized by a sort of robustness and by their equivalence to numerous theorems from different areas of mathematics;
\item the original proof of $\mathsf{RS}$-$\mathsf{po}$ goes through in $\Pi_1^1$-$\mathsf{CA}_0$. In fact, the proof from $\mathsf{ADS}$ is entirely different from the original one, which on the other hand is more general, since it takes care of posets of arbitrary cardinality. Our proof requires a much more careful construction of the chains which witness that an ascending or descending chain is not a solution of $\mathsf{RS}$-$\mathsf{po}$.
\end{enumerate}
We suspect that $\mathsf{ADS}$ is also equivalent to $\mathsf{RS}$-$\mathsf{po}$ for posets of width $n$, for each $n\ge 3$. However, it is likely that the combinatorial complexity of the proof grows with $n$. On the other hand, we already know that $\mathsf{ADS}$ implies $\mathsf{RS}$-$\mathsf{po}$ for posets of width two, but we do not have any reversal to $\mathsf{ADS}$ in this case. Thus $\mathsf{RS}$-$\mathsf{po}$ for posets of width two may be a weaker principle.
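For reference, the principle $\mathsf{ADS}$ appearing above is the standard Ascending/Descending Sequence principle of reverse mathematics (this formulation is standard and not specific to the present abstract):
\[
\mathsf{ADS}\colon\ \text{every infinite linear order contains an infinite ascending or an infinite descending sequence.}
\]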
ABSTRACT. The method of possible-translation semantics was introduced by Walter Carnielli in [3]. The core idea of this method is to provide a logic with a multivalued semantics. It was applied to a variant of the Cn calculi by the same author in [2] and later on to Cn itself by João Marcos in [6].
One consequence of the application of this method is the recovery of truth-functionality for logics whose matrices are non-finitely determinate. This is particularly important for the calculus C1. This property has been useful for the development of a tableaux system for C1 that we have called Ms.
Our aim in this talk is to explore a possible non-trivial extension of our system to the hierarchy of calculi Cn. We will compare this system with those offered in [1], [4] and [5]. The comparison aims at showing the advantages of this system and at making the case for it as a viable alternative.
[1] Buchsbaum, A. and Pequeño, T. “A reasoning method for a paraconsistent logic”, Studia Logica, 1993, v. 52, p. 281-289.
[2] Carnielli, W. “Possible translation semantics for paraconsistent logics”. Frontiers in paraconsistent logic: proceedings, 1999, London, King’s College Publications, p. 116-139.
[3] Carnielli, W. “Many-valued logics and plausible reasoning”. In Proceedings of the International Symposium on Multiple-Valued Logic, 1990, Charlotte, North Carolina, USA, pp. 226-235.
[4] D’Ottaviano, I.M.L., De Castro, M.A. “Analytical tableaux for da Costa’s hierarchy of paraconsistent logics Cn”, Journal of Applied Non-Classical Logics, 2005, Paris, v.15, n.1, p. 69-103.
[5] Marconi, D. “A decision method for the Calculus C1”, in Arruda, A.I. Costa N.C.A. da, Sette, A. (Eds.), 1980, Proceedings of the 3rd Brazilian Conference on Mathematical Logic, Sao Paulo, pp. 211-223.
[6] Marcos, J. Semânticas de Traduções Possíveis (master thesis). São Paulo, UNICAMP, 1999.
ABSTRACT. Dstit (deliberately seeing to it that) is an agentive modality usually defined semantically on a tree-like structure of successive moments. Any maximal sequence of moments forms a history; individual moments are parts of different histories, but all histories share some moment. The tree has forward branching time (BT), corresponding to indeterminacy of the future, but no backward branching, corresponding to uniqueness of the past, and is enriched by agent choice (AC). Choice is a function mapping an agent/moment pair to a partition of all histories passing through that moment (since an agent’s choice determines what history comes about only to an extent).
In such (BT+AC) frames, formulas are evaluated at moments in histories. Specifically, ‘agent a deliberately sees to it that A’ holds at a moment m of a history h iff (i) A holds in all histories choice-equivalent to h for the agent a, but (ii) A does not hold in at least one history that the moment m is a part of. In simple terms, the agent sees to it that A if their choice brings about only histories where A holds, but it could nonetheless have been otherwise (i.e. an agent cannot bring about something that would have happened anyway).
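Schematically, in the standard notation of stit semantics (used here only to spell out the clause just described):
\[
m/h \models [a\ \mathsf{dstit}\colon A]
\;\iff\;
\forall h' \in \mathrm{Choice}^{m}_{a}(h)\colon\, m/h' \models A
\quad\text{and}\quad
\exists h'' \text{ with } m \in h''\colon\, m/h'' \not\models A,
\]
where $\mathrm{Choice}^{m}_{a}(h)$ is the cell of agent $a$'s choice partition at $m$ that contains $h$.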
Stit modalities, including Dstit, received an extensive axiomatic treatment in [1], and the proof-theoretic approaches to it so far have been done via labelled tableaux [5, 6].
In contrast, in this paper we investigate Dstit modality by means of a sequent calculus. Following [2, 3], we employ the axioms-as-rules approach, and develop a G3-style labelled sequent calculus. This is shown to possess all the desired structural properties of a good proof system, including being contraction- and cut-free.
Moreover, we demonstrate multiple applications of the system. We prove the impossibility of delegation of tasks among independent agents, the interdefinability of Dstit with an agent-relative modality, cstit, and an agent-independent modality, settled true, as well as the treatment of refraining found in [1] and [4]. Finally, we establish the metatheoretical properties of our system, namely soundness, completeness and decidability via bounded proof search.
References
[1] N.D. Belnap, M. Perloff, M. Xu, Facing the Future: Agents and Choices in Our Indeterminist World, Oxford: Oxford University Press, 2001.
[2] S. Negri, J. von Plato, Structural Proof Theory, Cambridge: Cambridge University Press, 2001.
[3] S. Negri, J. von Plato, Proof Analysis, Cambridge: Cambridge University Press, 2011.
[4] G.H. von Wright, Norm and Action: A Logical Enquiry, Routledge & Kegan Paul, 1963.
[5] H. Wansing, Tableaux for multi-agent deliberative-stit logic, in G. Governatori, I.M. Hodkinson, Y. Venema (eds.), Advances in Modal Logic 6, pp. 503-520, 2006.
[6] G.K. Olkhovikov, H. Wansing, An Axiomatic System and a Tableau Calculus for STIT Imagination Logic, Journal of Philosophical Logic, 47(2), pp. 259-279, 2018.
17:15
Mahfuz Rahman Ansari (Indian Institute of Technology Kanpur, India) Avr Sarma (Indian Institute of Technology Kanpur, India)
ABSTRACT. A rational agent creates counterfactual alternatives when reasoning about actions in terms of “if...then” and “what if”, and constantly assesses how past actions could have been different. For instance, we create counterfactual alternatives that change an undesirable action into a desirable one: if she had put up her usual effort, she would have done better in the exam. The capacity to extract causal knowledge from the environment allows us to predict future events and to use those predictions to decide on a future course of action. Counterfactual conditionals are a special kind of subjunctive conditional that enables us to explore other alternatives, and their antecedent is typically considered to be false. Since counterfactuals are one of the best means of talking about unrealized possibilities, they figure prominently in the understanding of causation, laws, planning, and reasoning about action. The analysis of various kinds of counterfactuals and their role in commonsense reasoning, and mainly the semantics of counterfactuals, has attracted the attention of philosophers since antiquity. In this paper, we restrict ourselves to counterfactuals involving action. We argue that the prominent approach to the semantics of counterfactuals due to David Lewis has been challenged on the basis of unlikely, or impossible, events. The present study is an attempt to link the metaphysical conception of possible-world semantics with the role of action in the evaluation of counterfactuals. We present an extended version of Lewis-Stalnaker semantics that deals with action counterfactuals.
References:
1. Ginsberg, M. L. (1986). "Counterfactuals". Artificial Intelligence, 30: 35–79.
2. Lent, J., & Thomason, R. H. (2015). Action models for conditionals. Journal of
Logic, Language, and Information, 24(2), 211–231.
3. Lewis, D. (1973) Counterfactuals, Cambridge, MA: Harvard University Press.
Reissued London: Blackwell, 2001.
4. Nute, Donald. “Counterfactuals and the Similarity of Worlds.” The Journal of Philosophy, vol. 72, no. 21, 1975, pp. 773–778.
5. Pearl, J. (1996), “Causation, Action and Counterfactuals,” in Proceedings of
the Sixth Conference on Theoretical Aspects of Rationality and Knowledge, ed.
Y. Shoham, San Francisco, CA: Morgan Kaufman, pp. 57–73.
6. Stalnaker, R., 1968. ‘A Theory of Conditionals’ in Studies in Logical Theory,
American Philosophical Quarterly Monograph Series, 2. Oxford: Blackwell, pp.
98–112.
7. Thomason, Richmond, “Conditionals and Action Logics”, in AAAI 2007 Spring Symposium on Commonsense Reasoning, Eyal Amir, Vladimir Lifschitz and Rob Miller, eds., Menlo Park, California: AAAI Press, 156–161.
8. Zhang, J. (2013). A Lewisian logic of causal counterfactuals. Minds and
Machines, 23, 77–93.
17:45
Ionel Narita (West University of Timisoara, Romania)
Logic of Scales
ABSTRACT. We mean by “scale” a set of predicates such that:
1) All predicates of the same scale are pairwise contrary.
2) If g does not belong to the scale S, then there is an f in S such that f and g are compatible.
3) The predicates of a scale S are complementary, namely, whatever object a may be, there is an element of S, say f, such that fa holds.
Scales, as systems of predicates, exist. For any predicate there is a negative predicate: to the predicate f corresponds the predicate non-f, or f*. The set {f, f*} is a scale, since its elements are contrary and complementary predicates; hence conditions 1-3 are fulfilled.
Let F = {f1, f2, …, fn} be a set of contrary predicates. In this case, the set S = {f1, f2, …, fn, O}, where O = (f1 v f2 v … v fn)*, is a scale. Indeed, it is easy to prove that O is contrary to every predicate fi. Moreover, if g is contrary to every predicate fi, then g is compatible with O. On the other side, the elements of S are complementary: any object either satisfies some predicate from F or it does not, and in the latter case it satisfies the predicate O; therefore any object satisfies a predicate from S.
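A hypothetical illustration of this construction (ours, not the author's): take F = {red, green, blue}, a set of pairwise contrary colour predicates, and let O = (red v green v blue)*. Then S = {red, green, blue, O} is a scale: ignoring multicoloured objects, every object satisfies exactly one member of S, and O collects everything that is not red, green or blue.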
We call the predicate O the origin of the scale S, and the predicate O*, namely the predicate F = f1 v f2 v … v fn, the genus predicate of the scale S. We can easily notice that the set which includes only the predicates O and F is itself a scale. In this way, every scale can be represented using its origin and its genus. The genus and the origin of a scale are contradictory, namely F = O*.
If {F, O} is a scale and F = (f1 v f2 v … v fn), then for every predicate fi it holds that (x)(fix → Fx). Therefore, there must be a set of operators, xi, such that fi = xiF. We will call these operators numbers. It follows that a predicate can be analyzed using a genus and a number.
At a given moment, an object satisfies a single predicate from a scale. If t1 and t2 are two different moments and f1 and f2 are predicates of the same scale S, then we may have: E = (f1, t1)a & (f2, t2)a.
We introduce the following notational convention: E = (f1, f2)(t1, t2)a. The ordered pair (f1, f2) represents a change relative to the scale S. It follows that the elements of the Cartesian square S x S = S2 are all the possible changes definable on the elements of the scale S. On the other hand, the function h: S → S, which assigns to each element of S exactly one element of S, is a transformation inside the scale S. We may now introduce new kinds of propositions, such as propositions of change and propositions of transformation, which have a specific logic.
What is it Like to Be First Order? Lessons from Compositionality, Teams and Games
ABSTRACT. Is standard First Order Logic (FOL) with the usual Tarskian semantics the strongest possible finitary logic that permits quantification only over individuals? As pointed out by Hintikka and Sandu, building on Henkin's work on Branching Quantifiers, the answer is arguably negative. Indeed, it is possible to add to FOL "slashed quantifiers" $(\exists y / x)$, meaning "there is some $y$, chosen independently from $x$". Such quantifiers can be found both in mathematical practice and in ordinary language; and the formalism obtained by adding them to FOL, called Independence-Friendly Logic (IF-Logic for short), is more expressive than FOL and has properties that are different from those of FOL but are nonetheless very interesting (for example, the law of the excluded middle fails in IF-Logic; but on the other hand, IF-Logic can define its own truth predicate).
The original semantics for IF-Logic is a variant of the Game-Theoretic Semantics (GTS) of FOL; later, Hodges showed how Tarskian semantics may also be adapted to it, essentially by allowing formulas to be satisfied or not satisfied by sets of assignments rather than by single assignments. This transition from single assignments to sets of assignments cannot, however, be avoided without losing compositionality. This result thus exposes the higher order quantification hidden in GTS's truth definition: indeed, even though the semantic games of the GTS for IF Logic or for FOL involve merely agents picking elements from the model, truth in GTS (for both IF Logic and FOL) is defined in terms of the existence of a winning strategy. And a strategy, in general, is not a first order individual.
Later, Väänänen studied in detail the properties of Hodges' semantics, observing in particular that slashed quantifiers can be replaced with "dependence atoms" whose semantics corresponds precisely to database-theoretic functional dependencies. This sparked a programme of exploration of the logics obtained by adding other types of dependencies to Hodges' semantics (also called "Team Semantics") - or, equivalently, by adding the corresponding constraints over strategies to the GTS of FOL. Aside from being of considerable theoretical interest, the resulting formalisms have seen applications to social choice theory, database theory, doxastic dynamics, physics, and more recently even to the study of causality.
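As a rough illustration (a textbook example, not specific to this talk), the Henkin-style independence expressed by slashed quantifiers corresponds to a functional dependence atom in team semantics:
\[
\forall x\,\exists y\,\forall z\,(\exists w/\{x,y\})\,\varphi(x,y,z,w)
\quad\text{corresponds to}\quad
\forall x\,\exists y\,\forall z\,\exists w\,\bigl({=}(z,w)\wedge\varphi(x,y,z,w)\bigr),
\]
where ${=}(z,w)$ states that the value of $w$ is functionally determined by the value of $z$ alone.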
However, it is not always the case that adding (in)dependency notions/operators to FOL (or, equivalently, adding strategy constraints to its GTS) brings the expressive power of the resulting formalism beyond the mere first order: for some of these extensions of FOL, like for FOL proper, the higher order quantification implicit in the definition of truth in GTS is merely apparent. In this talk I will discuss a number of results, from recent and not-so-recent work, that attempt to characterize such extensions; or, in other words, that explore the boundary between first order and higher order logics "from below", by adding more and more possible constraints over strategies to the GTS of FOL without plunging the entire formalism into higher order territory.
17:15
Ivo Pezlar (The Czech Academy of Sciences, Institute of Philosophy, Czechia)
Analysis of Incorrect Proofs
ABSTRACT. Analysis of proofs is an open and ongoing topic in general proof theory. The focus of interest is, however, often limited to correct proofs only. In this talk, we outline a general framework that can analyze both correct and incorrect proofs.
As a motivating example, consider, e.g., the derivation of “A” from “A implies A” and “A”. Judging by this information alone, it certainly seems to be a correct derivation certified by the implication elimination rule (modus ponens). But what if this derivation was carried out not via the implication elimination rule but via the formal fallacy known as affirming the consequent, which has the general form: from “A implies B” and “B”, infer “A”? The standard approaches to proof analysis based around the Curry-Howard isomorphism (where proofs, or rather proof terms, are understood as typed lambda terms; see e.g. Sørensen & Urzyczyn 2006, Negri & von Plato 2011) are not well equipped to analyze this situation adequately. Our goal will be to sketch the basis of a logic of incorrect proofs capable of analyzing scenarios such as the one described above. More specifically, we would like to treat even incorrect proofs as first-class objects of our theory: we should be able to reason about them, talk about them, use them as arguments for functions, etc. Rather than trying to modify the current approaches to include incorrect proofs, we will develop a framework that expects the possibility of incorrect proofs from the beginning. Our background formalism of choice will be Transparent Intensional Logic (TIL; Tichý 1988, Duží et al. 2010, Raclavský et al. 2015). We choose it because it is a highly expressive system, built from the ground up around partial functions and a neo-Fregean procedural semantics, both of which will be instrumental for our analysis of incorrect proofs.
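Spelled out schematically (in plain rule format, not in TIL notation): modus ponens infers “B” from “A implies B” and “A”, while affirming the consequent infers “A” from “A implies B” and “B”. With B = A, both patterns share the premises “A implies A” and “A” and the conclusion “A”, so the two derivations can only be told apart at the level of the proof procedure that was actually executed.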
In this talk, we will: i) introduce and demonstrate TIL-based proof analysis (how does it compare to the current mainstream frameworks for proof analysis?), and ii) propose how to analyze incorrect proofs (what is the semantics of incorrect proofs?).
References:
Duží, Marie, Jespersen, Bjørn, Materna, Pavel. Procedural Semantics for Hyperintensional Logic: Foundations and Applications of Transparent Intensional Logic. Dordrecht: Springer, 2010.
Negri, Sara, von Plato, Jan. Proof Analysis: A Contribution to Hilbert's Last Problem. New York: Cambridge University Press, 2011.
Raclavsky, Jiří, Kuchyňka, Petr, Pezlar, Ivo. Transparent Intensional Logic as Characteristica Universalis and Calculus Ratiocinator [in Czech]. Brno: Masaryk University Press, 2015.
Sørensen, Morten Heine, Urzyczyn, Pawel. Lectures on the Curry-Howard Isomorphism, Volume 149. New York: Elsevier Science Inc. 2006.
Tichý, Pavel. The Foundations of Frege’s Logic. Berlin: de Gruyter, 1988.
The congress dinner will be served at the Plzenska restaurant, located in the basement of the Municipal House in the historical centre of Prague. The Art Nouveau Plzenska restaurant mingles traditional Czech cuisine with a unique interior from the early twentieth century (the restaurant opened in 1912). The interior was decorated by the best Czech painters and artists of the time.
The price includes a three-course menu and accompanying wine, beer and soft drinks.