
09:00-10:00 Session 7: Plenary talk
Location: Hörsal 2 (A2)
Surreal differential calculus

ABSTRACT. I will report on joint surreal work with Vincenzo Mantova. We recall that Conway's ordered field of surreal numbers contains both the real numbers and the ordinal numbers. The surreal sum and product of two ordinals coincide with the Hessenberg sum and product, and Cantor's normal form of ordinals has a natural extension to the surreals. In [1] we proved that there is a meaningful way to take both the derivative and the integral (anti-derivative) of a surreal number, hence in particular of an ordinal number. The derivative of the ordinal number omega is 1, the derivative of a real number is zero, and the derivative of the sum and product of two surreal numbers obeys the expected rules. It is more difficult to understand what the derivative of an ordinal power of omega is, for instance the first epsilon-number, but this can be done in a way that reflects the formal properties of the derivation on a Hardy field (germs of non-oscillating real functions). In [2] we showed that many surreal numbers can indeed be interpreted as germs of differentiable functions on the surreals themselves, so that the derivative acquires the usual analytic meaning as a limit. It is still open whether we can interpret all the surreals as differentiable functions, possibly changing the definition of the derivative.
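The basic rules stated above can be collected schematically (this merely restates the properties listed in the abstract; the notation $\partial$ for the surreal derivation is ours):

```latex
\[
\partial\,\omega = 1, \qquad
\partial\, r = 0 \quad (r \in \mathbb{R}), \qquad
\partial(x+y) = \partial x + \partial y, \qquad
\partial(xy) = x\,\partial y + y\,\partial x .
\]
```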

[1] Alessandro Berarducci, Vincenzo Mantova, Surreal numbers, derivations and transseries, arXiv:1503.00315, 47 pp. To appear in the Journal of the European Mathematical Society.

[2] --, Transseries as germs of surreal functions, arXiv:1703.01995, 44 pp.

10:30-11:30 Session 8: Plenary talk
Location: Hörsal 2 (A2)
Stone duality and applications in computer science (2/3)
SPEAKER: Mai Gehrke

ABSTRACT. This will be a three-part tutorial: (1) a general introduction to Stone duality; (2) applications in semantics; (3) applications in formal languages: automata and beyond.

11:30-12:30 Session 9: Plenary talk
Location: Hörsal 2 (A2)
Computability theory and asymptotic density

ABSTRACT. The notion of generic-case complexity was introduced by Kapovich, Myasnikov, Schupp, and Shpilrain to study problems with high worst-case complexity that are nevertheless easy to solve in most instances. They also introduced the notion of generic computability, which captures the idea of having a partial algorithm that halts for almost all inputs, and correctly computes a decision problem whenever it halts. Jockusch and Schupp began the general computability-theoretic investigation of generic computability and also defined the notion of coarse computability, which captures the idea of having a total algorithm that might make mistakes, but correctly decides the given problem for almost all inputs (although this notion had been studied earlier in Terwijn's dissertation). Two related notions, which allow for both failures to answer and mistakes, have been studied by Astor, Hirschfeldt, and Jockusch (although one of them had been considered in the 1970's by Meyer and by Lynch). All of these notions lead to notions of reducibility and associated degree structures. I will discuss recent and ongoing work in the study of these reducibilities.
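The notions above all rest on asymptotic density: a set has density d if the fraction of its elements below n tends to d, and "almost all inputs" means a set of density 1. A minimal illustrative sketch (ours, not from the talk; the function name is our own):

```python
def partial_density(A, n):
    """Fraction of numbers below n that lie in the set A,
    where A is given as a membership predicate."""
    return sum(1 for k in range(n) if A(k)) / n

# The even numbers have density 1/2.  A generic algorithm may diverge
# on a density-0 set of inputs and still count as generically correct.
evens = lambda k: k % 2 == 0
print(partial_density(evens, 10**6))  # 0.5
```

The (upper/lower) density of A is then the limit (limsup/liminf) of these partial densities as n grows.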

14:00-15:30 Session 10A: Computability
Location: Hörsal 11 (F11)
Generalized Finite Automata over the Real Numbers
SPEAKER: Klaus Meer

ABSTRACT. Gandhi, Khoussainov, and Liu introduced and studied a generalized model of finite automata able to work over arbitrary structures. The model mimics finite automata over finite structures, but has an additional ability to perform in a restricted way operations attached to the structure under consideration. As one relevant area of investigations for this model Gandhi et al. identified studying the new automata over uncountable structures such as the real and complex numbers.

In the talk we pick up this suggestion and consider their automata model as a finite automata variant in the BSS model of real number computation. We study structural properties as well as (un-)decidability results for several questions inspired by the classical finite automata model.
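To fix intuitions, here is a toy sketch of a machine in this spirit (our own illustration, not the formal model of Gandhi, Khoussainov, and Liu): a finite-state device over the structure (R, +, <=) that may apply the structure's operations and tests to its input in a restricted way.

```python
def run(inputs, c=0.0):
    """Toy 'automaton over (R, +, <=)': on each step it performs one
    operation of the structure (addition) and one test (a comparison),
    and accepts iff every prefix sum of the real inputs stays >= c.
    Illustrative sketch only; the actual model is richer."""
    state, total = "ok", 0.0
    for x in inputs:
        total += x          # operation attached to the structure
        if total < c:       # test attached to the structure
            state = "reject"
    return state == "ok"

print(run([1.0, -0.5, 0.25]))  # True
print(run([1.0, -2.0, 3.0]))   # False
```

Already this toy language (sequences with nonnegative prefix sums) is not recognizable by a classical finite automaton over a finite alphabet, which hints at why decidability questions change over uncountable structures.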

Reducibilities and Higman-like theorems in symbolic dynamics

ABSTRACT. Symbolic Dynamics is the study of subshifts, sets of infinite words given by local constraints. Subshifts constitute a shift-invariant version of Pi-1-0 classes of sets and are intimately linked to automata theory in dimension one and tiling theory in higher dimensions.

One of its distinguishing features is that subshifts of finite type, which are equivalent to tilings of the discrete space by Wang tiles, already exhibit a large range of uncomputable behaviours, as evidenced by Berger in the 60s and popularized by Robinson and his so-called Robinson tiling in the 70s.

While these results could be deemed negative, a recent approach due to Hochman shows that various quantities and invariants defined for subshifts can be completely understood and characterized using concepts from computability theory.

The goal of this talk is to show a striking resemblance between these recent results and the embedding theorems pioneered by Higman in the 60s for combinatorial group theory. To do this, I will present a framework in which these theorems can be written using the exact same vocabulary, and show how the easy part of the theorems follow from the exact same proof.

Applications of computability theory in topology
SPEAKER: Arno Pauly

ABSTRACT. I will discuss how computability-theoretic methods can be used to prove theorems in topology.

14:00-15:30 Session 10B: Model theory
Location: Hörsal 4 (B4)
Enriching our view of model theory of fields with operators
SPEAKER: Ivan Tomasic

ABSTRACT. A naive approach to developing the methods of homological algebra for difference and differential fields, rings and modules quickly encounters numerous obstacles, such as the failure of the hom-tensor duality.

We will describe a categorical framework that overcomes these difficulties, allowing us to transfer most classical techniques over to the difference/differential context.

We will conclude by applying these techniques to study the cohomology of difference algebraic groups and discuss potential model-theoretic consequences.

Non-amenability of automorphism groups of generic structures

ABSTRACT. A group $G$ is amenable if every $G$-flow has an invariant Borel probability measure. Well-known examples of amenable groups are finite groups, solvable groups and locally compact abelian groups. Kechris, Pestov, and Todorcevic established a very general correspondence which equates a stronger form of amenability, called extreme amenability, of the automorphism group of an ordered Fra\"iss\'e structure with the Ramsey property of its finite substructures. In the same spirit, Moore showed a correspondence between the amenability of automorphism groups of countable structures and a structural Ramsey property, which encompasses F\o lner's existing treatment. In this talk we will consider automorphism groups of certain Hrushovski generic structures. We will show that they are not amenable by exhibiting a combinatorial/geometric criterion which forbids amenability.

Model theory of strongly ergodic actions

ABSTRACT. I will discuss novel applications of continuous logic to ergodic theory, particularly to the study of rigidity phenomena associated with strongly ergodic actions of countable groups. This is joint work in progress with François Le Maître and Todor Tsankov.

14:00-15:30 Session 10C: Philosophical logic
Location: Hörsal 8 (D8)
Squeezing arguments and strong logics

ABSTRACT. G. Kreisel has suggested that squeezing arguments, originally formulated for the informal concept of first order validity, should be extendable to second order logic, although he points out obvious obstacles. We develop this idea in the light of more recent advances and delineate the difficulties across the spectrum of extensions of first order logics by generalised quantifiers and infinitary logics. In particular we argue that if the relevant informal concept is read as informal in the precise sense of being untethered to a particular semantics, then the squeezing argument goes through in the second order case. Consideration of weak forms of Kreisel's squeezing argument leads naturally to reflection principles of set theory. This is joint work with Jouko Väänänen.

How to make an infinite decision
SPEAKER: Davide Rizza

ABSTRACT. See attachment below.

On Hilbert’s Axiomatic Method

ABSTRACT. Hilbert’s methodological reflection certainly shaped a new image of the axiomatic method. However, the discussion on the nature of this method is still open. There are (1) those who have seen it as a synthetic method, i.e., a method to derive theorems from axioms already and arbitrarily established; (2) others have counter-argued in favor of its analytical nature, i.e., given a particular scientific field the method is useful to reach the conditions (axioms) for the known results of the field (theorems) and to rightly place both in a well-structured theory; (3) still others underlined the metatheoretical nature of the axiomatic reflection, i.e., the axiomatic method is the method to verify whether axioms already identified satisfy properties such as completeness, independence and consistency. Each of these views has highlighted aspects of the way Hilbert conceived and practiced the axiomatic method, so they can be harmonized into an image better suited to the function the method was called to fulfill: i.e., deepening the foundations of given scientific fields, to recall one of his well-known expressions. Considering some textual evidence from early and late writings, I shall argue that the axiomatic method is in Hilbert’s hands a very flexible tool of inquiry, and that to lead analytically to a well-structured axiomatic theory it needs to include dynamically both synthetic procedures and metatheoretical reflections. Therefore, in Hilbert’s view the expression “deepening the foundations” denotes the whole set of considerations, permitted by the axiomatic method, that allow the theoretician first to identify and then to present systems of axioms for given scientific fields.

16:00-17:00 Session 11A: Contributed talks
Location: D289
On proof complexities for some classes of tautologies in Frege systems

ABSTRACT. We present some results about Frege proof complexities. First we introduce the notion of specific tautologies and show that Frege systems have a polynomial-size proof for every tautology of size n if the proof sizes of all specific tautologies of size n are polynomially bounded. Then we show that all balanced tautologies in disjunctive normal form also have Frege proofs of polynomially bounded size. Lastly we make some remarks on the relations between the proof complexities of tautologies An and Bn and the proof complexities of tautologies of the form An*Bn, where * is ∧, ∨ or ⊃.

Genus of proofs as a measure of complexity

ABSTRACT. Carbone (2009, 2010) has proposed the genus of the logical flow graph of a proof as a new measure of its complexity. This proposal is interesting as it measures how interconnected the parts of the proof are, unlike traditional measures, which give some measure of the size of the proof (Buss, 1998, p. 13). This makes it implausible that this measure could be used to say whether or not a proof is feasible, a traditional goal of such complexity measures. Rather, Carbone (2009, p. 139) claims that her proposal may be a better measure of the difficulty of a proof than the more common measures of the number of steps or symbols used. This paper intends to assess Carbone's claim, and in particular whether the measure she proposes should be preferred to the more traditional ones: for example, whether Carbone's measure is sensitive to whether or not a symbol is taken as primitive, or whether it offers a way of distinguishing pure from impure proofs (Arana (2009) identifies these as important roles for a measure of complexity of proofs).

Propositional Dynamic Logic for Bisimilar programs with Parallel Operator and Test

ABSTRACT. This paper proposes a semantics and an axiomatization for PDL that makes logically equivalent programs also bisimilar. This allows for developing dynamic logics for reasoning about process-algebra specifications, in particular about CCS (Calculus of Communicating Systems). Since in CCS bisimulation is the main tool to establish equivalence of programs, it is very important that these two relations coincide. We propose a new Propositional Dynamic Logic with a new non-deterministic choice operator, PDL+. We prove its soundness, completeness, finite model property and the EXPTIME-completeness of its satisfiability problem.

We also add the parallel composition operator to PDL+, obtaining PPDL+, and prove its soundness and completeness. We establish that the satisfiability problem for PPDL+ is in 2-EXPTIME. Finally, we define some fragments of PPDL+ and prove their EXPTIME-completeness.

In this work we also discuss some issues concerning test and point out some directions on how it can be handled.

16:00-17:00 Session 11B: Contributed talks
Location: D299
Arithmetical Transfinite Recursion and Relatives

ABSTRACT. We work in the context of second order arithmetic and consider formal systems centered around the axiom schema (ATR) of arithmetical transfinite recursion and the fixed point schema (FP). Starting from arithmetical comprehension we introduce several schemas that turn out to be equivalent to (ATR) and (FP). More precisely, we consider transfinite recursions and fixed point principles for syntactically richer classes of formulas and a form of transfinite dependent choice. The results are obtained by adapting methods from [1].
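For orientation, the schema (ATR) can be displayed as follows; this is the standard rendering in the notation of Simpson [1], not a formula from the talk. For each arithmetical formula $\theta$:

```latex
\[
(\mathrm{ATR})\colon\quad
\forall X \,\bigl(\mathrm{WO}(X) \rightarrow \exists Y\, H_{\theta}(X, Y)\bigr),
\]
```

where $\mathrm{WO}(X)$ says that $X$ codes a well-order and $H_{\theta}(X,Y)$ says that $Y$ codes the result of iterating the arithmetical operator defined by $\theta$ along the well-order coded by $X$.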

[1] Stephen G. Simpson, Subsystems of Second Order Arithmetic, Perspectives in Logic, Cambridge University Press, 2009.

Diagrammatic Reasoning for Boolean Equations

ABSTRACT. Diagrammatic approaches to deductive and formal reasoning have seen a resurgence in recent years. We propose a diagrammatic method for deciding whether Boolean equations over set-valued variables are tautologies or not. Conventional diagrammatic approaches to the above decision problem work reasonably well when the total number of sets under consideration is rather small. However, conventional approaches become cumbersome, if not completely unusable, while dealing with a large number of sets. We devise an algorithm for the above decision problem, and demonstrate that it scales well when the number of set variables in the equations increases rapidly.
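For comparison, a brute-force (non-diagrammatic) baseline for the same decision problem: a Boolean set equation is a tautology iff, for a generic point, the corresponding propositional formula on its membership bits holds under every assignment. A hypothetical sketch (the function names are ours):

```python
from itertools import product

def is_set_tautology(lhs, rhs, nvars):
    """Decide whether lhs == rhs holds for all sets, by evaluating the
    membership of a generic point under all 2**nvars cases.  lhs and rhs
    map a tuple of booleans (membership of the point in each set
    variable) to a boolean."""
    return all(lhs(bits) == rhs(bits)
               for bits in product([False, True], repeat=nvars))

# Distributivity: A ∪ (B ∩ C) == (A ∪ B) ∩ (A ∪ C)
lhs = lambda b: b[0] or (b[1] and b[2])
rhs = lambda b: (b[0] or b[1]) and (b[0] or b[2])
print(is_set_tautology(lhs, rhs, 3))  # True
```

The 2^n blow-up of this baseline is precisely why methods, diagrammatic or otherwise, that scale with the number of set variables are of interest.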

Type Theory of Restricted Algorithms and Neural Networks

ABSTRACT. Moschovakis (1994) introduced a new approach to the mathematical concept of algorithm. Moschovakis (2006) extended the approach to a typed version of acyclic recursion, by a formal language, Language of Acyclic Recursion (LAR), equipped with a reduction calculus. LAR represents crucial semantic distinctions in formal and natural languages. We present a development of LAR to Type Theory of Restricted Algorithms (TTofRAlg), as a mathematical theory of the notion of algorithm, by adding a restrictor as an operator. The purpose is to model procedural memory and functionality of biological entities, in particular neurons and neural networks. We introduce TTofRAlg by extending LAR and its reduction calculus.

16:00-16:40 Session 11C: Contributed talks
Location: F289
Describing limits of bounded sequences of measurable functions via nonstandard analysis

ABSTRACT. In functional analysis, there are different notions of limit for a bounded sequence of $L^p$ functions. Besides the pointwise limit, which does not always exist, the behaviour of a bounded sequence of $L^p$ functions can be described in terms of its weak (weak-star if $p=1$ or $p=\infty$) limit, or by introducing a measure-valued notion of limit in the sense of Young measures. By using nonstandard analysis, we will show that for every bounded sequence $\{z_n\}_{n\in\mathbb{N}}$ of $L^p$ functions there exists a function on a hyperfinite domain that simultaneously generalizes the weak, weak-star and Young measure limits of the sequence. Let ${}^*\mathbb{R}$ be a set of hyperreals of nonstandard analysis, let $\varepsilon\in{}^*\mathbb{R}$ be a positive infinitesimal, and let $X=\{n\varepsilon : n\in{}^*\mathbb{Z}\}$. For all open $\Omega\subseteq\mathbb{R}^k$, let $G(\Omega)$ be the vector space of ${}^*\mathbb{R}$-valued functions defined on $\Omega_X={}^*\Omega\cap X^k$. We will prove that, for every bounded sequence $\{z_n\}_{n\in\mathbb{N}}$ in $L^p(\Omega)$, there exists a (non-unique) function $u\in G(\Omega)$ such that

• $u$ represents the weak (weak-star if $p=1$ or $p=\infty$) limit of the sequence $\{z_n\}_{n\in\mathbb{N}}$, in the sense that for all $g\in C_0(\Omega)$ it holds that $${}^\circ\Bigl(\varepsilon^k \sum_{x\in{}^*\Omega\cap X^k} u(x)\,{}^*g(x)\Bigr) = \lim_{n\to\infty}\int_\Omega z_n(x)\,g(x)\,dx;$$

• $u$ represents the Young measure limit of the sequence $\{z_n\}_{n\in\mathbb{N}}$, in the sense that for all $f\in C^0(\mathbb{R})$ with $\lim_{|x|\to\infty} f(x)=0$ and for all $g\in C_0(\Omega)$ it holds that $${}^\circ\Bigl(\varepsilon^k \sum_{x\in{}^*\Omega\cap X^k} f(u(x))\,{}^*g(x)\Bigr) = \lim_{n\to\infty}\int_\Omega f(z_n(x))\,g(x)\,dx.$$

Thus, functions of $G(\Omega)$ can be used to simultaneously describe very different behaviours of bounded sequences of $L^p$ functions; moreover, we believe that the study of functions in $G(\Omega)$ will allow us to define new notions of standard limits that could be successfully applied to the study of ill-posed problems from functional analysis.

Nonstandard Analysis, Computability Theory, and metastability
SPEAKER: Sam Sanders

ABSTRACT. We present the surprising connections between computability theory and Nonstandard Analysis initiated in [2]. In particular, we investigate the two following topics and show that they are intimately related via Tao’s notion of metastability ([3]).

  1. (T.1)  A basic property of Cantor space $2^{\mathbb{N}}$ is Heine-Borel compactness: for any open cover of $2^{\mathbb{N}}$, there is a finite sub-cover. A natural question is: how hard is it to compute such a finite sub-cover? We make this precise by analysing the complexity of functionals that, given any $g : 2^{\mathbb{N}} \to \mathbb{N}$, output a finite sequence $f_0, \ldots, f_n$ in $2^{\mathbb{N}}$ such that the neighbourhoods defined from $\overline{f_i}\,g(f_i)$ for $i \leq n$ form a cover of Cantor space.

  2. (T.2)  A basic property of Cantor space in Nonstandard Analysis is Abraham Robinson’s nonstandard compactness, i.e. that every binary sequence is ‘infinitely close’ to a standard binary sequence. We analyse the strength of this nonstandard compactness property of Cantor space, compared to the other axioms of Nonstandard Analysis and usual mathematics.

Our study of (T.1) yields exotic objects in computability theory, while (T.2) leads to surprising results in Reverse Mathematics. Nonetheless, the functionals from (T.1) arise naturally and directly from slight variations of Tao’s notion of metastability. Furthermore, we show that the functionals from (T.1) completely disrupt the elegant ‘Big Five picture’ of Reverse Mathematics, and also destroy the complex structure of the Reverse Mathematics ‘zoo’ ([1]).

[1] Damir Dzhafarov, Reverse Mathematics Zoo,

[2] Dag Normann and Sam Sanders, Computability Theory, Nonstandard Analysis, and their connections, Submitted arXiv:

[3] Terence Tao, Structure and randomness, American Mathematical Society, Providence, RI, 2008, xii+298 pp.

16:00-16:40 Session 11D: Contributed talks
Location: E487
On Kierstead's conjecture
SPEAKER: Maxim Zubkov

ABSTRACT. According to H. Kierstead in paper \cite{Kierstead}, an automorphism $f$ is fairly trivial if for all $x\in L$ there are only finitely many elements between $x$ and $f(x)$. A nontrivial automorphism is called strongly nontrivial if it is not fairly trivial. H. Kierstead's paper \cite{Kierstead} concluded with three conjectures, the main one being as follows.

\textbf{Conjecture.} (H. Kierstead \cite{Kierstead}) Every computable copy of a linear order $\mathcal{L}$ has a strongly nontrivial $\Pi^0_1$ automorphism if and only if $\mathcal {L}$ contains an interval of order type $\eta$.

R. Downey and M. Moses proved that H. Kierstead's conjecture holds for discrete linear orders. C. Harris, K. Lee, and S.\,B. Cooper proved that Kierstead's conjecture holds for $\mathcal{L}\cong{\sum\limits_{q\in\mathbb{Q}}}F(q)$, where $F : \mathbb{Q}\rightarrow \mathbb{N}$ is $\textbf{0}'$-limitwise monotonic. We generalize the previous result to a larger class of linear orders, in which $\mathcal{L}$ can contain both finite blocks and infinite blocks simultaneously.

\begin{theorem}[G. Wu, M. Zubkov]\label{small} Kierstead's conjecture holds for all linear orders $\mathcal{L}$ of the form $\sum\limits_{q\in\mathbb{Q}}F(q)$, where $F : \mathbb{Q}\rightarrow \mathbb{N}\cup\{\zeta\}$ is an extended $\textbf{0}'$-limitwise monotonic function. \end{theorem}

Kierstead's conjecture has thus been verified for a large class of linear orders, which partly explains why it remained open for so long. The following theorem, however, gives a negative solution to the conjecture:

\begin{theorem}[K.M. Ng, M. Zubkov] There exists a function $G:\mathbb{Q}\rightarrow \mathbb{N}\cup\{\zeta\}$, $\textbf{0}'$-limitwise monotonic relative to Kleene's ordinal notation system $\mathcal{O}$, such that the linear order $\mathcal{L}\cong\sum\limits_{q\in\mathbb{Q}}G(q)$ has no subinterval of type $\eta$ and every computable copy of $\mathcal{L}$ has a strongly nontrivial $\Pi^0_1$-automorphism. \end{theorem}

For computable $\eta$-like linear orders we have the following partial result.

\begin{theorem}[K.M. Ng, M. Zubkov] There exists an $\eta$-like computable linear order $\mathcal{L}$ with no subinterval of type $\eta$ such that every computable linear order $\mathcal{L}'$ that is $\Delta^0_2$-isomorphic to $\mathcal{L}$ has a strongly nontrivial $\Pi^0_1$-automorphism. \end{theorem}

\begin{thebibliography}{10} \bibitem{Kierstead}H.\,A. Kierstead, On $\Pi^0_1$-Automorphisms of Recursive Linear Orders, {\it Journal of Symbolic Logic} {\bf 52}(1987), 681-688. \end{thebibliography}


Some Weak Yet Strong restrictions of Hindman's Theorem

ABSTRACT. Hindman's Finite Sums Theorem (HT) is a celebrated result in Ramsey theory stating that any finite colouring of the positive integers admits an infinite set such that all non-empty finite sums of distinct elements from that set have the same colour.
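To make the statement concrete, the monochromaticity condition can be checked directly for a given colouring and a candidate finite set (the theorem, of course, asserts the existence of an infinite such set). An illustrative sketch of ours, with invented function names:

```python
from itertools import combinations

def finite_sums(S):
    """All non-empty sums of distinct elements of the finite set S."""
    return {sum(c) for r in range(1, len(S) + 1)
                   for c in combinations(S, r)}

def monochromatic(S, colour):
    """Do all finite sums of distinct elements of S get the same colour?"""
    return len({colour(n) for n in finite_sums(S)}) == 1

# Under the parity colouring, FS({2, 4, 10}) = {2, 4, 6, 10, 12, 14, 16}
# consists of even numbers only, so the set witnesses monochromaticity.
print(monochromatic([2, 4, 10], lambda n: n % 2))  # True
```

The restrictions of HT discussed below bound which sums must be monochromatic, e.g. sums of at most 2 or 3 distinct terms, or sums of adjacent elements only.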

The strength of HT is known to be between the $\omega$-th Turing Jump and the Halting Set, by seminal results of Blass, Hirst, and Simpson from some thirty years ago. In terms of reverse mathematics, HT is provable from $\ACA_0^+$ and implies $\ACA_0$. Recently Dzhafarov, Jockusch, Solomon and Westrick proved that the restriction of HT to sums of at most 3 distinct terms already implies $\ACA_0$. Yet no upper bound better than $\ACA_0^+$ is known for this restriction. Indeed, it is an open problem in combinatorics whether there is a proof of the restriction of HT to sums of at most 2 elements that does not also establish full HT.

We introduce natural restrictions of Hindman's Theorem for which both a non-trivial lower bound and an upper bound much below $\ACA_0^+$ can be established. We call such principles "weak yet strong" restrictions.

First, we introduce an infinite family of restrictions of HT that are equivalent to $\ACA_0$. Second, we introduce a restriction of HT in which monochromaticity is required only for sums of adjacent elements in the solution set and prove it to be between Ramsey's Theorem for pairs and the Increasing Polarized Ramsey's Theorem for pairs of Dzhafarov and Hirst.

Other related results have been obtained in collaboration with Ko{\l}odziejczyk, Lepore and Zdanowski and will not be discussed in this talk.

16:00-17:00 Session 11E: Contributed talks
Location: E497
Brentano and Frege: The tunnel between analytic and continental theories of language with intentional inexistence and the judgement stroke
SPEAKER: Hannah Berry

ABSTRACT. I would like to address Brentano’s theory of intentionality and Husserl’s development of intentional inexistence. I will use Husserl’s thoughts on noesis and noema (as a direct extraction from Brentano’s intentional object) to highlight the importance of Frege’s judgement stroke and how this logical symbol bridges the gap between analytic and continental philosophy.

Brentano’s theory of intentionality is outlined primarily in his 1874 text, Psychology from an Empirical Standpoint. His most referenced paragraph includes:

‘Every mental phenomenon is characterised by what the Scholastics of the Middle Ages called intentional (or mental) inexistence of an object, and what we might call, though not wholly unambiguously, reference to a content, direction towards an object… This intentional inexistence is characteristic exclusively of mental phenomena.’ (1874; 88)

The theory of intentionality portrayed here suggests that it involves two processes. One of these is that we must have a mode of existence of an object, for example a judgement of the existence of the object and of certain of its properties; simultaneously, our mental act is directed toward the object. The object is not a physical or purely mental object, but rather a refiguration of an object as an intentional object. Brentano’s 1874 text suggests that there are three modes of phenomena: presentation, judgement and emotional phenomena, including love and hate.

As part of the theory of intentionality, we make judgements that the object exists in a particular context. Frege includes the human (intentional) act of making a judgement: the judgement stroke. This intention is included in the Begriffsschrift. This assertion/judgement is necessary for the intentional experience to occur. I would like to address the similarities between Brentano’s intentional inexistence and the need for Frege’s judgement stroke within a phenomenological analysis.

It is important to readdress intentionality with Brentano and Frege in mind, as this could shed light on other aspects of intentionality and develop cohesion in the varying theories of intentionality.

Are points (necessarily) unextended?

ABSTRACT. Ever since Euclid defined a point as that which has no part it has been widely assumed that points are necessarily unextended. It has also been assumed that, analytically speaking, this is equivalent to saying that points or, more properly speaking, degenerate segments--i.e. segments containing a single point--have length zero. In our talk we will challenge these assumptions. We will argue that neither degenerate segments having null lengths nor points satisfying the axioms of Euclidean geometry implies that points lack extension. To make our case, we will provide models of ordinary Euclidean geometry where the points are extended despite the fact that the corresponding degenerate segments have null lengths, as is required by the geometric axioms. The first model will be used to illustrate the fact that points can be quite large--indeed, as large as all of Newtonian space--and the other models will be used to draw attention to other philosophically pregnant mathematical facts that have heretofore been little appreciated.

Why Does Formal Deductive Logic Start With the Classical Greeks?
SPEAKER: Heidi White

ABSTRACT. Many ancient peoples studied logic in the broad sense of argumentation, but the study of formal deductive validity starts with the classical Greeks. For some reason, the only person to invent a study of validity in virtue of form was Aristotle, and all other logicians have had his example to follow. Why?

We contend that formal logic emerged as a result of two factors—one geographical, the other political.

First, unlike other regions of the ancient world, classical Greece had a geography that favored small states, dominated by urban crowds. The ease of navigating the Mediterranean caused the commercial classes to grow, and the small size of these states meant that these same commercial crowds dominated the politics of the classical age. As a result, political questions were settled, not by kings or small groups of nobles, but in mass meetings like the Athenian Assembly. The mechanics of these meetings put special emphasis on public argumentation.

Second, these same crowds, when called to make political decisions, often behaved irrationally. Such crowds had dominated the Athenian Assembly, but when Athens lost its war against Sparta, and then followed with the execution of Socrates, a reaction among intellectuals led to the development of formal logic. Philosophers focused increasingly on the difference between rational argumentation and irrational, and this theme, developed by Plato but later expanded by Aristotle, culminated in the first known system of formal logic.

We attribute the Greek relish for logical demonstration, even in mathematics, to an argumentative political environment, and we draw our argument from our book If A, Then B: How the World Discovered Logic (Columbia University Press).

16:00-17:00 Session 11F: Contributed talks
Location: F299
Determinacy of Boolean combinations of $\Sigma^0_3$ games

ABSTRACT. Welch characterized the ordinal at which winning strategies for all $\Sigma^0_3$ games appear, via $\Sigma_2$ reflection; namely, it is the least ordinal which is the ordinal standard part of a non-standard model which has an infinite nested sequence of pairs of ordinals, the smaller of which is a $\Sigma_2$ substructure of the larger. This reflection property is strictly between $\Sigma_2$ admissibility and $\Sigma_2$ non-projectibility. Montalban and Shore show that this is the beginning of a hierarchy, in that the least ordinal for winning strategies for all games which are alternating differences of $m$-many $\Sigma^0_3$ sets is strictly between the least $m+1$-admissible and $m+1$-non-projectible. Here we show the straightforward generalization of Welch's result, that this ordinal is the least standard part of a model with an infinite nesting of $\Sigma_{m+1}$-elementary pairs. This talk will be an introduction to the subject.

The tree property at $\aleph_{\omega+2}$ with a finite gap

SPEAKER: Š. Stejskalová (Charles University, Department of Logic, Celetná 20, Praha 1, 116 42, Czech Republic)

ABSTRACT. Starting with modest large cardinal assumptions (a hypermeasurable cardinal of a sufficient degree) we construct a model where $\aleph_\omega$ is a strong limit cardinal, the tree property holds at $\aleph_{\omega+2}$, and $2^{\aleph_\omega} = \aleph_{\omega+n}$ for any fixed $2 \le n < \omega$. The proof uses a variant of the Mitchell forcing and is based on a product-style analysis reminiscent of the original construction of Cummings and Foreman in The tree property, Adv. in Math., 133, 1998. The results are joint with Sy D. Friedman and R. Honzik.

Algebraic New Foundations
SPEAKER: Paul Gorbow

ABSTRACT. NF is a set theory obtained by putting a syntactic constraint (stratification) on the comprehension schema; it proves that there is a universal set V. NFU (NF with atoms) is known to be consistent through its close connection with models of conventional set theory that admit automorphisms.

The theory ML_CAT, in the language of categories, is introduced and proved to be equiconsistent with NF (analogous results are obtained for intuitionistic and classical NF, with and without atoms). ML_CAT is intended to capture the categorical content of the predicative class theory of NF. NF is interpreted in ML_CAT through the categorical semantics. Thus, the result enables the application of category-theoretic techniques to meta-mathematical problems about NF-style set theory. For example, an immediate corollary is that NF is equiconsistent with NFU + |V| = |P(V)|. This was already proved by Crabbé, but becomes intuitively obvious in the categorical setting.

Just like a category of classes has a distinguished subcategory of small morphisms, a category modelling ML_CAT has a distinguished subcategory of type-level morphisms. This corresponds to the distinction between sets and proper classes in NF. With this in place, the axiom of power objects familiar from topos theory can be appropriately formulated for NF. It turns out that the subcategory of type-level morphisms contains a topos as a natural subcategory.