Unveiling Biases in Physics: the Case of Higher-Order Equations and the Quest for a Theory of Quantum Gravity
ABSTRACT. During periods of normal science (Kuhn 1962), scientists typically accept the foundational premises of their paradigm uncritically, leaving its methodological assumptions unexplored. Drawing on their joint work with Fredrik Andersen, Rani Lill Anjum and Elena Rocca (2019, 2024) emphasize the critical importance of identifying and reflecting on philosophical biases in order to uncover hidden sources of scientific disagreement and foster interdisciplinary understanding.
This talk applies their approach to an ongoing debate in theoretical physics: the formulation of a theory of quantum gravity. We argue that biases within the physics community, particularly those surrounding the use of the Lagrangian formalism, have shaped the development and evaluation of candidate theories in significant ways.
Currently, most fundamental physical laws, including those describing gravity, are expressed in the Lagrangian formalism. This framework, among the most widely accepted in physics, is grounded in historical and conceptual traditions that have shaped the field for centuries. Crucially, it constrains fundamental laws to take the form of differential equations of at most second order in time, a stance justified by Ostrogradsky’s no-go theorem, which demonstrates the instability of general higher-order theories (Swanson 2022). Consequently, physicists often dismiss theories of order higher than second in time, treating higher-order theories as a priori unviable (Collavini and Ansoldi, under review). This happens even though there are hints that higher-order theories may be needed, as may be the case in quantum gravity (Weinberg 1995).
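As a minimal sketch of the standard construction behind the theorem (not specific to any of the works discussed here): for a Lagrangian L(q, q̇, q̈) that is non-degenerate in q̈, Ostrogradsky’s canonical variables and Hamiltonian are

Q₁ = q, Q₂ = q̇, P₁ = ∂L/∂q̇ − (d/dt)(∂L/∂q̈), P₂ = ∂L/∂q̈,

H = P₁Q₂ + P₂ q̈(Q₁, Q₂, P₂) − L(Q₁, Q₂, q̈(Q₁, Q₂, P₂)),

where non-degeneracy guarantees that q̈ can be solved for in terms of Q₁, Q₂ and P₂. Since H depends on P₁ only through the linear term P₁Q₂, it is unbounded below: this is the instability the no-go theorem turns on.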
Collavini and Ansoldi, however, challenge this orthodoxy. They argue that Ostrogradsky’s conclusions depend on assumptions specific to second-order dynamics and do not necessarily apply to broader classes of higher-order formulations. By critiquing the higher-order generalization of the Lagrangian formalism, they propose to reconsider higher-order equations seriously, in light of symmetry arguments and of results, obtained with alternative approaches to higher-order equations, that are incompatible with the standard verdict. This reconsideration, in turn, might offer new insights toward a reconciliation of general relativity and quantum mechanics. In doing so, Collavini and Ansoldi’s work exemplifies a willingness to question deeply embedded methodological assumptions, aligning with Anjum and Rocca’s call for philosophical awareness in scientific practice.
This talk argues that the uncritical appeal to the Lagrangian formalism is underpinned by specific biases and value judgments. While the Lagrangian approach has proven instrumental in identifying fundamental physical quantities such as energy and in unifying diverse physical theories, its dominance reflects a rationalist preference for elegant, parsimonious frameworks, a preference that ultimately fueled the quest for a “theory of everything” that would eventually be expressed in Lagrangian terms.
Yet this rationalist bias subtly influences both the development and the evaluation of physical theories, marginalizing alternative approaches and methodologies: the physics community often undervalues results derived from non-Lagrangian frameworks, limiting the scope of theoretical progress.
Building on Anjum and Rocca’s insights, we show that Collavini and Ansoldi’s critique represents a paradigmatic case of confronting implicit assumptions. While the broader implications of their proposal remain underexplored, their willingness to challenge conventional wisdom underscores the need for a pluralistic approach to theorizing gravity. By integrating higher-order dynamics into discussions of quantum gravity, physicists may uncover new pathways for resolving the longstanding tensions between general relativity and quantum mechanics.
This talk also highlights the broader philosophical implications of this critique. It illustrates how reflecting on implicit assumptions can enrich scientific methodology, fostering greater conceptual clarity and inclusivity in evaluating alternative theories. In doing so, it underscores the mutually enriching relationship between philosophy and science. By addressing the philosophical biases embedded in formal tools like the Lagrangian framework, scientists can more effectively navigate the conceptual challenges inherent in quantum gravity research and beyond.
Ultimately, this talk aims to demonstrate how methodological and philosophical critique can not only deepen theoretical understanding but also drive practical advancements in scientific inquiry, illustrating the value of philosophical reflection for the formulation of a theory of quantum gravity.
Conceptions of Emergence in Quantum Gravity: Challenges and Perspectives
ABSTRACT. Quantum gravity (QG) refers to a family of research programmes that aim to elucidate physics at high-energy regimes, where both quantum and relativistic effects are expected to become relevant. Theories of QG are intended to complement and ultimately reduce to general relativity (GR) within overlapping, low-energy regimes. The expected breakdown of the general relativistic framework at high energy suggests that spacetime may become ill-defined in such contexts ("disappearance of spacetime"). Consequently, numerous theories of QG postulate non-spatiotemporal degrees of freedom as fundamental structures for describing phenomena near or beyond the Planck length. This is in sharp contrast with the success of GR at low energy, which instead relies on spatiotemporal degrees of freedom. The contrast motivates the search for a reconciliation of the two perspectives, raising the "problem of the emergence of spacetime" (EST): How can spacetime emerge from non-spatiotemporal degrees of freedom at the appropriate scale?
The literature acknowledges that the concept of emergence may not be necessary for all theories of QG. However, its introduction often serves to fulfil specific metatheoretical roles, particularly in ensuring the correspondence and reduction of the theory to GR. Discussions concerning emergence are grounded in various conceptions, including strong versus weak and hierarchical versus flat emergence. These conceptions aim to provide the necessary conceptual framework to understand how spacetime can emerge from non-spatiotemporal degrees of freedom. Two prominent proposals are the effective theory and the functionalist approaches. Both offer valuable insights into how spacetime can emerge. Nevertheless, general examinations of the notion of emergence tend to be too broad to address the EST in particular contexts. Furthermore, particular approaches face challenges related to the extension of their applicability across various domains.
In this presentation, I contend that understanding the EST requires a more detailed examination of how spacetime is recovered from underlying degrees of freedom in theories of QG. This requires not only a clearer conception of emergence but also the identification of specific approaches (or families of mechanisms) for this emergence.
To prove this point, I first define the metatheoretical role of emergence with respect to reduction in QG. I then offer a precise reconstruction of the problem of the EST in light of the disappearance of spatiotemporal degrees of freedom. Afterwards, I critically examine various conceptions of emergence that claim to address the problem.
I argue that specifying the appropriate conception of emergence is a prerequisite for solving the EST. This identification is constrained by both the correspondence principle and the relation of relative fundamentality between QG and GR. Moreover, I emphasise that any chosen conception must align with a further specification of appropriate approaches to emergence within specific theories of QG. In particular, candidate approaches will depend on the level of non-spatiotemporality introduced to describe the quantum-gravitational regime.
In conclusion, I concur with other scholars (e.g. Crowther 2013) who suggest that "informal applications" of the effective theory approach hold promise in addressing the EST. By addressing these issues, this paper intends to contribute to the broader discourse on how theories of QG can reconcile the disappearance and emergence of spacetime, offering a critical perspective on the challenges and potential solutions in the quest for a theory of QG.
References:
Crowther, Karen (2013), "Emergent spacetime according to effective field theory: From top-down and bottom-up", Studies in History and Philosophy of Modern Physics 44(3): 321-328.
On the Role of Denotation in Contemporary Theories of Representation
ABSTRACT. In contemporary accounts of scientific representation across the board, denotation has been taken to have an important role. It is posited by many philosophers of different strands of thought that denotation somehow establishes the aboutness of a representation (Hendry and Psillos 2007, Greenberg 2013, and Contessa 2007, just to mention a few). That is, to know what an object (or part of the object) represents is to know what the object (or part of it) denotes. In fact, Frigg and Nguyen (2017; 2020), Elgin (2010), R.I.G. Hughes (1997) draw upon Goodman’s (1968) work and elevate the status of denotation to the level of “core of representation.”
Despite the agreement on denotation having an important role in scientific representation, little has been said about what determines the denotation relation. Among those who comment on the requirements of the denotation relation, Elgin and Goodman consider it a matter of stipulation and fiat. Frigg and Nguyen (henceforth, FN) do not defend a clear view on this matter and relegate the issue to the philosophy of language. They claim that mainstream accounts of denotation in the philosophy of language, including the direct reference account and the descriptivist account, which were mainly about proper names, can readily be deployed to decide what represents what.
In this presentation, I will make a case that, contrary to FN’s view, a version of the descriptivist theory of reference in the philosophy of language does not fit with DEKI. One of DEKI’s tenets is that denotation alone can tell us the aboutness of a representation without recourse to other elements of DEKI. I will argue, though, based on a version of the descriptivist theory, that the first element of DEKI (denotation) becomes dependent on other elements of the account. I will extend this conclusion to all denotation-based accounts that share the gist of the mentioned tenet. Additionally, according to the descriptivist theory of denotation, DEKI (or any denotation-based account, for that matter) would have to be an account of accurate representation, which DEKI is not meant to be. Moreover, those theories that appear to be compatible with denotation-based accounts (including DEKI) have problems of a different nature: there are examples of representation for which these theories are not valid. In the examples I will present, those theories are unable to get the target right. This latter problem, again, extends beyond DEKI and encompasses any account that views denotation as the means of target fixation in representation.
The constructive part of this essay attempts to amend the DEKI account of representation. The amended DEKI does not presuppose that a single part of the account is responsible for answering what represents what; rather, the account in its entirety is responsible for this task. In this way, we avoid the incompatibility mentioned above, and the counterexamples are blunted against the amended DEKI. I will suggest that this amendment could, in principle, be applied to other denotation-based accounts too.
The moral I would like to draw from the present discussion is that the question “What does an object represent (in a particular instance)?” cannot be divorced from the question “How does the object represent (in that particular instance)?”.
ABSTRACT. It has been a matter of great contention how one might demarcate what counts as part of the logical vocabulary. In my talk I want to combine three key thoughts into a new way of demarcating the logical vocabulary. These are the following: that logic should be topic-neutral; that the topic neutrality of logic is best captured through the subject matter transparency of the logical vocabulary; and that some expressions containing non-logical vocabulary may vary in subject matter across modal space.
Theories that make heavy use of the notion of subject matter have seen a resurgence (Hawke, 2017; Yalcin, 2016; Fine 2020; Plebani and Spolaore, 2020, 2023) in the last decade, since the publication of Yablo’s (2014) Aboutness. Inspired by the work of philosophers such as Belnap (1976) or Lewis (1988), many have identified the topics of sentences with questions under discussion in a given conversational context. In the context of a propositional language, it has further been argued that topics display mereological structure and that the extensional connectives do not contribute to the subject matter of the sentences they feature in. So, for instance, the topic of a conjunction contains the topics of its conjuncts as parts and is nothing more than their fusion.
Some (Hawke, 2017; Plebani and Spolaore, 2020; Badura, 2021) including myself, have attempted to extend the thought that the logical vocabulary is subject matter transparent to the first order case. This raises some interesting questions about the behaviour of the quantifiers, including how they function in a modal setting with variable domains. If I say “All cats are cute” in the actual world, or in a possible world with a wholly different cat population, intuitively I have said something different about different individuals. This raises the natural thought, which Fine (1980) and Ferguson (2023) have modelled before, of introducing world-relativization in subject matter assignments. One can then capture the thought that sentences like “All cats are cute” vary in subject matter across modal space.
On the back of such world-relativization, we may also define a world-invariant notion of subject matter, by simply fusing the subject matters that an expression gets assigned at each world. With this notion in mind, one can present a plausible account of how to demarcate the logical vocabulary even for modal formulae. For instance, suppose we want to rule in the S5 necessity, ☐, as part of the logical vocabulary. Then by the subject matter transparency criterion, we would need to have that the subject matter of ☐A is just the subject matter of A. For world-relative subject matters this is a non-starter: the subject matter of ☐A in a given world will include how things are in all the other worlds (in S5), whereas the subject matter of A only concerns how things are at the given world. Considering their world-invariant subject matters, however, it is indeed the case that the subject matter of ☐A is the same as the subject matter of A. For weaker modalities than S5, one might yet formulate a weaker criterion still. The subject matter of ☐A will only concern what goes on at the R-accessible worlds, and we could say that such modalities are part of the logical vocabulary since the invariant subject matter of ☐A at the R-accessible worlds is the same as the invariant subject matter of A at those same worlds.
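Schematically, on one natural way of regimenting this proposal (with s_w(A) for the subject matter of A at world w and ⊕ for topic fusion, both notational conventions assumed here rather than taken from any particular author), the world-invariant subject matter is

s(A) = ⊕_{w ∈ W} s_w(A),

and the transparency criterion reads: ☐ belongs to the logical vocabulary only if s(☐A) = s(A) for every A. The weaker criterion for sub-S5 modalities restricts the fusion to the R-accessible worlds, requiring ⊕_{w′: wRw′} s_{w′}(☐A) = ⊕_{w′: wRw′} s_{w′}(A).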
Something must have gone awry, you might think: after all, we are free to describe whatever accessibility relation we please, and even the most prominent epistemic and deontic logicians resist ruling deontic and epistemic operators in as part of the logical vocabulary. I agree with the spirit of these concerns, but I don’t think the criterion is wrongheaded: it just needs to be complemented with ideas present in the invariantist project for defining the logical vocabulary. In the cases of metaphysical, epistemic and deontic modalities (bar, perhaps, limit cases such as a priori knowability), in defining the accessibility relation one has to appeal to facts concerning the identity of objects in the model. For instance, on a standard reading, the epistemic accessibility relation tells us what is compatible with the total evidence of a particular epistemic agent. But this will depend on facts about the agent’s cognitive system, along with various information about their context, such as their spatiotemporal position and their prior stock of knowledge. No such information is needed to define other accessibility relations, which are characterized simply in terms of how they relate worlds: for instance, whether they are serial, symmetrical, etc. So even the modality D may be part of the logical vocabulary, as long as we don’t add to it the interpretation that it represents a deontic modality.
Why Centaur is Not a Unified Model of Human Cognition
ABSTRACT. The quest for a unified model of human cognition is one of the most ambitious challenges in AI research. Centaur has been introduced as the first serious candidate for such a model (Binz et al., 2024). If this claim were valid, it would demonstrate the feasibility of building general-domain models of human cognition through a data-driven approach, and a further development of this research program could potentially address one of the fundamental challenges in AI research. Centaur was developed by fine-tuning Llama 3.1 70B, a Transformer-based model, using the Quantized Low-Rank Adaptation (QLoRA) technique. The training process relied on Psych101, a large-scale dataset comprising 160 psychological experiments from the literature, involving 60,092 participants and yielding 10,681,650 data points on decision-making, memory, and learning.
Despite promising results, Centaur’s viability as a unified model of human cognition is undermined by several theoretical and methodological inconsistencies. The first limitation concerns its functionalist nature: as a Transformer-based architecture, it differs fundamentally from the structure of the human brain, making direct comparisons problematic in the absence of a robust theoretical justification (Lieto, 2021). Centaur’s candidacy rests primarily on its performance on the 10% of Psych101 data withheld from training. In these tests, Centaur outperformed both the baseline model and 14 alternative cognitive models, achieving higher accuracy in predicting human behavior even in tasks featuring different cover stories, experimental structures, and theoretical domains (Jansen et al., 2021). Additionally, its internal representations appear more aligned with human cognitive patterns than the baseline model’s, as suggested by comparisons with participants’ fMRI data. In the context of artificial neural networks, an internal representation refers to the distribution of neural activations across the model’s layers, providing a temporary snapshot of its computational state.
These conclusions, however, raise several concerns. The metric used to assess the model’s goodness of fit, pseudo-R², is widely criticized in the literature, and the claim that Centaur generalizes to "entirely new domains" appears to be an extrapolation from the training dataset rather than a genuine extension to novel contexts (King, 1990; Hagquist & Stenbeck, 1998; Reisinger, 1997; Figueiredo et al., 2011; Cohen et al., 2013; Hemmert et al., 2018; Jorgensen & Williams, 2020). Furthermore, the comparison with fMRI data is theoretically problematic and relies on philosophically debatable assumptions. Finally, the authors evaluate Centaur’s candidacy as a unified model of human cognition through Newell’s test, a subjective and controversial criterion, which Centaur nonetheless fails to meet (Anderson & Lebiere, 2003). This contribution will critically examine these issues, highlighting the limitations of Centaur and demonstrating why, despite its relevance in the field of LLMs, it cannot be considered a viable candidate for a unified model of human cognition.
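For reference, the goodness-of-fit metric at issue can be stated explicitly. On McFadden’s standard log-likelihood-based formulation (assuming, as seems likely, that some variant of this is what Binz et al. report):

pseudo-R² = 1 − (ln L_M)/(ln L_0),

where L_M and L_0 are the maximized likelihoods of the fitted model and of a null (intercept-only) model. Unlike ordinary R², this quantity has no proportion-of-explained-variance interpretation, and its benchmarks are sensitive to sample and design characteristics, which is the thrust of the criticisms cited above (e.g. Hemmert et al., 2018).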
References
Anderson JR, Lebiere C. (2003). The Newell Test for a theory of cognition. Behavioral and Brain Sciences. 26(5):587-601. doi:10.1017/S0140525X0300013X
Binz M, Akata E, et al. (2024). Centaur: a foundation model of human cognition. arXiv preprint. Available from: https://arxiv.org/abs/2410.20268.
Figueiredo Filho, D. B., Silva Júnior, J. A., & Rocha, E. C. (2011). What is R² all about? Leviathan (São Paulo), 3, 60-68.
Hagquist, C., & Stenbeck, M. (1998). Goodness of Fit in Regression Analysis: R² and G² Reconsidered. Quality & Quantity, 32, 229–245. https://doi.org/10.1023/A:1004328601205
Hemmert, G. A. J., Schons, L. M., Wieseke, J., & Schimmelpfennig, H. (2018). Log-likelihood-based Pseudo-R2 in Logistic Regression: Deriving Sample-sensitive Benchmarks. Sociological Methods & Research, 47(3), 507-531. https://doi.org/10.1177/0049124116638107
Jansen, R.A., Rafferty, A.N. & Griffiths, T.L. (2021). A rational model of the Dunning–Kruger effect supports insensitivity to evidence in low performers. Nat Hum Behav 5, 756–763.
King G. (1990). Stochastic Variation: A Comment on Lewis-Beck and Skalaban’s “The R-Squared.” Political Analysis. 2:185-200. doi:10.1093/pan/2.1.185
Lieto, A. (2021). Cognitive Design for Artificial Minds. Routledge.
Reisinger, H. (1997). The impact of research designs on R² in linear regression models: An exploratory meta-analysis. Journal of Empirical Generalisations in Marketing Science, 2(1).
Rethinking the design stance: a cognitivist perspective on the attribution of mind to robots and AI systems
ABSTRACT. The attribution of mental states to robots and AI systems has been explored through various frameworks, including Theory of Mind (ToM) and Dennett’s intentional systems theory (1971). A large number of studies investigate whether people attribute intentionality to robots and AI systems, and under what conditions this phenomenon occurs. Empirical studies in HRI have typically treated the design stance (in Dennett’s framework) as a non-mentalistic stance, on which explanations and predictions of a system’s behavior refer neither implicitly nor explicitly to the system’s mind. This paper aims to develop the notion of the design stance from a perspective that partially departs from the traditional view adopted by empirical studies in HRI. It will be argued that there are at least two different ways to outline the design stance: one non-mentalistic; the other, here identified as the folk-cognitivist stance, implying the attribution of a mental structure or mechanism to the system. In the framework proposed, theoretical models of robots’ minds may be based on a theoretical vocabulary that does not rely on the notions of beliefs, desires, propositional attitudes and rationality and is more in line with that of cognitive science, taking the form of a functional decomposition of the system into modules that process representations. These claims will be substantiated by examples from ongoing empirical research.
ABSTRACT. Deflationism about truth encompasses a set of philosophical standpoints that can be concisely summarized by the following two claims: (1) truth is a light concept, and (2) its raison d’être is its logico-linguistic function.
In formal philosophy, truth is usually investigated through axiomatic theories. To this end, a suitable base theory is extended with a new unary predicate, together with axioms governing its use. A canonical example of such a theory is CT⁻, which comprises the axioms of Peano Arithmetic (PA) together with axioms stating that the truth predicate behaves compositionally on arithmetical sentences.
In this context, the first deflationist claim is typically formulated as the requirement that a deflationary theory of truth should be conservative over its base theory. That is, for every formula in the base theory's language, if the deflationary theory proves it, then the base theory must also prove it. This formulation gives rise to the so-called conservativeness argument against deflationism. Proponents of this argument contend that a correct theory of truth should satisfy conditions that make it nonconservative, such as including the compositional axioms of CT⁻ alongside all instances of induction for the language of truth.
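In symbols, for a truth theory Th over a base theory B formulated in language L_B, conservativeness is the requirement that

for every L_B-sentence φ: if Th ⊢ φ, then B ⊢ φ.

In the case at hand, the requirement is that the truth axioms added to PA prove no arithmetical sentences beyond those PA already proves.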
Hartry Field’s standard rejoinder to this argument asserts that non-conservativity arises from the combination of axioms essential to truth (such as compositionality) and those that are fundamentally mathematical and tied to the specific choice of a base theory (such as induction for arbitrary predicates). Although intuitively compelling, Field’s response to the conservativeness argument has been challenged by recent advancements in axiomatic truth theories.
A significant result, the Many Faces Theorem (MFT), demonstrates that over CT⁻, several principles—including Δ₀-induction for the language of truth, the global reflection principle for PA, and disjunctive correctness axioms—are equivalent. Moreover, all these principles are non-conservative over PA and are arithmetically equivalent to ω-iterated uniform reflection over PA.
The global reflection principle for PA intuitively asserts that all theorems of PA (the base theory of CT⁻) are true. The disjunctive correctness axiom generalizes the standard compositional axiom for disjunction, asserting that arbitrarily long finite disjunctions are true if and only if at least one of the disjuncts is true. Another form of this axiom states only the existence of a true disjunct in a true disjunction, making it prima facie weaker than the full disjunctive correctness principle.
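Schematically, in one common formulation, disjunctive correctness is the axiom

∀n ∀φ₀, …, φₙ (T(φ₀ ∨ … ∨ φₙ) ↔ ∃i ≤ n T(φᵢ)),

understood as quantifying over (codes of) arbitrary finite, possibly nonstandard-length disjunctions; the prima facie weaker form just mentioned retains only the left-to-right direction.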
MFT demonstrates that, within CT⁻, properties that are essentially truth-theoretic (such as disjunctive correctness) and those that are mathematical (such as Δ₀-induction) are indistinguishable. Thus, as intuitive as Field’s distinction may seem, it does not provide a way for deflationists to escape the conservativity argument. The addition of an essentially truth-theoretic principle of disjunctive correctness, or even a weaker form of it, suffices to render the compositional theory of truth non-conservative.
This paper argues that the inability to distinguish between genuinely mathematical and genuinely truth-theoretical principles is a peculiarity of CT⁻ rather than a feature of all compositional theories of truth. In particular, it is shown that a prominent compositional theory of self-referential truth, KF, can distinguish between disjunctive correctness, the global reflection principle for PA, and Δ₀-induction. Moreover, KF, together with the truth-theoretic principle of disjunctive correctness, remains a conservative extension of PA, and these three principles differ in their proof-theoretic strength.
Our analysis of the proof-theoretic strength of these principles in the context of another self-referential theory of truth—the Friedman-Sheard theory (FS⁻)—suggests that the conflation of mathematical and truth-theoretic principles, as well as their non-conservativity, may be linked to the classical nature of the truth predicate. Over FS⁻, Δ₀-induction for the language of truth and disjunctive correctness are equivalent, just as they are in CT⁻, though both are stronger than the global reflection principle for PA. If this diagnosis is correct, proponents of conservative deflationism must confront a dilemma: either reject the full classicality of the truth theory or abandon Field’s intuitive distinction between mathematical and truth-theoretic principles.
Can truth be defined? On a recent attempt by Rumfitt and an attempt at a correction
ABSTRACT. We introduce HOPKF (Higher-order Partial Kripke Feferman), a system of higher-order partial logic with a definition of truth that enables us to derive all axioms of PKF [1], a first-order axiomatic theory of truth. The derivation is inspired by a recent attempt by Rumfitt [2]. We will argue that Rumfitt’s attempt fails, but that HOPKF can solve some of the problems of his derivation. We show that HOPKF is a sound higher-order system with an adequate, self-applicable truth predicate and examine the recovery of PKF’s axioms within it.
Tarski famously showed that it is impossible to define truth for a language within that same language, and suggested that a definition of truth for a language must be given in a more comprehensive one [3]. On this approach, no language contains its own truth predicate.
Tarski’s definition of truth did not satisfy those who wanted to construct a semantics for languages with a self-applicable truth predicate. And philosophically, the definitional approach to truth has been frowned upon, as it attempts to explain truth with more basic concepts, which may, however, be unclear (e.g. correspondence) and subject to technical and fatal limitations, such as those just explained.
Consequently, by the 1980s, focus shifted to axiomatic truth theories, which treat truth as a primitive and analyze its behavior without relying on definability. More recently, however, truth definitions have regained interest due to the revitalisation of higher-order quantification in philosophy. Early motivations for HO definitions of truth appear in works of Kalderon [4] and Soames [5], but Künne [6] was the first to explicitly connect higher-order truth definitions to Tarski’s work.
Even more recently, Rumfitt attempted to derive PKF within a system of higher-order partial logic equipped with the following definition of truth:
(ParT) T(⌜A⌝) ⇔ ∃P Says(⌜A⌝, P)/∀Q(Says(⌜A⌝, Q) → Q),
where ’/’ is the transplication operator of partial logic (P/Q is true iff both P and Q are true, false if P is true and Q is false, and undefined otherwise), ⌜A⌝ is a code of an eternal sentence (a sentence whose truth value does not depend on the context in which it is uttered), and Says(⌜A⌝, P) indicates that the eternal sentence ⌜A⌝ expresses the proposition P. The intended interpretation, according to Rumfitt, is that a sentence is true iff it is meaningful (it expresses something) and whatever proposition it expresses is the case.
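Tabulating the clauses just given for ’/’ (with t, f, u for true, false, undefined):

P/Q      Q = t   Q = f   Q = u
P = t      t       f       u
P = f      u       u       u
P = u      u       u       u

so a transplication P/Q takes a classical value only when its first component is true.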
We show how Rumfitt's derivation fails. To derive the axioms of PKF, Rumfitt proposes axioms that regulate the Says(·, ·) predicate, including
(Conj) Says(⌜A ∧ B⌝, P ∧ Q) ⇔ Says(⌜A⌝, P) ∧ Says(⌜B⌝, Q)
However, Rumfitt also accepts the substitution of equivalent formulae even within the scope of the Says(·, ·) predicate [2, p. 170]. From this we can show that the system collapses into one in which there is only one expressible proposition. Let A and B be two sentences, each expressing a unique proposition: Says(⌜A⌝, P) and Says(⌜B⌝, Q). Then, by (Conj),
Says(⌜A ∧ B⌝, P ∧ Q).
Since P ∧ Q and Q ∧ P are equivalent in partial logic, we obtain
Says(⌜A ∧ B⌝, Q ∧ P ).
By (Conj) we get
Says(⌜A⌝, Q) ∧ Says(⌜B⌝, P).
From this we can conclude both Says(⌜A⌝, Q) and Says(⌜B⌝, P). Since by assumption A expresses only one proposition, P, it follows that P and Q must be the same. This easily entails that there is only one expressible proposition.
Beyond this collapse, Rumfitt’s truth predicate is not fully self-applicable, as it excludes higher-order inputs. Crucially, this undermines the appeal of adopting a Kripkean theory of truth. Furthermore, we show that incorporating the eighth axiom of PKF, which states that only sentences can be true, makes Rumfitt’s system unsound.
To avoid the collapse and to ensure a self-applicable, formally adequate truth predicate, we introduce HOPKF, a PKF-like system with a Kripkean fixed-point semantics that incorporates higher-order vocabulary while restricting higher-order quantifiers to ⊤ and ⊥. We prove that the semantics satisfies [[Tr(t)]] = ⊤ iff [[∃P Says(t, P)/∀Q(Says(t, Q) → Q)]] = ⊤, and likewise [[Tr(t)]] = ⊥ iff [[∃P Says(t, P)/∀Q(Says(t, Q) → Q)]] = ⊥. We also examine the soundness of PKF’s original axioms within this framework.
Finally, we will explore avenues for future work.
References
[1] V. Halbach and L. Horsten, “Axiomatizing Kripke’s theory of truth,” The Journal of Symbolic Logic, vol. 71, no. 2, pp. 677–712, 2006.
[2] I. Rumfitt, in Oxford Studies in Philosophy of Language, vol. I, OUP, 2019, pp. 148–177.
[3] A. Tarski, “Der Wahrheitsbegriff in den formalisierten Sprachen,” Studia Philosophica: Commentarii Societatis Philosophicae Polonorum, vol. 1, pp. 261–405, 1936.
[4] M. E. Kalderon, “The transparency of truth,” Mind, vol. 106, no. 423, pp. 475–497, 1997.
[5] S. Soames, Understanding Truth, Oxford University Press, 1999.
[6] W. Künne, Conceptions of truth. Clarendon Press, 2003.
ABSTRACT. Anscombe ([1957] 2000) famously argued that knowledge of our intentional actions cannot arise from observation. As she stated: if Z is an intentional action, “it is not by observation that one knows one is doing Z […] in so far as one is observing […] that Z is actually taking place, one’s knowledge is not the knowledge that a man has of his intentional actions” (50). In this paper, we challenge this view by arguing that it lacks empirical support.
First, we clarify Anscombe’s view by considering the role of proprioception in neurologically healthy individuals. Proprioception, as the capacity to represent (González-Grandón et al. 2021; Longo et al. 2010) and self-ascribe (Brewer 1995; Evans 1982; Serrahima 2024) bodily movements, appears well-suited to underpin non-observational knowledge of intentional actions under normal circumstances (Balslev et al. 2007; Farrer et al. 2002; Fourneret et al. 2003; Schwenkler 2019).
Next, we examine the case of patients with deafferentation, who lack proprioceptive capacities due to nerve damage. Contrary to Anscombe’s view, we argue that such individuals can know observationally what they do when acting intentionally (Evans et al. 2015; Verdejo 2023). Empirical evidence suggests that deafferented subjects rely on visual feedback to perform intentional movements of their affected body parts (Blouin et al. 1993; Cole and Paillard 1995; Ghez et al. 1990, 1995; Renault et al. 2018; Wong 2018). We argue that this reliance on visual information allows them to form true, reliable beliefs about their actions, satisfying the conditions for practical knowledge.
By means of an empirically grounded counterexample, our account challenges Anscombe’s claim that practical knowledge of intentional actions is necessarily non-observational.
Intentionality in Social Cognition: Evidence for an Enactive Understanding
ABSTRACT. Since the foundation of cognitive science, intentionality has been conceived, following Brentano, as the defining feature of mental states: their being directed toward something, their being about an object or a state of the world (Jacob 2003). This notion of intentionality has played a crucial role in shaping the study of cognition, including in the domain of social cognition, where it is considered foundational to the mechanisms underlying how we understand and predict the actions of others (Frith & Frith 2012).
A central framework within social cognition studies is the Theory of Mind (ToM; Baron-Cohen 1995), which interprets understanding others as a cognitive process involving the inference of mental states that underlie and explain observable behavior. Following the Brentanian tradition, ToM assumes that mental states are internal representations that must be inferred from external cues. Consequently, this perspective conceptualizes social cognition as a primarily inferential process occurring privately within the subject’s hidden mind (see Gallagher 2020).
This framework faces significant challenges, particularly considering empirical evidence showing that intentionality is not attributed solely to human agents but also to non-human entities, including animated characters and robots (Jacob 1997; Ziemke et al. 2015). Within ToM, such instances are typically explained by distinguishing between original and derived intentionality (Dennett 1987; Searle 1980): non-human agents are not considered intentional in their own right but are instead thought to acquire a secondary form of intentionality. This mechanism is also considered to form the basis of anthropomorphism (Hortensius et al. 2021).
While this explanatory model remains dominant in cognitive science, it is not without its limitations. I propose that research in Human-Robot Interaction (HRI) indicates that the attribution of intentionality to robots does not always align with ToM-based assumptions. Experimental studies investigating whether robots are perceived as intentional agents have revealed inconsistencies in the way intentionality is assessed and conceptualized. I suggest that these inconsistencies arise from deeper epistemological tensions within the ToM framework, which simultaneously relies on computational models of cognition and folk psychological interpretations of agency (see Haugeland 1990). As a result, experimental methodologies often incorporate incompatible assumptions, leading to conceptual and methodological deadlocks.
In this talk, I propose an alternative account of intentionality, drawing on the enactive approach to cognition (Varela et al. 1991). Enactivism redefines intentionality as an emergent feature of embodied and situated engagement with the environment (Varela 1992; Gallagher 2017). Applied to the study of intentionality in social cognition, this framework shifts the focus from a model of attributing intentionality to a model of detecting intentionality as a property that emerges through interaction.
I will show that the enactive reconceptualization of intentionality offers several advantages. First, it provides a more robust framework for interpreting empirical findings, particularly those related to anthropomorphism (Damiano & Dumouchel 2018). Rather than assuming that anthropomorphism is simply a projection of human cognitive structures onto non-human entities, an enactive account recognizes that attributions of intentionality emerge from the dynamic interactions between humans and non-human agents. Second, it resolves the epistemological contradictions inherent in ToM-based approaches by moving beyond the rigid dichotomy between original and derived intentionality. The ToM framework, by treating intentionality as an individual property that is either inherent or transferred, remains tied to solipsistic assumptions about cognition. By contrast, an enactive account challenges these assumptions, offering a model of intentionality that is fundamentally relational and co-constituted in interactive engagement with the world.
Baron-Cohen, S. (1995). Mindblindness: An essay on autism and theory of mind. MIT press.
Damiano, L., & Dumouchel, P. (2018). Anthropomorphism in human–robot co-evolution. Frontiers in psychology, 9, 468.
Dennett, D. C. (1987). The intentional stance. MIT Press
Frith, C. D., & Frith, U. (2012). Mechanisms of social cognition. Annual review of psychology, 63(1), 287-313.
Gallagher, S. (2017). Enactivist interventions: Rethinking the mind. Oxford University Press.
Gallagher, S. (2020). Action and interaction. Oxford University Press.
Haugeland, J. (1990). The intentionality all-stars. Philosophical perspectives, 4, 383-427.
Hortensius, R., Kent, M., Darda, K. M., Jastrzab, L., Koldewyn, K., Ramsey, R., & Cross, E. S. (2021). Exploring the relationship between anthropomorphism and theory‐of‐mind in brain and behaviour. Human brain mapping, 42(13), 4224-4241.
Jacob, P. (2003). Intentionality. In The Stanford Encyclopedia of Philosophy (Fall 2003 Edition).
Jacob, P. (1997). What minds can do: Intentionality in a non-intentional world. Cambridge University Press.
Searle, J. R. (1980). The intentionality of intention and action. Cognitive science, 4(1), 47-70.
Varela, F. J. (1992). Autopoiesis and a biology of intentionality. In Proceedings of the Workshop Autopoiesis and Perception, edited by B. McMullin, 4–14. Dublin City University.
Varela, F. J., Thompson, E., & Rosch, E. (1991). The Embodied Mind. MIT Press.
Ziemke, T., Thill, S., & Vernon, D. (2015). Embodiment is a double-edged sword in human-robot interaction: Ascribed vs. intrinsic intentionality. In Proceedings of the workshop on cognition: A bridge between robotics and interaction (pp. 9-10).
Emergence and qualitative novelty. The case of chimericity
ABSTRACT. Emergent phenomena depend on lower-level goings-on while maintaining some autonomy and exhibiting some form of novelty. However, while there is a general agreement on the importance of these three features—dependence, autonomy, and novelty—their precise meaning remains unclear.
In this paper, we focus on novelty, which has consistently been interpreted in causal terms in the current debate. Many contemporary authors propose that a property is ontologically novel when it possesses a novel—or a novel set of—causal powers (Wilson 2021). Within this framework, alongside dependence, the presence of powers not possessed by the emergence base is recognized as the hallmark of ontological emergence.
However, the causal interpretation of emergent novelty raises several issues. First, admitting the existence of emergent causal powers generates the exclusion problem and challenges the principles of physical causal closure and non-overdetermination. Second, the first authors to formulate the concept of emergence (the so-called “British Emergentists”) did not attach as much importance to causal novelty as to other forms of novelty. According to many of them, emergent phenomena manifest novel properties that are qualitatively unlike or heterogeneous compared to those of their emergence base. Many contemporary philosophers adopt similar criteria (Crowther 2016; Linnemann & Visser 2018; Chalmers 2006), and the same applies to complexity scientists, who associate emergence with novel patterns and structures (see Krakauer 2024). In short, equating ontologically emergent novelty with the manifestation of novel causal properties excludes the possibility that other properties—e.g., qualitative, semantic, or structural ones—could be relevant to ontological emergence. Given that these properties are frequently mentioned and seem to play pivotal roles in emergence, we propose to integrate the causal criterion for emergent novelty with a “qualitative” one.
Most emergent phenomena described in the literature can be associated with new causal roles, but they are not always considered emergent in virtue of these causal properties. A closer analysis reveals that their causal efficacy often results from other emergent features, such as a new “wholeness” or, alternatively, new geometries, structures, or types of degrees of freedom. Building on the debate about the nature of properties, and following authors such as Martin, Molnar, Heil, Ellis, and Azzano, we suggest that these non-essentially-causal features should be regarded as “categorical” or “qualitative” properties—i.e., properties with a certain nature that is not intrinsically nomological and that confer causal roles to their bearers only contingently. We argue that these properties are better candidates for emergent properties than causal powers and exhibit qualitative novelty as well as causal novelty. To support this claim, beyond referencing several examples of qualitative emergence already present in the literature, we focus on a relatively new case study: chimericity.
As pointed out by Di Bona (2022), chimericity is a real, perceivable “grouping property” of harmonies and melodies. Its name derives from the word “chimaera”, which Di Bona uses to describe “an auditory compound that does not belong to any single sound source, [...] the result of a combination of auditory fragments deriving from different sound sources” (256). We argue that chimericity is a categorical emergent property and a strong example of qualitative novelty. By enabling the perception of a “unified whole,” chimericity confers “numerical novelty” on a multiplicity of percepts that result in one emergent harmony or melody. Furthermore, during the hearing experience, it is possible to shift attention and recognize the individual parts composing the chimaera, so while they ultimately form a unified whole, they do not permanently lose their numerical and qualitative discreteness. We then show that chimeric experiences represent a case of flat, diachronic emergence: they are percepts emerging from a multitude of other percepts, with the diachronic dimension playing a fundamental role in their emergence, particularly in melodic chimaeras. However, since every sound has a certain duration and audible perceptual experiences are individuated by temporal features, harmonic chimaeras also have a diachronic nature.
Finally, we note that our analysis of chimericity raises further important questions about a phenomenon that has been largely overlooked in the literature on emergence—namely, emergent perceptual properties.
References
Chalmers, D. (2006). Strong and weak emergence. In P. Clayton & P. Davies (Eds.), The Re-Emergence of Emergence (pp. 244-256). OUP.
Crowther, K. (2016). Effective Spacetime. Springer.
Di Bona, E. (2022). Hearing chimeras. Synthese, 200(3), 257.
Linnemann, N.S.,Visser, M.R. (2018). Hints towards the emergent nature of gravity. Stud. Hist. Philos. Sci. B., 64, 1-13.
Krakauer, D. (Ed.) (2024). Foundational Papers in Complexity Science, Vols. 1-4. SFI Press.
Wilson, J. (2021). Metaphysical Emergence. OUP.
Consequences and impossibilities: William of Sherwood’s and Peter of Spain’s logic
ABSTRACT. In this paper, I pursue a twofold goal: analytically reconstructing two major 13th-century accounts of logical-metaphysical impossibility and assessing their impact on contemporary logical works. Specifically, I examine the contributions of William of Sherwood (c.1200–c.1272) and Peter of Spain (c.1210–c.1277) to the logic of impossibility through their theories of consequences. Studying 13th-century approaches to impossibility necessarily involves their views on consequences and metaphysical modalities. My primary objective is to reconstruct and systematize William of Sherwood’s and Peter of Spain’s theories on impossible antecedents in conditionals (viz., counterpossibles) and their approaches to the ex impossibili sequitur quodlibet paradox, which the secondary literature has only partially examined.
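In modern notation (a schematic rendering, not the medieval formulation itself): the rule ex impossibili sequitur quodlibet licenses the inference

A, ¬A ⊢ B for arbitrary B,

and a counterpossible is a conditional whose antecedent is impossible, which standard possible-worlds semantics (as in Williamson 2017) evaluates as vacuously true.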
Firstly, I argue that William of Sherwood presents an incoherent approach to impossibility, focusing on the temporal distinction between “impossibility per se” and “impossibility per accidens”. Additionally, I suggest aligning his position with modern proponents of vacuous truth, as he accepts that anything can follow from an impossible antecedent. I propose rethinking Uckelman’s (2008) formalization of William of Sherwood’s per se and per accidens necessity by linking it with the idea of temporal instants as possible worlds, rather than with the medieval modalities’ ‘statistical paradigm’ (cf. Knuuttila 1993).
In contrast, Peter of Spain consistently focuses on the topical relationship between entities in counterpossibles. I argue that Peter of Spain emphasizes the absence of metaphysical connections and anticipates contemporary opponents of vacuity by rejecting the notion that logical consequences can emerge from impossible antecedents. I interpret the metaphysical nature of his theory of consequences through the lens of a higher version of Masterman’s serious actualism (2024). Lastly, I discuss how Peter of Spain employs necessary relations in true consequences, arguing that these relations function as a form of truthmaker.
Rather than offering a purely historical analysis, I endorse a systematization approach, providing a structured examination of William of Sherwood’s and Peter of Spain’s contributions to the logic of impossibility. Through my methodological choices and comparative analysis, I aim to shed new light on both positions and enhance our understanding of their medieval logical theories, which have been incompletely considered in secondary literature so far. I situate this study within the contemporary discourse on the history of logic and formal reconstructions of theories of impossibility. Therefore, I follow the principal guidelines for the analysis of medieval logic provided by Binini (2024), Ciola (2020), Martin (2018), Parsons (2014), Read (2012), Spruyt (1993), and Uckelman (2008, 2020). By way of conclusion, I argue that systematizing logical-historical themes and arguments offers a pathway to the best possible explanation and constitutes a meaningful explanatory contribution from a philosophical point of view.
Selected references
Primary Sources
Peter of Spain. (1992). Syncategoreumata, edited by L. M. de Rijk and translated by J. Spruyt. Studien und Texte zur Geistesgeschichte des Mittelalters, Bd. 30. Leiden: Brill.
Peter of Spain. (2014). Summaries of Logic, translated by B. P. Copenhaver, C. Normore and T. Parsons. Cambridge: Cambridge University Press.
William of Sherwood. (1966). Introduction to Logic, edited by N. Kretzmann. Minneapolis: University of Minnesota Press.
William of Sherwood. (1968). Syncategoremata, edited by N. Kretzmann. Minneapolis: University of Minnesota Press.
Secondary Sources
Berto, F., and Jago, M. (2019). Impossible Worlds. Oxford: Oxford University Press.
Binini, I. (2024). Reasoning from the impossible: Early medieval views on conditionals and counterpossibles, Inquiry: 1-24.
Ciola, G.S. (2020). Hic sunt chimaerae? On absolutely impossible significates and referents in mid-14th-century nominalist logic, Recherches de Théologie et Philosophie Médiévales, 87 (2), 441-467.
Fine, K. (2021). Constructing the impossible. In Lee Walters & John Hawthorne (eds.), Conditionals, Paradox, and Probability: Themes from the Philosophy of Dorothy Edgington. Oxford: Oxford University Press.
Knuuttila, S. (1993), Modalities in Medieval Philosophy, London: Routledge.
Martin, C. J. (2018). The Theory of Natural Consequence, Vivarium 56 (3–4): 340–366.
Masterman, C. J. (2024). Serious Actualism and Nonexistence. Australasian Journal of Philosophy (3):658-674.
Nolan, D. (2024). Counterpossibles, Consequence and Context. Inquiry: An Interdisciplinary Journal of Philosophy.
Parsons, T. (2014). Articulating Medieval Logic. Oxford, England: Oxford University Press.
Read, S. (2012). The Medieval Theory of Consequence, Synthese 187 (3):899-912.
Spruyt, J. (1993). Thirteenth-century positions on the rule ‘ex impossibili sequitur quidlibet’, In Jacobi 1993, 161–193.
Tanaka, K., and Sandgren, A. (2024). The Many Faces of Impossibility, Cambridge: Cambridge University Press.
Uckelman, S. L. (2008). Three 13th-century views of quantified modal logic, In Marcus Kracht, Maarten de Rijke, Heinrich Wansing, & Michael Zakharyaschev (eds.), Advances in Modal Logic. CSLI Publications: 389-406.
Uckelman, S. L. (2020). William of Sherwood on Necessity and Contingency, In N. Olivetti, R. Verbrugge, S. Negri, & G. Sandu (Eds.), Advances in modal logic (1-16).
Williamson, T. (2017). Counterpossibles in Semantics and Metaphysics, Argumenta 2 (2): 195–226.
The Indeterminacy of Analysis: the case of Frege’s Natural Numbers
ABSTRACT. The debate on conceptual analysis in analytic philosophy – revitalized by the recent interest in conceptual engineering – offers a wide array of modes of analysis. These range from a notion of reconstructive analysis of the true meaning of a term, through Carnap’s explication (Carnap 1947/1956, Carnap 1950), where fruitfulness matters more than similarity, all the way up to an eliminative conception where no similarity is required and the original analysandum is discarded (Morris 2020). Modes of analysis are crucial in determining the philosophical scope of an analysis. For example, a reconstructive analysis may carry metaphysical implications, whereas explication entails conceptual pluralism, pragmatism, and, thus, an anti-metaphysical stance.
These modes thus provide a useful framework for evaluating philosophical proposals, given that we can trace a philosophical proposal back to its methodological stance on analysis. In this context, I take Frege’s analysis of the notion of natural number, developed in Frege 1884 and Frege 1893/1903, as a case study. Frege did not explicitly state his methodological approach to analysis. His stance can only be inferred from the features of the analysis itself. The most telling features are Frege’s criteria for a correct analysis of natural numbers. Such criteria may be reconstructed from his criticisms of rival theories and from his general goals. In fact, certain analyses seem better suited to certain purposes than others. For instance, a reconstructive analysis might be directed at preserving the richness of the original everyday concept, while an eliminative analysis would prioritize functionality and intra-theoretical motives and exclude irrelevant features.
I argue that Frege’s adequacy criteria for the notion of natural number alone – 1) it must account for pure arithmetic; 2) it must account for applied arithmetic; 3) it must be transparent to reason (logicism) – cannot tell us under which mode his analysis is carried out. These criteria are compatible with multiple modes of analysis. In fact, various scholars have attached different modes of analysis to Frege’s proposal (Carnap 1950, Dummett 1991, Picardi 1988, Reck 2007, Lavers 2013, Boddy & May 2020, to name just a few). This leads to two possible conclusions: either Frege’s analysis must be regarded as indeterminate, or additional adequacy criteria must be introduced – for instance, that numbers are introduced as numbers (Reck 2007, 45). I conclude, however, that no matter how many additional or stricter criteria one adds, Frege’s analysis can still be interpreted under different modes.
References
Boddy, R., & May, R. (2020). Frege on Reference. In The Routledge Handbook of Linguistic Reference (pp. 30-40). Routledge.
Carnap, Rudolf (1947/1956). Meaning and necessity. Chicago University of Chicago Press.
Carnap, Rudolf (1950). Logical foundations of probability. Chicago University of Chicago Press.
Dummett, Michael (1991). ‘Frege and the Paradox of Analysis’. Frege and other Philosophers. Oxford: Oxford UP.
Frege, Gottlob (1884) Die Grundlagen der Arithmetik, Koebner: Breslau; English trans., The Foundations of Arithmetic, ed. and trans. by J.L. Austin, Chicago, IL: Northwestern University Press, 1950.
Frege, Gottlob (1893/1903). Grundgesetze der Arithmetik, Vols 1–2, Jena: Pohle; reprinted Hildesheim: Olms, 1998; English trans. in The Basic Laws of Arithmetic, ed. and trans. by M. Furth, Berkeley, CA: University of California Press, 1964.
Lavers, G. (2013). Frege, Carnap, and explication: ‘Our concern here is to arrive at a concept of number usable for the purpose of science’. History and Philosophy of Logic, 34(3), 225-241.
Picardi, Eva. (1988) ‘Frege on Definition and Logical Proof’. Temi e Prospettive della Logica e della Filosofia della Scienza Contemporanee. Vol. 1. Eds. C. Cellucci and G. Sambin. Bologna: Cooperativa Libraria Universitaria Editrice Bologna. Pp. 227–30.
Reck, Erich H. (2007). Frege-Russell numbers: analysis or explication? In Michael Beaney (ed.), The Analytic Turn. Routledge. pp. 33-50.
Theorems and tautologies: mathematics reduced to units of measurement
ABSTRACT. The logicist attempt to ground mathematics by reducing arithmetic to logic contrasted with Kant's view of mathematics as a synthetic a priori science. According to logicism, mathematical propositions are analytic, true only by virtue of the meaning of their terms. Wittgenstein took up this view in the Tractatus, arguing that logical propositions are tautologies, i.e. non-informative. However, Hans Hahn criticised logicism, arguing that reducing mathematics to tautology does not justify its application to the natural sciences. This paper examines how Wittgenstein, in his critique of logicism, recovers the Kantian conception of mathematics. By analysing examples from algebra and Euclidean geometry, I will illustrate Wittgenstein's thesis that mathematical propositions are universal, necessary and a priori, but not tautological, since they extend the meaning of concepts and provide rules of inference applicable to experience, thus justifying their effectiveness in the natural sciences.