SILFS PG CONFERENCE 2025: SILFS POSTGRADUATE CONFERENCE 2025
PROGRAM FOR THURSDAY, JUNE 12TH

09:15-10:15 Session 6: Keynote: Federico Laudisa
09:15
From locality to local realism (and back): on the vexed history of the Bell theorem
10:45-11:55 Session 7A: Parallel Session: Philosophy of Artificial Intelligence (II)
10:45
Algorithmic Epistemic Fairness: Towards a Formalization of Epistemic Injustice in Modelling Opinion Dynamics

ABSTRACT. Algorithmic fairness is an expanding field, dealing with a range of injustice and discrimination issues associated with algorithmic processes. This is especially true for Machine Learning (ML), where fairness metrics are proposed to measure and mitigate biases in algorithmic contexts such as healthcare, education, hiring, and criminal justice. While algorithmic fairness is mainly analyzed from an ethical perspective, for instance in terms of values (Binns, 2018; Seng Ah Lee et al., 2021; Tsamados et al., 2021), the epistemic perspective, concerning issues of knowledge transmission and validation, is crucial as well. This is particularly evident in the context of formal approaches to social networks, where mathematical models are used to represent the dynamics of social interactions. Indeed, how information spreads throughout a social network is a matter of the epistemic dimension of the agents. And epistemic discrimination may arise when someone is systematically excluded from information sharing for reasons related to their identity. This kind of discrimination pertains to epistemic injustice, as conceptualized by Miranda Fricker (Fricker, 2007): some individuals are treated unfairly regarding their ability to access and contribute to knowledge production, due to their social identity. I will provide a formalization of epistemic injustice focusing on its testimonial pattern, that is, when someone is downgraded in their credibility due to sensitive features, which lies at the heart of Fricker’s theory. This makes it possible both to model the epistemic dimension of social networks and to extend the notion of algorithmic fairness in an epistemic direction, two aspects that are not systematically considered in the literature. I claim that adding an epistemic dimension to the algorithmic fairness debate is crucial for recognizing and consequently addressing a new kind of harm caused by algorithmic systems shaping the behavior of agents in social networks. Specifically, algorithms that work with information dissemination in social networks (e.g., to model information spread, maximize influence, analyze and predict information flow patterns in social networks) can unjustly assess individuals as less credible due to their belonging to certain social groups. This can lead to an automated and systematic exclusion from decision-making processes where algorithms are involved. The proposed approach can be translated into different algorithmic contexts. I will focus on an example concerning opinion dynamics, in particular the diffusion of sustainable mobility within a population. I will show how considering the epistemic dimension of the network allows us to address a specific type of discrimination, namely the exclusion of some agents from their epistemic connections due to their own social features. The proposed analysis can then serve to intervene on how fairness is evaluated in a context in which social networks described by algorithms are used to simulate social behaviors and to support decision-making strategies for policy interventions. The proposed approach is in line with a long-standing tradition in the philosophy of science that uses formal models as heuristic tools for describing social situations which dynamically evolve over time (O’Connor, 2019), using them as means of representation to reveal and clarify underlying concepts (Mayo-Wilson & Zollman, 2021; Suppes, 1993). 
While weighted models simulating agents’ behaviors are well-established in the technical literature, the novelty of the contribution lies in the interpretation of this kind of model through the lens of epistemic injustice. The proposed model is part of a larger work devoted to the integration of philosophy and engineering - merging theoretical analysis with technical tools - in designing formal models for exploring different policy strategies to promote the adoption of sustainable mobility solutions.
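A minimal sketch of the kind of weighted opinion-dynamics model the abstract gestures at might look as follows. The network structure, initial opinions, group labels, and credibility discount are hypothetical choices made only for illustration; they are not the author's actual model.

```python
import numpy as np

# Hypothetical sketch (not the author's model): DeGroot-style opinion updating in
# which agents discount testimony coming from neighbours in a marked social group.
# The discount factor stands in for a testimonial credibility deficit (Fricker 2007).

rng = np.random.default_rng(0)
n = 50
marked = rng.random(n) < 0.3                      # agents bearing a "sensitive" feature
init = np.where(marked, 0.8, 0.3) + rng.normal(0, 0.05, n)   # e.g. support for sustainable mobility
adj = rng.random((n, n)) < 0.15                   # random directed network
np.fill_diagonal(adj, True)                       # every agent listens to themselves

def step(op, discount=0.1):
    """One round of weighted averaging with identity-based credibility discounting."""
    out = np.empty_like(op)
    for i in range(n):
        neigh = np.flatnonzero(adj[i])
        w = np.where(marked[neigh], discount, 1.0)   # the unjust credibility weight
        out[i] = np.average(op[neigh], weights=w)
    return out

op = init.copy()
for _ in range(50):
    op = step(op)

# With strong discounting, the long-run (near-)consensus should lie much closer to
# the unmarked group's initial mean than to the marked group's: their testimony is
# systematically screened out of the information flow.
print("consensus (approx.):", op.mean())
print("initial mean, marked group:", init[marked].mean())
print("initial mean, unmarked group:", init[~marked].mean())
```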

Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. In Conference on Fairness, Accountability and Transparency (pp. 149–159). PMLR.
Seng Ah Lee, M., Floridi, L., & Singh, J. (2021). Formalising trade-offs beyond algorithmic fairness: Lessons from ethical philosophy and welfare economics. AI and Ethics, 1(4), 529–544.
Tsamados, A., Aggarwal, N., Cowls, J., Morley, J., Roberts, H., Taddeo, M., & Floridi, L. (2021). The ethics of algorithms: Key problems and solutions. In Ethics, Governance, and Policies in Artificial Intelligence (pp. 97–123).
Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford University Press.
Nikolaidis, A. C. (2021). A third conception of epistemic injustice. Studies in Philosophy and Education, 40(4), 381–398.
Pohlhaus, G. (2017). Varieties of epistemic injustice. In The Routledge Handbook of Epistemic Injustice (pp. 13–26). Routledge.
O’Connor, C. (2019). The Origins of Unfairness: Social Categories and Cultural Evolution. Oxford University Press.
Mayo-Wilson, C., & Zollman, K. (2021). The computational philosophy: Simulation as a core philosophical method. Synthese, 199.
Suppes, P. (1993). The role of formal methods in the philosophy of science. In Models and Methods in the Philosophy of Science: Selected Essays (pp. 3–14).

11:20
Over-reliance in AI-assisted decision making: a Bayesian analysis

ABSTRACT. Today, machine learning models are increasingly integrated into expert decision-making across various domains, such as medical diagnosis and legal reasoning. The high accuracy and speed of these models make them valuable tools for augmenting human judgment, and they are often seen as a means to mitigate cognitive biases and inconsistencies in expert reasoning. However, the introduction of machine learning into high-stakes decision-making also raises pressing epistemological, ethical, and social concerns, particularly regarding the nature and reliability of algorithmically assisted reasoning. In response, interdisciplinary research within the human-centered paradigm is developing regulatory and design frameworks to ensure that machine learning is applied in a way that is both effective and equitable.

A central issue in this context is how interaction with algorithmic systems influences human reasoning, especially under conditions of uncertainty and high stakes. The history of automation in expert decision-making suggests that such interactions do not simply replace or enhance human capabilities in a straightforward manner but can lead to systematic and often unpredictable mistakes. One particularly troubling phenomenon is over-reliance: the tendency of human experts to defer excessively to algorithmic recommendations, sometimes resulting in erroneous or unjustifiable decisions. While human-computer interaction (HCI) research has explored technical interventions to mitigate over-reliance, the phenomenon itself lacks a precise conceptualization, and its cognitive underpinnings remain insufficiently understood.

This paper approaches over-reliance from a cognitive and epistemological perspective, seeking to clarify its nature, underlying mechanisms, and possible strategies for mitigation. First, we overview the multidisciplinary discussion of over-reliance by contrasting related concepts such as complacency (from human factors engineering) and automation bias (from social psychology). We also examine recent proposals advocating for cognitive forcing techniques as an alternative to conventional AI support mechanisms. Second, we propose an analysis of over-reliance as a systematic deviation from Bayesian reasoning, taken here as a normative model of rational inference. Specifically, we argue that over-reliance can be understood as a variant of the base rate fallacy, a well-documented cognitive bias in probabilistic reasoning. Third, we consider the broader implications of this analysis for contemporary debates on AI-assisted reasoning and decision-making. In particular, we show that over-reliance persists even when common countermeasures—such as enhancing model accuracy and transparency—are introduced, and that increasing accuracy may paradoxically exacerbate the problem rather than alleviate it.
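As a purely illustrative reconstruction of the base-rate point (the numbers below are assumptions made for the example, not figures from the paper): suppose a diagnostic AI flags a condition with sensitivity and specificity of 0.95, while the condition's base rate is 1%. Bayesian updating on a positive recommendation gives

```latex
% Illustrative numbers only (assumed for this example, not taken from the paper)
\[
P(H \mid +) \;=\; \frac{P(+ \mid H)\,P(H)}{P(+ \mid H)\,P(H) + P(+ \mid \neg H)\,P(\neg H)}
\;=\; \frac{0.95 \times 0.01}{0.95 \times 0.01 + 0.05 \times 0.99} \;\approx\; 0.16 .
\]
```

On this reading, an over-reliant expert who simply adopts the recommendation in effect treats the posterior as if it were close to the model's accuracy (0.95) and neglects the prior P(H), which is one way to see why adjusting accuracy alone need not remove, and may even mask, the deviation from Bayesian updating that the paper analyses.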

Finally, we explore the consequences of our results for the philosophy of artificial intelligence in general. The main upshot of our discussion is that the interaction between humans and artificial systems cannot be fully addressed through technological fixes alone but requires a deeper understanding of the normative and descriptive dimensions of human reasoning. We conclude by discussing how insights from cognitive science and philosophy of science can inform the design, training, and evaluation of AI systems, ultimately shaping their role in expert judgment and decision-making under uncertainty.

10:45-11:55 Session 7B: Parallel Session: Philosophy of Mathematics
10:45
Unfolding Unfolding - A Constructivist Approach to Potentialism

ABSTRACT. The question 'Given certain mathematical concepts, what else should we accept once we have accepted them?' guided Feferman throughout much of his work. It culminated in his unfolding program [Fef96; FS00], aiming to study the gradual expansion of schematic formal systems. In recent years, potentialism, the idea that some or all aspects of mathematics are inherently potential, has become a widely discussed topic. We propose to combine the two topics. We believe that analyzing the implicit commitments modally will result in a finer-grained picture and that our constructive approach to potentialism will address some issues. The central idea is to examine the unfolding of a schematic system supplemented with a modal operator, relying on tools developed in the potentialist literature.

Potentialism in mathematics addresses philosophical questions in the foundations of mathematics, mainly regarding the height and width of the cumulative hierarchy [Lin13; Hel89]. While philosophically fruitful, the approach faces some issues. First, the intended interpretation of the modality is not obvious and has generated a lively discussion, see [Lin18; Uzq15]. Secondly, and related to the first point, quantification over all accessible worlds requires further clarification. Third, potentialists are forced to move to a stronger metatheory.

We propose to take Feferman & Strahm’s non-finitist arithmetic (NFA), a schematic version of Peano Arithmetic, as a first case study and investigate its modal unfolding. The axioms of NFA are the axioms for 0, Sc, and Pd, together with induction and the substitution rule A(P)/A(B), where P is a free predicate variable and A and B are any formulas available in the unfolding process. Adding further axioms results in different unfolding systems of NFA, such as the operational unfolding U_0(NFA) and the full unfolding U(NFA), see [FS00].

At the base world, we start with U_0(NFA) and let the theory stepwise unfold. We move from one world to an accessible world by simulating a modalized reformulation of Π^0_1-comprehension. Let A be Π^0_1, and let H_A(Y, α) be the claim that Y is the extension generated from iterating A along the provable well-order α.

Following [Buc05], we define the Π^0_1-CA theories:
Definition 1.
• Π^0_1-CA_{≺α} := ACA_0 + WO(≺α) + ∀β ≺ α ∃Y H_A(Y, β)
• Π^0_1-CA_{≺*,0} := ACA_0 + ∀α (WO(α) → ∃Y H_A(Y, α))

Our first suggestion is to simulate Π^0_1-comprehension in a modal context via the principle:
□(WO(α) → ♢∃Y H_A(Y, α)).   (1)

We rely on the following results:
Theorem 1 (Feferman & Strahm, Buchholz).
1. U_0(NFA) ≡ PA ≡ ACA_0
2. U(NFA) ≡ RA_{<Γ_0} ≡ Π^0_1-CA_{≺*,0}

The equivalences in Theorem 1 allow us to investigate the implicit commitments of arithmetic in a modal framework. We start at the base world w_0, where we take arithmetic, formulated schematically, as given. At w_0, provable well-orderings up to (but not including) ε_0 are actual. By eq. (1), the result of applying arithmetical operators along well-orderings of length ε_0 exists potentially at w_0, and actually at w_1. Having these sets in w_1 significantly increases the number of provable well-orderings in w_1, which allows us to apply the arithmetical operator A along these new well-orderings. This leads to a succession of possible worlds, each of which extends the length of provable well-orderings of all preceding worlds. The limit of this iteration yields a world that validates Π^0_1-CA_{≺α} for all α < Γ_0, i.e. Π^0_1-CA_{≺*,0}, which is proof-theoretically equivalent to U(NFA).
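Read schematically, the progression just described (a restatement of the abstract's own steps, with no new claims and with the limit world written simply as w_lim) can be displayed as:

```latex
% Schematic restatement of the progression described above; notation follows the abstract.
\begin{align*}
  w_0 &\models \mathrm{WO}(\alpha) \quad \text{for provable well-orders } \alpha \prec \varepsilon_0,\\
  w_0 &\models \Box\bigl(\mathrm{WO}(\alpha) \rightarrow \Diamond\,\exists Y\, H_A(Y,\alpha)\bigr)
        \;\;\Rightarrow\;\; w_1 \models \exists Y\, H_A(Y,\alpha) \ \text{for those } \alpha,\\
  &\ \ \vdots\\
  w_{\mathrm{lim}} &\models \Pi^0_1\text{-}\mathrm{CA}_{\prec\alpha} \ \text{for all } \alpha < \Gamma_0,
        \ \text{i.e. } \Pi^0_1\text{-}\mathrm{CA}_{\prec *,0}, \ \text{which matches } \mathrm{U}(\mathrm{NFA}) \ \text{by Theorem 1.}
\end{align*}
```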

We believe our approach will shed brighter light on the implicit commitments of arithmetic, because the modal unfolding allows us to "zoom in" more closely on the unfolding of a schematic theory. Secondly, we aim to reach the full predicative unfolding of arithmetic (up to Γ_0) in countably many steps. We believe, further, that our approach addresses the issues mentioned above regarding potentialism. It offers a straightforward interpretation of the invoked modality, namely a possible-to-construct modality. The stepwise closure of the operations does not require us to expand our conceptual repertoire: natural numbers are enough for us. And, lastly, we are not required to accept a stronger meta-theory.

References
[Buc05] Wilfried Buchholz. “Prädikative Beweistheorie.” Unpublished Lecture Notes. 2005.
[Fef96] Solomon Feferman. “Gödel’s program for new axioms: Why, where, how and what.” In: Gödel ’96 (1996), pp. 3–22.
[FS00] Solomon Feferman and Thomas Strahm. “The unfolding of non-finitist arithmetic.” In: Annals of Pure and Applied Logic 104.1 (2000), pp. 75–96.
[Hel89] Geoffrey Hellman. Mathematics without Numbers: Towards a Modal-Structural Interpretation. Clarendon Press, 1989.
[Lin13] Øystein Linnebo. “The potential hierarchy of sets.” In: The Review of Symbolic Logic 6.2 (2013), pp. 205–228.
[Lin18] Øystein Linnebo. Thin Objects: An Abstractionist Account. Oxford University Press, 2018.
[LS23] Øystein Linnebo and Stewart Shapiro. “Predicativism as a Form of Potentialism.” In: The Review of Symbolic Logic 16.1 (2023), pp. 1–32.
[Uzq15] Gabriel Uzquiano. “Varieties of indefinite extensibility.” In: Notre Dame Journal of Formal Logic 56.1 (2015), pp. 147–166.

11:20
On Cantor's Domain Principle and the Semantics of Potentialism

ABSTRACT. Short abstract (for the longer and more detailed version, see the attached PDF). This paper presents an objection to Potentialist theories based on Cantor's Domain Principle. We argue that if the model-theoretic semantics of potentialist theories are taken to be the intended ones, there is a commitment to a complete, extensionally determinate totality of all sets and hence to Actualism. If this is the case, there is no reason for the potentialist to reject an actualist model, and thus there is a shift in the modal logic of Potentialism from S4.2 to S4.2M. Finally, we present some alternative strategies that the potentialist can use to avoid this objection, considering homophonic truth clauses and semantic pessimism.

12:05-13:15 Session 8A: Parallel Session: Philosophy of Social Science
12:05
Causal-Mechanistic Reasoning in Evidence-Based Policy

ABSTRACT. Evidence-based policy (EBP) is an approach to policymaking according to which policies should be developed on the basis of the best available evidence, as this increases the chances that these policies will be effective (i.e. that they will produce the intended outcome). ‘Effectiveness’ is, de facto, a causal issue and, as a result, evidence of causation plays a central role in EBP. When it is possible to conduct them, randomised controlled trials and meta-analyses are considered the best methods for gathering evidence of causation. However, it is often the case that, for several reasons (e.g. ethical obstacles), it is not possible to employ these methods. If so, my claim is that policymakers should focus on causal mechanisms and mechanistic evidence. Indeed, developing policies on the basis of causal mechanisms – i.e. employing causal-mechanistic reasoning (CMR) for policy development – is beneficial for improving policy effectiveness. I will argue in favour of this claim by referring to real policies in the educational field, namely the Tennessee STAR Project and the California Class Size Reduction (CSR). While the former turned out to be effective, the latter failed to produce the intended outcome. I will explain the reasons why CSR failed, as well as why employing CMR while developing the policy would have prevented it from failing. I will conclude that employing CMR provides better causal explanations and causal predictions. In doing so, CMR improves causal inferences when developing a policy and, ultimately, increases the chances that a policy will be effective.

12:40
The Epistemology of Policy Advice: Insights from the Italian Case.

ABSTRACT. My talk will explore the epistemological aspects of the Italian policy advice system. The phenomenon of policy advice represents a crucial aspect of governance. It can be defined as the process of providing information, analysis, and recommendations to policymakers to assist in decision-making (Craft & Howlett 2013, Galanti 2017). It involves synthesizing evidence and presenting options that are both technically feasible and politically sensitive, often drawing from expertise in specific fields and understanding the broader policy context. In Europe, its importance has grown in recent years as governments have faced increasingly complex challenges, such as the complexity of public policies (e.g., public policies originating from the European Union must be tailored to fit national legal and political systems), the complexity of problems to be addressed (e.g., wicked problems like climate change), crisis conjunctures (e.g., pandemics, wars, and special economic plans), and the crisis of representation (e.g., not all politicians have a strong bureaucratic-institutional background). The first step in developing my epistemological account will be to identify and distinguish the key features of different systems of policy advice. Research on policy advice has traditionally differentiated between countries (e.g., UK and Germany) where this activity is institutionalized and codified, and countries (e.g., Italy and Spain) where it is more implicit and informal, though not necessarily inferior in terms of quality or quantity (Galanti 2017). It will be shown that the need for expert advice transcends political divisions, with governments of all orientations displaying continuity in both the quantity of advisors and, occasionally, the individuals involved. This will be achieved by providing data covering two recent Italian governments with contrasting political orientations: the Conte II government (2019-2021) and the Draghi government (2021-2022). More specifically, I will analyze the characteristics of the policy advisors within these two governments from both a quantitative perspective – focusing on the number of advisors and their tasks – and a qualitative perspective, examining their profiles in terms of age, gender, education, and professional background. This approach frames the Italian policy advice system as a distinctive epistemic environment, characterized by both positive traits—such as flexibility and dynamism, largely stemming from its non-institutional structure—and negative aspects, including the transience of its actors and a potential conservatism, as the data will show. Afterward, I will outline the philosophically significant aspects of this epistemic environment and its actors. Key issues to be addressed include challenges arising from an overly closed and self-referential environment (Levy 2021), possible biases linked to the actors’ sociographic characteristics (Kahan 2013, Levy 2021), potential epistemic injustices arising from the exclusion of certain groups from the advice process (Fricker 2007), and the unique relationship of trust between advice seekers and providers (Baghramian & Croce 2021). I will then offer an analysis of expertise that effectively captures the role of the policy advisor. To achieve this, I will compare the two main conceptions of expertise in the literature in relation to the characteristics of Italian policy advisors. 
These conceptions are the objectivist view, which defines expertise based on knowledge (Goldman 2001), and the functional-relational view, which emphasizes credentials and social roles (Croce 2019; Goldman 2018). Using data from interviews with Italian policy advisors, I will argue that neither perspective effectively describes the role of experts as policy advisors. This is because the figure of the expert as policy advisor defies clear definition due to its particular 'dual role' as both expert and facilitator, balancing technical knowledge with an understanding of political-institutional dynamics. Finally, I will evaluate the insights emerging from this empirical case study on the Italian policy advice system. These findings will be valuable not only for other policy advice systems, particularly those similar to the Italian one, but also for enriching the philosophical debate on expertise and its use by policymakers.
References
Baghramian, M., & Croce, M. (2021). Experts, public policy, and the question of trust. In J. de Ridder & M. Hannon (Eds.), The Routledge Handbook of Political Epistemology. Routledge.
Craft, J., & Howlett, M. (2013). The dual dynamics of policy advisory systems: The impact of externalization and politicization on policy advice. Policy and Society, 32(3), 187–197.
Croce, M. (2019). On what it takes to be an expert. The Philosophical Quarterly, 69(274), 1–21.
Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford University Press.
Galanti, M. T. (2017). Policy advice and public policy: Actors, contents and processes. Rivista Italiana di Politiche Pubbliche, 2, 249–272.
Goldman, A. (2001). Experts: Which ones should you trust? Philosophy and Phenomenological Research, 63(1), 85–110.
Goldman, A. (2018). Expertise. Topoi, 37(1), 3–10.
Kahan, D. M. (2013). Ideology, motivated reasoning, and cognitive reflection. Judgment and Decision Making, 8(4), 407–424.
Levy, N. (2021). Bad beliefs: Why they happen to good people. Oxford University Press.

12:05-13:15 Session 8B: Parallel Session: Philosophy of Physics (I)
12:05
Making Sense of Gravitational Thermodynamics

ABSTRACT. Over the last decades, thermodynamic and statistical mechanical descriptions have been applied to various non-standard physical contexts. One important case is the physics of self-gravitating systems (SGS), which are widespread in astrophysics. The application of statistical methods is crucial: while theoretically possible, tracking the evolution of SGS like star clusters by calculating the trajectories of millions of bodies simultaneously is practically impossible. However, the precise extent to which statistical mechanics and thermodynamics genuinely apply to such unconventional systems is a highly contentious matter.

This talk offers new insights into this foundational yet understudied area of physics, which is ripe for philosophical work. Clarifying this topic is interesting per se, but it is also an ideal starting point to rethink crucial concepts like equilibrium, the role of idealisations in astrophysical models and how to de-idealise them, and more generally how and to what extent thermodynamic descriptions apply in contexts where crucial features of the theory break down. This case study will also motivate us to advance a novel way to understand thermodynamics with possible applications to topics like dark matter and black hole thermodynamics.

In the context of SGS, the challenges arise due to the peculiar long-range nature of gravity in these systems. The gravitational potential distinguishes them from more conventional short-range interacting systems. It entails unconventional thermodynamic and statistical mechanical behaviour, especially non-extensivity of energy and entropy, negative heat capacity, and lack of standard equilibrium.

In the limited debate on the issue, some have maintained that thermodynamics could still be suitable to describe these systems, provided we revise certain thermodynamic features that are ascribed to conventional systems. In contrast, others have claimed that thermodynamics is unfit to model these systems and that only the application of non-equilibrium statistical mechanics is supported.

This talk advances our understanding of gravitational physics in two ways. First, we argue that equilibrium statistical mechanics can be meaningfully applied to SGS in the appropriate regime, alongside non-equilibrium statistical mechanics. This is supported by the fact that equilibrium can be found in these systems in the form of metastable quasi-equilibrium states at certain scales, and by the idea of the scale relativity of physical theories and ontology. We prove this point within idealised models and then within more realistic models of globular clusters, applying de-idealisations according to Earman’s principle.

Second, although full-blown phenomenological thermodynamics is unsuitable in this domain, we develop what we call a ‘minimal framework for thermodynamics’ and show how a notion of thermodynamics applies to SGS. In fact, we can provide thermodynamic explanations based on the behaviour of macro-level quantities like temperature describing these systems within the domain of equilibrium statistical mechanics. While non-equilibrium and equilibrium statistical mechanics and phenomenological thermodynamics are distinct theories, minimal thermodynamics is a coarse-grained level of description within the framework of equilibrium statistical mechanics. It qualifies as ‘thermodynamics’ especially in virtue of its use of macroscopic coarse-grained quantities partially autonomous from the microscopic variables characterising purely statistical descriptions. We maintain that the picture we develop is the best way to make sense of the notion of gravitational thermodynamics.

As anticipated, this talk has two overall goals. First, we provide new conceptual foundations for the specific study of gravitational thermodynamics. We draw a map of statistical and thermal physics and elucidate how they apply to the physics of SGS. In particular, we show how phenomena like core collapse in globular clusters can be predicted and explained starting both from statistical mechanical and coarser-grained thermodynamic points of view. Despite the unconventional features of these systems, these different methodologies are all equally justified at the right scale, while there are natural trade-offs between more complex but richer statistical mechanical explanations and simpler but more limited thermodynamic explanations of the same phenomena. Maintaining that only non-equilibrium statistical mechanics applies to these systems misses crucial aspects of SGS physics which fall within equilibrium statistical mechanics and (minimal) thermodynamics. Moreover, the analysis provided has potential applications to other topics like the physics of dark matter halos.

Second, we analyse SGS to draw general lessons about thermodynamics and statistical mechanics. The case study supports a more liberal approach to concepts such as equilibrium, brings out considerations on the role of unconventional properties like negative heat capacity in thermodynamics and statistical physics, and prompts the development of a novel minimal framework for thermodynamics that accounts for thermodynamic descriptions in between the purely statistical level and that of phenomenological thermodynamics. These results have a double outcome: (a) we develop a new useful notion of thermodynamics beyond phenomenological thermodynamics with further possible applications, possibly including black hole thermodynamics; (b) we improve our understanding of equilibrium statistical mechanics, as we show how we can effectively apply it to these unconventional systems if only we take a less stringent approach to features like equilibrium.

12:40
Not-so-Absolute Cosmic Simultaneity

ABSTRACT. Cosmic simultaneity is the proposal that we can reconcile absolute simultaneity with relativity by means of the cosmic time function definable in certain highly symmetric cosmological models. Scholars have pointed out a heterogeneous array of obstacles to taking the route of cosmic simultaneity (Bourne 2004, pp. 114–116; 2006, p. 199, Wüthrich 2010; 2013, p. 17, Smeenk 2013, p. 15, Callender 2017, pp. 75–78, Callender and McCoy 2021, p. 4). In this paper, I follow this trend by highlighting a new serious problem which has been overlooked in the literature. My claim is that, once the relevant notion of absoluteness is clarified, an appealing approach to cosmic simultaneity turns out to be inconsistent. When this issue is considered alongside the already existing problems, the project of cosmic simultaneity proves unsuccessful. My overall argument proceeds as follows. First, I clarify the relevant notion of absoluteness at stake in efforts to recover absolute simultaneity within relativistic physics. This will require a brief excursus on the reasons why absolute simultaneity may be desired in the first place. I argue for what I call the Causal Connection Condition (CCC): the fact that two distinct events are in a certain temporal relation (e.g. simultaneity) is absolute only if it obtains independently of whether any distinct causally disconnected event e_x occurs. I proceed to introduce the notion of cosmic simultaneity, distinguishing between the epistemological and metaphysical approaches. According to the epistemological approach, the fact that two events are absolutely simultaneous is discoverable only if some physical properties are uniquely held by spacelike hypersurfaces in a certain foliation. However, this is not a necessary condition for the obtaining of absolute simultaneity itself. On the metaphysical approach, instead, for some events to be absolutely simultaneous, it is necessary that the spacetime points representing them belong to a privileged hypersurface. I suggest that some existing problems with cosmic simultaneity might be addressed by adopting the metaphysical approach. However, I show that, on the metaphysical approach, absolute simultaneity depends on causally disconnected events. Given the CCC, I conclude that metaphysical cosmic simultaneity is inconsistent.

14:45-16:30 Session 9A: Parallel Session: Philosophy of Medicine and Life Sciences
14:45
Imagining Evolutionary Possible Worlds: How Life’s Alternative Histories Shape Evolutionary Science

ABSTRACT. Evolutionary biology, as a historical science, relies on fragmentary, limited, and biased evidence. Of the estimated 50 billion species that have existed, over 99% are extinct, and fossil evidence exists for only an estimated ~0.0004% of them (Mora et al., 2011). Even this fraction is skewed, as fossilization disproportionately preserves hard-bodied, marine, and abundant species, while soft-bodied, rare, and terrestrial organisms are disproportionately absent. These estimates raise an important question: how can a science based on such a meager, biased and fragmentary evidence sample be justified as a rigorous discipline? Lewontin, for instance, warned that we must “give up the childish notion that everything that is interesting about nature can be understood... Evolution is a form of history, and history simply does not leave sufficient traces… Tough luck” (Lewontin, 1998, p. 130). Despite these challenges, evolutionary biology has not collapsed under the weight of evidential limitations. Instead, it continues to develop a successful range of methodological strategies and inferential techniques to extract as much information as possible from the traces that remain—something that has recently raised optimism, especially among philosophers. Recent work in the philosophy of science has moved beyond lamenting the limitations of historical sciences to examining the strategies that make evolutionary reconstructions possible and productive (Cleland, 2002; Sober, 1991; Turner, 2007). Within this debate, one of the most salient findings is the role of modal concepts, possibility and actuality, in evolutionary explanations. Although these explanations ultimately aim to account for how events actually unfolded, they frequently begin by exploring what could possibly have happened given known constraints and evolutionary mechanisms. This was first brought to attention by Brandon (1990), who introduced the well-known distinction between how-possibly explanations (HPE) and how-actually explanations (HAE). In his view, these explanations exist along a continuum, where hypotheses initially framed as HPEs may transition into HAEs with sufficient evidence. Since its inception, philosophers have debated this analytical distinction, focusing on two key questions. First, what characterizes HPEs, and are they fundamentally different types of explanation or merely stages in a scientific process? (Pearson, 2018; Resnik, 1991). Second, do HPEs qualify as explanations at all? Some argue they are merely heuristic tools that guide research and hypothesis generation but fail to meet explanatory adequacy criteria (Reydon, 2012). While these discussions have provided valuable theoretical tools and insights into the epistemology of evolutionary reconstruction, in my talk at SILFS, I will argue that a third, distinct form of explanation exists that is not adequately captured by the current philosophical debate on evolutionary explanations. Unlike HPEs, which outline tentative evolutionary scenarios as steps toward an HAE, this alternative approach does not seek to approximate actual history. Instead, it systematically explores the full spectrum of biological possibilities as a domain of inquiry in its own right. I refer to these as evolutionary possible worlds (EPWs), drawing from Lewis’s concept of possible worlds in modal logic. In this talk, I will attempt to bridge modal epistemology with evolutionary explanation by introducing a formal framework for EPWs. 
Formalizing EPWs requires defining each world in terms of three key components: initial conditions (C), which correspond to the phenotypes present, environmental factors, and genetic variation; evolutionary processes (P), which can refer to natural selection regimes and developmental constraints; and contingent events (E), such as ecological shifts. Within this framework, a trait is considered necessary if it evolves in all possible evolutionary worlds, whereas it is merely possible if it emerges in at least one. The key distinguishing feature of EPWs is that they are not constructed to generate competing hypotheses about actual evolutionary history (as HPEs do), but to map the space of conceivable evolutionary pathways. This approach has direct implications for applied evolutionary biology, including fields such as synthetic biology, agriculture, and medicine (Koskinen, 2017). In these fields, evolutionary possible worlds act as conceptual tools for designing and testing novel biological systems: they are used not to explain the actual world, but to create or manipulate something new. For example, in synthetic biology, EPWs are routinely built to identify viable ways of engineering new metabolic functions, optimizing gene regulatory networks for processing drugs, or even designing artificial organic matter with traits that have never appeared in natural evolutionary history. Similarly, in medicine, EPWs can be used to model and project pandemic trends and behaviors worldwide, projections that can then be used to help control outbreaks. In summary, I will argue that recognizing EPWs as a distinct form of explanation enriches evolutionary theory and its applications. Rather than reconstructing actual history, EPWs explore biological potentialities through modal logic, refining our understanding of contingency and necessity while extending evolutionary reasoning into practical fields. By bridging modal epistemology with evolutionary explanation, EPWs demonstrate that evolutionary explanation is not just about the past but also about life’s future.
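In outline, and using notation introduced here only for illustration, the components named in the abstract and the associated modal notions might be rendered as follows:

```latex
% Notation is illustrative; the three components and the two modal notions are those named in the abstract.
\[
  w \;=\; \langle C, P, E \rangle
  \qquad \text{(initial conditions, evolutionary processes, contingent events)}
\]
\[
  \mathrm{Necessary}(t) \;\Longleftrightarrow\; \forall w \in \mathcal{W}:\ \mathrm{Evolves}(t, w),
  \qquad
  \mathrm{Possible}(t) \;\Longleftrightarrow\; \exists w \in \mathcal{W}:\ \mathrm{Evolves}(t, w),
\]
```

where W stands for the set of evolutionary possible worlds under consideration and Evolves(t, w) for the claim that trait t emerges in world w.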

15:20
Precision Medicine and Wearable Medical Devices: Promises and Dangers

ABSTRACT. One of the most striking advancements of contemporary medicine is Precision Medicine, an approach to pharmacotherapy that takes into account genetic, environmental, and lifestyle factors to tailor clinical intervention to the specific needs of a given patient. At least on paper. What is nowadays known as Precision Medicine has been, and still is, often called by many names, ranging from Personalized Medicine to Stratified Medicine, Genomic Medicine, and others. In much of current medical research, all these names are treated as denoting the same concept. However, for many authors (Roden and Tyndale 2013; Ali-Khan et al. 2016; Phillips 2020) the presence and variation of these names mark not just a rhetorical difference, but a conceptual one as well: some core tenets of what constitutes Precision Medicine have changed significantly over time.

As a scientific endeavour, Precision Medicine is inextricably tied to the new capabilities offered by technological advancement. At its very beginning, it was the untapped potential of the Human Genome Project that sparked the quest for a more personalized form of medicine. Still today, research on Precision Medicine is intertwined with the possibilities of emerging trends and advancements in technology.

One of these technological advancements is the use of digital technologies called medical wearable devices (or simply wearables). Such devices are designed to be worn constantly and can provide an indication of biological and health parameters of a specific individual. The possibility of using these devices to monitor an individual’s physiology, obtaining continuous, precise, and personalised data, and thus a steady stream of clinical data from a single patient without spending time and money on medical visits, is very promising for tailored healthcare, and it has been presented and marketed as a new medical revolution. However, we present three reasons for exercising caution.

The first concerns the fact that wearables are simultaneously a consumer item and a clinical tool. Indeed, many wearables are nothing but software merged into non-medical wearable devices, such as fitness trackers. This characteristic blurs the line between what is medically advisable and what is commercially profitable. There is a need to come up with new conceptual and regulatory approaches to understanding what counts as a medical device, as well as greater collaboration between users, data scientists, clinicians, payers, and governments in matters such as device security, user privacy, data standardization, regulatory approval, and clinical validity (Babu et al. 2024).

The second reason to exercise caution is related to increased dangers of overdiagnosis, self-diagnosis, and unwarranted medicalization of life conditions. Wearables monitor a whole array of biomarkers without interruption, and this can lead to interpreting ordinary biological variations as pathological, variations that without the device would hardly be noticed and might not carry clinical interest. This over-focus blurs the line between what counts as abnormal and what counts as pathological, and muddies the waters on the difference between risk factors and diseases. Moreover, these potential pitfalls compound with doubts over one of the conceptual premises that ground the use of wearables: that measuring will be helpful, and that data will be meaningful. Self-tracking data does not necessarily make sense to users, as a growing body of studies on the concept of data ambivalence and the issues concerning self-tracking shows (Lomborg, Langstrup and Andersen 2020; Wieczorek et al. 2023).

The third reason to question the indiscriminate use of wearables relates to the conceptual difference between Precision and Personalized Medicine or, more precisely, the lack of understanding of such a difference. Wearables are marketed to users through a conceptual vocabulary closer to that of old Personalized Medicine, to better cater to the market’s needs and requirements; at the same time, they are presented to agencies of medical and pharmaceutical regulation as devices fully in line with the current premises of contemporary Precision Medicine. This ambivalence points to the risks of unfulfilled expectations and, consequently, an erosion of the relationship of trust between the general public and clinical experts.

REFERENCES:

Ali-Khan S, Kowal S, Luth W, Gold R, Bubela T. Terminology for Personalized Medicine: a systematic collection. PAGEOMICS. Jan 2016

Babu M, Lautman Z, Lin X, Sobota MHB, Snyder MP. Wearable Devices: Implications for Precision Medicine and the Future of Health Care. Annu Rev Med. 2024 Jan 29;75:401-415.

Lomborg S, Langstrup H, Andersen TO. Interpretation as luxury: Heart patients living with data doubt, hope, and anxiety. Big Data & Society. 2020 May;7(1)

Phillips CJ. Precision medicine and its imprecise history. Harvard Data Science Review. 2020 Jan 31;2(1).

Roden DM, Tyndale RF. Genomic medicine, precision medicine, personalized medicine: what's in a name?. Clinical Pharmacology & Therapeutics. 2013 Aug;94(2):169-72.

Wieczorek M, O’Brolchain F, Saghai Y, Gordijn B. The ethics of self-tracking. A comprehensive review of the literature. Ethics & Behavior. 2023 May 19;33(4):239-71.

15:55
Human Evolution and the Hunt for a Primate Model: A Lesson in Analogical Model Selection

ABSTRACT. Philosophers have long analyzed analogical reasoning, which involves using known similarities between two systems to infer additional parallels of interest. However, much of the focus has been on providing a formal characterization or justification of these arguments (e.g., Bartha 2019), while less attention has been given to the criteria by which scientists select source domains. Yet, choosing—and revising in light of new evidence and conceptual developments—the appropriate source domain is a crucial issue across multiple scientific disciplines. This is particularly true in the historical sciences, which study past events and entities. Lacking the ability to conduct direct experiments, historical scientists—such as paleontologists—often rely on contemporary organisms as reference points. Wisely choosing the analog of an extinct organism often allows us to infer some additional information about it, even going so far as to reconstruct aspects of the behavior of extinct animals that at first glance do not seem to have suitable analogs in the contemporary world (see for example Currie 2015). This is also true of reconstructing the morphology and behavior of the species that make up the early stages of human evolution. Paleoanthropologists have long debated which primate species should serve as a model for the last common ancestor (LCA) of humans and the genus Pan (chimpanzees and bonobos). They have also questioned the very role of these models—whether a single species should serve as a reference, whether different models should apply to different evolutionary stages or traits, and so on (see, e.g., Tooby & DeVore 1987). Over time, various species have been proposed as models, ranging from chimpanzees (typically the favored choice) to baboons, bonobos, gorillas, orangutans, gibbons, and even some monkey species. There is an explanation for paleoanthropologists’ lack of consensus on which should be the primate model, or so I will argue. In particular, following the terminology of McNulty (2010), we can distinguish between top-down approaches, which prefer living species of primates to infer LCA traits, and bottom-up approaches, which instead give priority to fossils. As I will show, this division allows us to separate paleoanthropologists into two camps: those who favor a top-down approach and typically select chimpanzees as models (e.g., McGrew 2010), and those who favor a bottom-up approach, often choosing other taxa or even rejecting the choice of a living species as model altogether (e.g., Sayers et al. 2012; White et al. 2015). So, if there is no consensus on which primate should be the preferred model, it is because researchers weigh different lines of evidence differently. The interplay between these two approaches also explains how paleoanthropological model selection evolves over time in response to new empirical discoveries and theoretical advancements. Ultimately, this diversity of perspectives proves beneficial, as it compels researchers to continually reassess their models, preventing any single framework from becoming entrenched without scrutiny. This presentation will be divided into four parts. The first part will give a very brief overview of the work done in general philosophy of science on analogical arguments. In the second part, I will introduce the main points of the debate surrounding model organism selection in the study of early human evolution. 
In the third part I will introduce the distinction between top-down and bottom-up approaches, and show how the adoption of one approach rather than another leads to a division in the positions defended within the paleoanthropological community. Finally, I will sketch what might be the broader philosophical implications for the study of analogical reasoning in science.

References
Bartha, P. (2019). Analogy and analogical reasoning. In E. N. Zalta (Ed.), Stanford Encyclopedia of Philosophy.
Currie, A. (2015). Marsupial lions and methodological omnivory: Function, success and reconstruction in paleobiology. Biology & Philosophy, 30, 187–209.
McGrew, W. C. (2010). In search of the last common ancestor: New findings on wild chimpanzees. Philosophical Transactions of the Royal Society B: Biological Sciences, 365(1556), 3267–3276.
McNulty, K. P. (2010). Apes and tricksters: The evolution and diversification of humans’ closest relatives. Evolution: Education and Outreach, 3, 322–332.
Sayers, K., Raghanti, M. A., & Lovejoy, C. O. (2012). Human evolution and the chimpanzee referential doctrine. Annual Review of Anthropology, 41, 119–138.
Tooby, J., & DeVore, I. (1987). The reconstruction of hominid behavioral evolution through strategic modeling. In W. G. Kinzey (Ed.), The Evolution of Human Behavior: Primate Models (pp. 183–238). Albany, NY: SUNY Press.
White, T. D., Lovejoy, C. O., Asfaw, B., Carlson, J. P., & Suwa, G. (2015). Neither chimpanzee nor human, Ardipithecus reveals the surprising ancestry of both. Proceedings of the National Academy of Sciences, 112, 4877–4884.

14:45-16:30 Session 9B: Parallel Session: Philosophy of Logic
14:45
Beyond Validity: A Generalization of the Collapse Argument Against Logical Pluralism

ABSTRACT. Logical pluralism holds that there is more than one correct logic. One of the most discussed variants of logical pluralism is the one presented by Beall and Restall (2006). The essence of their proposal lies in a generalization of the Tarskian approach. According to the latter, an inference is valid if and only if every model of the premises is also a model of the conclusion. Beall and Restall propose to replace ‘models’ with ‘cases’, a type of entity in which sentences can be true. Depending on the cases considered, there may be different notions of consequence. If more than one type of case is admitted, a form of pluralism is obtained.

Such a form of pluralism faces a significant challenge known as the collapse argument, which suggests that pluralism becomes indistinguishable from monism under specific conditions (Priest, 2006). While this argument has received considerable attention in recent literature, its full scope has not yet been explored. This paper pushes the argument further by shifting the focus from validity to other (related) notions. More specifically, it demonstrates that the argument can be extended to encompass further logical notions, particularly those related to what we might call the "negative side" of logic.

Discussions surrounding collapse remain largely constrained by an underlying assumption: logical pluralism should be evaluated solely through the lens of validity. Indeed, the argument, as traditionally presented, focuses on cases where one logic validates an inference while another does not. Stei (2020) has shown that the argument can be applied to any version of pluralism that assumes minimal conditions about the logics involved. The conditions are: (i) there is more than one correct logical consequence relation within one language, (ii) logical consequence has a global scope, (iii) there is rivalry between different correct consequence relations, and (iv) logical consequence is normative. Some of these conditions have since been further relaxed. However, even these generalizations remain confined to understanding logic primarily through the lens of validity. Our work reveals that the collapse extends further, affecting logical notions or “statuses” such as antivalidity and countervalidity, which function independently of validity and concern what logic rejects rather than what it accepts.

As Rosenblatt (2021) notes, by focusing on validity, logical theorizing has emphasized what is accepted but has largely neglected the complementary notion of rejection. Antivalidity concerns inferences where the premises are always designated and the conclusion is never designated. In contrast, countervalid inferences are such that, if their premises are designated, then their conclusion is not. These notions can be said to correspond to the “negative” aspect of logic and prove particularly relevant when considering the epistemic role of logical systems (Cobreros et al., 2020; Scambler, 2020; Barrio & Pailos, 2022). While validity can be interpreted as providing guidance about what should be accepted, antivalidity and countervalidity offer crucial insights about what should be rejected. As will be argued, this connects directly to fundamental epistemic practices.
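Following the glosses just given (a sketch only, with D the set of designated values and v ranging over valuations; the notation is not the paper's), the three statuses can be written as:

```latex
% Sketch of the glosses above; D = designated values, v ranges over valuations.
\[
  \Gamma \vDash \varphi \ \text{(valid)} \iff
    \forall v\,\bigl[\,(\forall \gamma \in \Gamma :\ v(\gamma) \in D) \Rightarrow v(\varphi) \in D\,\bigr]
\]
\[
  \langle \Gamma, \varphi \rangle \ \text{antivalid} \iff
    \forall v\,\bigl[\,\forall \gamma \in \Gamma :\ v(\gamma) \in D\,\bigr] \ \text{and}\ \forall v\,\bigl[\,v(\varphi) \notin D\,\bigr]
\]
\[
  \langle \Gamma, \varphi \rangle \ \text{countervalid} \iff
    \forall v\,\bigl[\,(\forall \gamma \in \Gamma :\ v(\gamma) \in D) \Rightarrow v(\varphi) \notin D\,\bigr]
\]
```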

The paper demonstrates how these negative notions can generate their own forms of collapse. Consider two logics that agree on their valid inferences but differ in what they deem antivalid or countervalid. Classical Logic (CL) and the non-transitive logic ST present such a case. Suppose an inference I is deemed antivalid or countervalid in L1 but not in L2. If an agent endorses both logics and accepts the premises of I, then the agent will reject the conclusion of that inference. Concretely, L1 recommends rejecting the conclusion, while L2 remains silent. Thus, the criterion for antivalidity or countervalidity will collapse into L1. This mirrors the original argument but applies to rejection rather than acceptance, demonstrating that pluralism collapses also in terms of its negative counterpart.

This generalization reveals that the threat to pluralism is more pervasive than previously recognized. Attempts to salvage pluralism must address not only the traditional collapse argument but also its extensions to other logical notions. Further, considering both positive and negative inferential roles has implications for logic itself and for broader epistemic and normative frameworks. Some of these implications will be discussed.

15:20
Logical Systems and Logical Theories

ABSTRACT. In this talk, I explore the distinction between logical systems and logical theories, emphasising that a logical theory cannot be reduced to mere formalism. Traditionally, logical monism equates a logical theory with a single correct logical system, while logical pluralism extends this to a collection of systems. However, the real challenge lies in questioning the topic-neutrality of logic, which posits that correct logical consequence relations apply in the same way regardless of the topic under consideration.

I propose a formal characterisation of logical theories, defining them as three-tuples consisting of a set of topics, a set of logical systems, and a relation specifying their applicability. This framework allows for both topic-neutral and topic-specific versions of logical pluralism and monism, as well as alternative perspectives such as relativism, universalism, and systematicity.
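One way to write out the proposed characterisation (the symbols are assumed here for illustration; only the three components are taken from the abstract) is as a triple

```latex
% Notation is illustrative; the three components are those named in the abstract.
\[
  \mathbb{T} \;=\; \langle \mathrm{Top}, \mathrm{Sys}, \rho \rangle,
  \qquad \rho \subseteq \mathrm{Top} \times \mathrm{Sys},
\]
```

where Top is a set of topics, Sys a set of logical systems, and ρ specifies which system(s) are applicable to which topic. On one natural reading, topic-neutral monism corresponds to ρ = Top × {L} for a single system L, topic-neutral pluralism to ρ = Top × S for several systems S, and topic-specific variants to a non-uniform ρ.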

15:55
Outside the purview of truth and falsity

ABSTRACT. There have been many debates about what it means for logic to be normative and what can count as epistemic goals for choosing a logical system. In those debates, some have argued that logical theories are not only descriptions of a consequence relation between sentences, but that they also have in mind certain epistemic aims and (truth-)norms (Field, 2009; Blake-Turner & Russell, 2018). Epistemic truth-norms such as “believe true propositions”, or the other side of the coin, “avoid believing false propositions”, are the common practice that pluralists tend to have in mind for that task. This is also standard practice in many epistemological fields, even when the norms tend to get more and more complex in order to avoid their own set of problems (McHugh, 2012; Olinder, 2012; Wedgwood, 2002; Sorensen, 1988; Steglich-Petersen, 2013). However, for this telic pluralism to be a (truly) tenable position, each logical system must be accompanied by its own truth-norms. In this work I will present a set of truth-norms for a paracomplete and paraconsistent logic that avoids several criticisms made by defenders of the Collapse Argument (Stei, 2020a, 2020b) and from the epistemological literature. Moreover, I will analyze what can be said about truth-norms with regard to metainferential systems (Pailos & Da Ré, 2023), that is, logics that differ not only in the set of inferences they take as valid but also in their metainferences, and how this affects the pluralist position in the face of a different type of collapse argument. The type of questions I intend to answer are the following: can two different logical systems have the same type of truth-norms? In what sense is that not a collapse? What happens with systems that differ in their metainferences, but not in their set of truth-norms? And finally, is it possible to have a truth-norm pluralism, and thus a full-blooded pluralism in terms of epistemic norms? Might this be necessary?

17:00-18:00 Session 10: Keynote: Massimo Pigliucci
17:00
Philosophy of pseudoscience: Where are we now, and how did we get here?