Explaining the Black Box: Rule Extraction, Statistical Inference and Scientific Explanation in AI
ABSTRACT. In recent years, the dual quest for understanding natural phenomena (De Regt 2017; Khalifa 2013, 2017) and for explaining artificial intelligence has converged in the development of many explanatory methods within Explainable Artificial Intelligence (XAI). Traditional scientific inquiry has long relied on models of explanation, most notably the Hempelian covering law model and its deductive-nomological (D–N) formulation (Hempel and Oppenheim 1948), which holds that phenomena can be explained by logically deducing specific events from general laws and initial conditions. However, with the advent of complex, AI-driven models such as artificial neural networks (ANNs) and convolutional neural networks (CNNs), the classical framework has encountered significant challenges. These AI systems are often perceived as “black boxes” whose decision processes are obscure, thereby prompting researchers to develop solutions such as rule-based approaches that extract human-interpretable regularities or “if–then” rules from these systems.
This paper surveys the conceptual foundations of scientific explanation, juxtaposing classical models with contemporary rule extraction techniques in XAI. The Hempelian model posits that a valid scientific explanation must involve a deductively valid argument where the explanandum (the phenomenon to be explained) follows necessarily from the explanans (a set of general laws and conditions). While elegant in its logical rigor, this model has faced critiques, notably from Salmon (1971), who argues that not all genuine explanations are deductive and that many rely on causal mechanisms and probabilistic reasoning. Salmon’s critique underscores the limitations of the covering law approach, particularly its inability to account for cases where statistically significant correlations or inductive generalizations play a central role. These criticisms are especially pertinent when considering rule‐based explanations in AI, where the extracted rules are typically inductive, capturing patterns observed in large datasets rather than deducing outcomes from universal laws (Páez 2019; Durán 2021; Fleisher 2022; Räz and Beisbart 2022; Sullivan 2022).
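In schematic form, a D–N explanation as Hempel and Oppenheim conceive it is a deductively valid argument of roughly the following shape (a standard textbook rendering, given here for orientation):

\[
\begin{array}{ll}
L_1, \ldots, L_k & \text{(general laws)}\\
C_1, \ldots, C_m & \text{(statements of antecedent conditions)}\\
\hline
E & \text{(explanandum)}
\end{array}
\]

The rule-based explanations discussed below replace the general laws $L_i$ with inductively extracted “if–then” regularities, which is precisely where the tension with the covering law model arises.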
As AI has evolved from early expert systems to today’s deep learning architectures, the need for transparent and interpretable models has intensified. Rule‐based explanations have emerged as a promising avenue for achieving this transparency. Techniques such as LIME (Local Interpretable Model-agnostic Explanations) illustrate this approach: by locally approximating a complex model’s behavior around a specific instance, LIME perturbs input data, such as images, segmenting them into interpretable regions (or superpixels) and then inferring the contributions of these regions to the final prediction. In doing so, rule‐based methods bridge the gap between the opaque, high-dimensional representations within neural networks and the human need for understandable explanations.
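To make the local-approximation idea concrete, here is a minimal, self-contained sketch of a LIME-style local surrogate. It is illustrative only: the masking scheme, proximity kernel, and function names are simplifications chosen for this example, not the LIME library’s API.

import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(black_box, x, n_samples=500, width=0.75, seed=0):
    # LIME-style sketch: perturb the instance x, weight the perturbations by
    # proximity, and fit a weighted linear model whose coefficients act as
    # per-feature contributions to the black box's prediction at x.
    rng = np.random.default_rng(seed)
    masks = rng.integers(0, 2, size=(n_samples, x.shape[0]))  # keep/drop each feature
    perturbed = masks * x                     # dropped features fall back to a 0 baseline
    preds = black_box(perturbed)              # query the opaque model
    distances = 1 - masks.mean(axis=1)        # fewer changes -> closer to x
    weights = np.exp(-(distances ** 2) / width ** 2)
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(masks, preds, sample_weight=weights)
    return surrogate.coef_                    # interpretable feature contributions

# Toy usage: explain a nonlinear "black box" around one instance.
f = lambda X: 2 * X[:, 0] + np.sin(X[:, 1]) - 0.5 * X[:, 2]
print(local_surrogate(f, np.array([1.0, 2.0, 3.0])))

For image data, the same idea operates on superpixel masks rather than on raw tabular features, as described above.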
However, a key challenge remains: while rule‐based explanations offer significant practical utility, they often fall short of the deductive and causal rigor demanded by classical scientific models. The rules extracted are primarily inductive generalizations that capture statistical regularities rather than explicit causal mechanisms. This raises important questions regarding their soundness as genuine explanations. Can rules that merely correlate input features with outputs truly account for “why” a decision is made? In addressing this issue, the paper explores the inherent tension between the complexity of the internal representations in deep neural networks and the necessity for explanations to be both faithful to the model and intelligible to human users. While detailed rules may offer a closer approximation of the model’s decision process, overly complex or abstract rules risk alienating the end user, ultimately defeating the purpose of XAI. Balancing fidelity with simplicity is therefore a central concern in the design of rule‐based explanations.
This work situates rule‐based explanation as a critical step toward reconciling classical models of scientific explanation with the demands of modern, data‐driven AI. Although such explanations may not fully meet the strict deductive standards of the Hempelian covering law model, they play an indispensable role in making AI decisions transparent, accountable, and trustworthy. By examining both the philosophical foundations and the practical methodologies of rule extraction, this paper contributes to an ongoing dialogue on how best to “explain” AI in a manner that is both technically sound and accessible to diverse stakeholders.
References
De Regt, H. W., (2017), Understanding Scientific Understanding, Oxford: Oxford University Press.
Durán, J. M., (2021), “Dissecting scientific explanation in AI (sXAI): A case for medicine and healthcare”, Artificial Intelligence, 297, 1-14.
Fleisher, W., (2022), “Understanding, Idealization, and Explainable AI”, Episteme, 19(4), 534-560.
Hempel, Carl G. and Paul Oppenheim, (1948) [1965], “Studies in the Logic of Explanation”, Philosophy of Science, 15(2), 135–175. Reprinted in Hempel (1965), 245–290.
Khalifa, K., (2013), “The role of explanation in understanding”, Br. J. Philos. Sci., 64, 161-187.
Khalifa, K., (2017), Understanding, Explanation, and Scientific Knowledge, Cambridge: Cambridge University Press.
Páez, A., (2019), “The pragmatic turn in explainable artificial intelligence (XAI)”, Minds and Machines, 29, 441-459.
Räz, T., and Beisbart, C., (2022), “The Importance of Understanding Deep Learning”, Erkenntnis, 1-18.
Salmon, Wesley C., (1971), Statistical Explanation and Statistical Relevance, Pittsburgh, PA: University of Pittsburgh Press.
Sullivan, E., (2022), “Understanding from Machine Learning Models”, Br. J. Philos. Sci., 73(1), 109-133.
Truthlikeness, Machine Learning, and the Bias-Variance Trade-off
ABSTRACT. In many sciences, the idea that our hypotheses and models should adequately approximate some underlying target is crucial. Standard procedures in statistics, like point and interval estimation, curve fitting, and regression analysis, as well as many methods in the machine learning literature, appear to be clear instances of this general idea. The philosophical interpretations of such methods, however, remain debated. While some philosophers emphasize how statistical and scientific practice in general can be illuminated by assuming that scientists privilege hypotheses that are likely close to the truth (Festa 2011; Niiniluoto 1987, 2005), others are skeptical and favor a more empiricist view of hypotheses being close to the available observable evidence (Forster and Sober 1994; Sober 1999).
In this paper, we discuss the issue of truth approximation with reference to standard methods used in supervised machine learning. As we argue, one can understand such methods as systematic attempts to maximize the truthlikeness of competing models or hypotheses, i.e., to approximate the unknown truth about the target domain. Our conclusion favors the realist over the empiricist view in the debate on how to interpret current scientific practice in this area. We proceed as follows. First, we introduce supervised machine learning by discussing regression as a typical learning task, where a model is trained on known data to make predictions on new, unknown data points. In this context, the best model can be chosen as the one that minimizes its “empirical” error (as measured by some relevant loss function) relative to the available evidence (the “training set”). This strategy, however, is exposed to the risk of “overfitting”, i.e., of choosing very accurate but very complex models that tend to have poor performance on “new” evidence (the “test set”). To address this issue, different strategies of “structural” risk minimization are employed, which allow one to choose models that are, at the same time, reasonably accurate and still “simple” and generalizable (Shalev-Shwartz and Ben-David 2014; Vapnik 2000).
Second, we focus on the so-called bias-variance trade-off in machine learning, which describes the relation between the accuracy of a statistical model, its complexity, and its ability to generalize beyond the set of data on which it is trained (Geman, Bienenstock, and Doursat 1992; Geurts 2010). We then propose to interpret this crucial trade-off in terms of the notion of truthlikeness or verisimilitude (Niiniluoto 1987; Oddie and Cevolani 2022). More precisely, we show that the so-called bias-variance decomposition can be formally construed as a measure of (expected) closeness to evidence. This means, roughly, that choosing models with a low empirical error (i.e., bias) and with a low risk of overfitting is equivalent to maximizing the balance between the expected (closeness to the) truth of the model’s predictions and a suitably defined notion of their “informativeness”. Third, we discuss how our conclusions bear on the realism-empiricism debate in the field of statistics and machine learning. In particular, we defend the view that structural risk minimization and related strategies for dealing with overfitting can be viewed as rational strategies to maximize the expected truthlikeness of models and hypotheses, i.e., their estimated closeness to the unknown, complete truth about the relevant domain.
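For reference, the decomposition invoked above takes its familiar form under squared-error loss (standard notation, not the paper’s): with $y = f(x) + \varepsilon$, $\mathrm{Var}(\varepsilon) = \sigma^2$, and $\hat f$ the learned model,

\[
\mathbb{E}\big[(y - \hat f(x))^2\big]
= \underbrace{\big(\mathbb{E}[\hat f(x)] - f(x)\big)^2}_{\text{bias}^2}
+ \underbrace{\mathbb{E}\big[\big(\hat f(x) - \mathbb{E}[\hat f(x)]\big)^2\big]}_{\text{variance}}
+ \sigma^2 ,
\]

so that trading accuracy on the training data against model complexity amounts to trading the bias term against the variance term.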
ABSTRACT. Modalism – the view that ‘possibly’ and ‘necessarily’ are fundamental modal concepts and that they are irreducible to quantifiers – stands in contrast to possible worlds semantics, which formalises these concepts through an extensional quantificational framework. In other words, unlike the possible worlds semanticist, the modalist believes that quantification over worlds is to be explained in terms of modal operators – not the other way round. Due to the dominance of possible worlds semantics within contemporary modal logic, modalism has been viewed by many as a rather outdated theory. In this paper, however, I aim to challenge this view and argue that modalism remains both defensible and relevant, especially given the growing interest in non-quantificational (see, e.g., Vetter 2011) and proof-theoretic (Kürbis 2015, Parisi 2022) treatments of modal operators.
By relying on the works of Arthur Prior, Kit Fine (1977), and Graeme Forbes (1985, 1989), I first outline the key principles of modalism and its departure from standard quantificational accounts. I then address a major objection to modalism, which has been most powerfully presented by Joseph Melia (2003: 92–97). Melia has claimed that modalist formalisations implicitly mimic those of the first-order language of possible worlds and thus fail to be a theoretically distinct approach. In response, I argue that modalist formalisations are grounded in logical intuition and natural language patterns rather than model-theoretic constructions (such as those of possible worlds semantics). Although there have been some mentions of a similar response in the literature (Forbes 1985: 91; Nolan 2007: 189), it has never been fully developed, and I seek to do so in this paper.
In the second part of the paper, I discuss some important implications of modalism for proof-theoretic and inferentialist approaches to modal logic. Given that modalism does not require an external semantic model (such as a domain of possible worlds) to explain modal truths, it fits naturally with approaches that focus on the inferential role of modal operators within a formal system. Forbes (1985: 82–85) has already introduced a proof-theoretic characterisation of validity for modal logic, where modal operators are governed by natural deduction rules. Here, the necessity operator (□) has both introduction and elimination rules, and the possibility operator (◇) is defined derivatively as ◊A ≝ ¬□¬A. Some prospects for an account of modal operators in terms of rules of inference have been explored by Nils Kürbis (2015). Andrew Parisi (2022) has offered a hypersequent calculus designed to support such an inferentialist theory. Recent advances in proof theory for modal logic are also found in the works of Francesca Poggiolesi and Greg Restall (2012) and Stephen Read (2015).
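In broad outline, one common natural deduction formulation of such rules is the following (a generic presentation for orientation, not necessarily Forbes’s exact rules; the strength of the resulting system depends on the side condition on $\Box$-introduction):

\[
\frac{\Box A}{A}\ (\Box\text{E})
\qquad\qquad
\frac{\Box B_1, \ldots, \Box B_n \vdash A}{\Box B_1, \ldots, \Box B_n \vdash \Box A}\ (\Box\text{I})
\]

with possibility defined by $\Diamond A := \neg\Box\neg A$, as stated above.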
In order to develop proposals of this kind further, I first examine how modalist proof rules can be systematised in a way that parallels structuralist accounts of meaning in non-modal logics, where logical expressions are understood through their inferential roles rather than model-theoretic interpretations. Additionally, I consider whether a proof-theoretic modalism can be reconciled with a realist stance towards modality. While inferentialism is often associated with anti-realist or conventionalist views, I propose that modal inferences may reflect real modal structures in a way analogous to how inferential rules in arithmetic capture objective mathematical truths. This opens the possibility of 'modal inferential realism' – a view in which the validity of modal inferences is grounded not merely in linguistic practices but in the modal structure of reality itself. Such an approach would provide a middle ground between modal conventionalism and metaphysically heavy-handed realist (be they abstract or concrete) interpretations of possible worlds discourse.
References
1. Forbes, G., 1985. The Metaphysics of Modality. New York: Oxford University Press.
2. Forbes, G., 1989. Languages of Possibility: An Essay in Philosophical Logic. Oxford: Blackwell.
3. Kürbis, N., 2015. Proof-Theoretic Semantics, a Problem with Negation and Prospects for Modality. Journal of Philosophical Logic, 44(6): 713–727.
4. Melia, J., 2003. Modality. Chesham: Acumen.
5. Nolan, D., 2007. Modality by Joseph Melia. Mind, 116(461): 187–190.
6. Parisi, A., 2022. A Hypersequent Solution to the Inferentialist Problem of Modality. Erkenntnis, 87(4): 1605–1633.
7. Poggiolesi, F., Restall, G., 2012. Interpreting and Applying Proof Theories for Modal Logic. In G. Restall and G. Russell (Eds.), New Waves in Philosophical Logic (39–62). New York: Palgrave Macmillan.
8. Prior, A. N., Fine, K., 1977. Worlds, Times and Selves. London: Duckworth.
9. Read, S., 2015. Semantic Pollution and Syntactic Purity. The Review of Symbolic Logic, 8(4): 649–661.
10. Vetter, B., 2011. Recent Work: Modality without Possible Worlds. Analysis, 71(4): 742–754.
Relational and algebraic semantics for modal weak Kleene logics
ABSTRACT. The basic weak Kleene logics $\Bo$ and $\PWK$ (standing for Bochvar logic and Paraconsistent weak Kleene, respectively) can be introduced semantically as three-valued logics characterized by an infectious non-classical value. They are the logics induced by the following algebra taking as designated values, respectively, $\{1\}$ and $\{1,\ant\}$:
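In the standard presentation, the operations are the weak Kleene ones on $\{0, \ant, 1\}$, with $\ant$ infectious:

\[
\begin{array}{c|ccc}
\wedge & 0 & \ant & 1\\ \hline
0 & 0 & \ant & 0\\
\ant & \ant & \ant & \ant\\
1 & 0 & \ant & 1
\end{array}
\qquad
\begin{array}{c|ccc}
\vee & 0 & \ant & 1\\ \hline
0 & 0 & \ant & 1\\
\ant & \ant & \ant & \ant\\
1 & 1 & \ant & 1
\end{array}
\qquad
\begin{array}{c|c}
 & \neg\\ \hline
0 & 1\\
\ant & \ant\\
1 & 0
\end{array}
\]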
While they have received less attention than their strong counterparts, weak Kleene logics have drawn increasing interest in recent years. In particular, they admit a nice syntactic characterization (\cite{CiuniCarrara}) and as such they belong to the larger family of logics of variable inclusion (\cite{Bonziobook}). Unfortunately, an algebraic study of these logics is not fruitful, since from the perspective of (abstract) algebraic logic they are not even protoalgebraic (\cite{FontBook}), and as such they have a very weak connection to their algebraic counterparts. It is possible to overcome this limitation, though.
Historically, weak Kleene logics were introduced by Bochvar (\cite{Bochvar38}) and Halldén (\cite{Hallden}) in a richer language, comprising the following classical recapture operators:
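Chief among these, in the notation used below, is the external assertion operator $\Jdue$, which in the standard presentation behaves on $\{0, \ant, 1\}$ as

\[
\Jdue(1) = 1, \qquad \Jdue(\ant) = \Jdue(0) = 0,
\]

so that $\Jdue\varphi$ reads “$\varphi$ takes the value $1$” and always receives a classical value.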
These operators are called external, in the sense that the fragment of the language consisting of formulae in which every variable occurs under the scope of some external operator is precisely a copy of classical logic. The resulting logics are the external counterparts of the previous ones: $\B$ and $\PWKe$. These expansions recover strong algebraic properties.
The addition of $\Jdue$ is enough to restore the algebraic connection that the basic, non-external Kleene logics lacked: both $\B$ and $\PWKe$ are algebraizable, as proved, respectively, in \cite{Ignoranzasevera} and \cite{BonzioZamp24}. These logics share the quasivariety of Bochvar algebras $\class{BCA}$ as their equivalent algebraic semantics. $\class{BCA}$ was introduced in \cite{FinnGrigolia} and has recently been studied in \cite{SMikBochvar}, which provided a representation theorem for the $\Jdue$-free reduct of Bochvar algebras in terms of P\l onka sums of Boolean algebras (plus additional operations). P\l onka sums \cite{Plo67, Plonka69} are an algebraic construction which allows one to build a new algebra starting from a semilattice direct system of similar algebras. This tool has proved effective in the algebraic study of algebras connected with weak Kleene logics and, more generally, of the logics of variable inclusion \cite{Bonziobook}.
In \cite{BonzioZamp24} we further expand the language and explore the modalization of external weak Kleene logics. Historically, partial attempts have been made by \cite{Segerberg67} and \cite{Correia}. Our work is divided into two parts: the first focuses on Kripke-style semantics, the other on algebraic semantics for modal weak Kleene external systems. We introduce the logics $\MB$ and $\MPWK$, respectively modal Bochvar external logic and modal external PWK. The reading of the $\Box$ modality differs between the two systems, according to the underlying propositional logic, but in both cases it is infectious in a precise local sense. Under a possible worlds interpretation, the intuitive reading of $\Box\varphi$ is “$\varphi$ is true at every accessible world” in $\MB$, and “$\varphi$ is non-false at every accessible world” in $\MPWK$. Both logics are axiomatized and a complete Kripke-style semantics is provided for each. The systems are also decidable and easy to extend axiomatically, yielding completeness results with respect to classes of frames characterized by well-known properties.
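To a first approximation, and leaving aside the infectious behaviour of $\ant$ which the full three-valued semantics handles locally, these readings of $\Box$ correspond to conditions of the form

\[
\Box\varphi \text{ holds at } w \text{ in } \MB \iff V(\varphi, v) = 1 \text{ for every } v \text{ with } wRv,
\]
\[
\Box\varphi \text{ holds at } w \text{ in } \MPWK \iff V(\varphi, v) \neq 0 \text{ for every } v \text{ with } wRv.
\]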
In the algebraic part, starting from the algebraizability of the studied modal logics, we introduce their global versions, $g\MB$ and $g\MPWK$, in the sense of the global semantic consequence relation over a Kripke frame. The choice to move from local to global logics is motivated by the failure of algebraizability for the local modal systems, which is recovered once we consider their global versions. We present a study of the algebraic counterparts of these logics, axiomatizing their equivalent algebraic semantics, which we denote by the quasivarieties $\class{MBCA_B}$ and $\class{MBCA_H}$; in contrast with the non-modal case, these are different for $g\MB$ and $g\MPWK$. We prove a P\l onka-style representation theorem for the $\Jdue$-free reducts of $\class{MBCA_B}$ and $\class{MBCA_H}$, which turn out to be the regularization of structures very close to Boolean algebras with operators ($\class{BAO}$), and which become proper $\class{BAO}$s once we consider the intersection of the two logics. Finally, we explore some of the subquasivarieties of these classes, which correspond to standard extensions of the basic modal logics $g\MB$ and $g\MPWK$ characterized by well-known frame properties on the side of their Kripke semantics.
Pseudoscience as a Cognitive Closure Mechanism: Homogeneity, Negationism, and the Illusion of Understanding
ABSTRACT. Pseudoscience, characterized by its resistance to falsification and its appeal to intuitive reasoning, plays a crucial role in fulfilling the human need for cognitive closure. Unlike scientific inquiry, which is inherently provisional and fragmented, pseudoscience offers a homogeneous explanatory system that delivers immediate and seemingly definitive answers to complex phenomena. This paper examines the psychological and epistemic mechanisms underlying the appeal of pseudoscience, focusing on its function as a closure-inducing belief system. I argue that the homogeneity of pseudoscientific discourse, which provides internally consistent yet empirically unsubstantiated explanations, serves as an epistemic refuge for those seeking certainty.
A central aspect of this analysis is the role of pseudoscience in fostering negationist attitudes—such as climate change denialism, vaccine skepticism, and flat Earth beliefs. These examples demonstrate that pseudoscientific frameworks do not simply propose alternative explanations but actively reject the mainstream scientific consensus. This deliberate rejection satisfies a cognitive need for a coherent and closed system, in which counterevidence is dismissed or reinterpreted to reinforce prior beliefs (Lewandowsky et al., 2012; Hansson, 2017). Thus, the negationist dimension of pseudoscience can be seen as a defensive mechanism against the discomfort caused by scientific uncertainty and complexity (Festinger, 1957; Kruglanski, 2004).
To frame this discussion, I integrate epistemological and psychological perspectives that highlight the contrast between scientific and pseudoscientific reasoning. From an epistemological standpoint, the philosophy of science acknowledges the ‘patchwork’ nature of scientific knowledge—where theories evolve, interact, and sometimes conflict before converging toward a refined understanding (Laudan, 1977; Longino, 1990). In stark contrast, pseudoscience exhibits “epistemic rigidity,” a steadfast refusal to revise core assumptions in light of new evidence. This rigidity enhances its persuasiveness by reducing the cognitive effort required to engage with uncertainty (Osman et al., 2022).
Psychologically, research on the need for closure (Kruglanski et al., 1997) shows that individuals with a high desire for certainty are prone to favoring simplified, black-and-white explanations. Pseudoscience readily caters to this preference, tapping into cognitive biases such as the illusion of explanatory depth (Rozenblit & Keil, 2002), motivated reasoning (Kunda, 1990), and a generalized mistrust of scientific authorities (Lewandowsky et al., 2013). These biases work together to reinforce the self-sealing nature of pseudoscientific belief systems, making them particularly resistant to refutation (Boudry & Braeckman, 2011).
Building on these insights, I propose that pseudoscience functions as an epistemic coping strategy with two key roles: (1) providing an illusion of understanding through a coherent, unfalsifiable framework, and (2) shielding individuals from the discomfort associated with scientific complexity. This perspective has significant implications for science communication and public policy. Rather than solely debunking pseudoscientific claims, effective interventions should target the underlying psychological needs that drive their adoption.
In conclusion, the prevalence of pseudoscience is not merely a byproduct of misinformation or low scientific literacy, but also reflects deeper epistemic and psychological needs. Viewing pseudoscience as a cognitive closure mechanism provides fresh insights into its resilience against counterevidence and scientific progress. This understanding can inform both theoretical developments in the philosophy of science and practical efforts in education and policy-making, ultimately fostering greater epistemic openness and critical thinking.
References:
Boudry, M., & Braeckman, J. (2011). Immunizing strategies and epistemic defense mechanisms. Philosophia, 39(1), 145–161.
Chinn, C. A., & Malhotra, B. A. (2002). Epistemologically authentic inquiry in schools: A theoretical framework for evaluating inquiry tasks. Science Education, 86(2), 175–218.
Festinger, L. (1957). A theory of cognitive dissonance. Stanford University Press.
Hansson, S. O. (2017). Science denial as a form of pseudoscience. Studies in History and Philosophy of Science, 63, 39–47.
Kruglanski, A. W. (2004). The psychology of closed-mindedness. Psychology Press.
Kruglanski, A. W., Webster, D. M., & Klem, A. (1997). Motivated resistance and openness to persuasion in the presence or absence of prior information. Journal of Personality and Social Psychology, 73(5), 1034–1049.
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498.
Laudan, L. (1977). Progress and its problems: Towards a theory of scientific growth. University of California Press.
Lewandowsky, S., Ecker, U. K. H., & Cook, J. (2013). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13(3), 106–131.
Lobato, E. J. C., Mendoza, J., Sims, V. K., & Chin, M. G. (2014). Examining the relationship between conspiracy theories, paranormal beliefs, and pseudoscience acceptance. Applied Cognitive Psychology, 28(5), 617–625.
Longino, H. (1990). Science as social knowledge: Values and objectivity in scientific inquiry. Princeton University Press.
Osman, M., Heath, H., Evans, J. S. B. T., Stanovich, K. E., & Over, D. E. (2022). The psychological basis of pseudoscientific beliefs: A dual-process perspective. Trends in Cognitive Sciences, 26(3), 213–224.
Rozenblit, L., & Keil, F. (2002). The misunderstood limits of folk science: An illusion of explanatory depth. Cognitive Science, 26(5), 521–562.
ABSTRACT. This paper introduces Progress Crucial Realism (PCR), a framework designed to merge the empirical focus of Progress Realism (PR) (Saatsi, 2020) with the ontological depth of Truth-Content Realism (T-CR). (PCR) addresses critical limitations in (PR) and (T-CR) by advocating a minimalist yet robust realist stance. Through this lens, (PCR) aims to align with empirical rigor while minimizing metaphysical commitments, providing a balance between (PR) and (T-CR), i.e. avoiding speculation and embracing realism about the indispensable components of scientific theories. (PR) emphasizes the representational strength of scientific theories and rejects speculative metaphysical assumptions, contra (T-CR). Saatsi argues that theories’ empirical successes, such as their predictive power and explanatory capabilities, justify a realist stance without committing to unobservable entities or deep metaphysical frameworks. However, (PR) faces criticism for its inability to address ontological questions about key scientific constructs, e.g. “spin” in quantum mechanics (QM) (Egg, 2021). (PCR) addresses these critiques of (PR) by introducing the concept of crucial objects, defined as the indispensable mathematical and structural elements required for a theory’s formulation and (thus) empirical success. (PCR) posits that these objects are not merely tools for representation but reflect the deeper structure of physical reality. For example, the Hamiltonian operator is a crucial object in QM because it is indispensable for the theory’s formulation and dynamics. In contrast, entities like the wave function, while widely used, are not crucial objects since alternative mathematical formulations can replace them without compromising the theory’s content. The central tenet is that only the crucial objects of a theory deserve an ontological status. In this way, (PCR) ensures the presence of metaphysical commitments, but these are confined to the minimal mathematical entities required to construct the theory. (PCR) thus extends the scope of (PR) while preserving its commitment to empirical validation and avoiding (T-CR)’s deep metaphysical speculations (Albert, 2023), which run into unnecessary complexity. Overall, (PCR) is a balanced alternative to existing frameworks. It operates through the application of four criteria:
(i) Identify a theory that accounts for a specific set of empirical successes (e.g., QM).
(ii) Determine the crucial objects of the theory.
(iii) Provide an empirical interpretation of these crucial objects, specifying what they refer to in the physical world.
(iv) Constrain metaphysical commitments strictly to these objects and their empirical significance.
This methodology ensures that metaphysical commitments remain minimal, yielding a realism grounded in empirical necessity. Our work also outlines three possible strategies for resolving the problem of underdetermination:
• Look for the common mathematical structures shared across all different empirically equivalent mathematical theories; these shared structures, if they exist, can provide a natural basis for identifying crucial objects.
• Choose between the alternative theories without relying on additional metaphysical commitments, such as super-empirical virtues. Instead, one can select the theory that entails the smallest number of metaphysical commitments, and thus the one with fewer crucial objects.
• A more cautious eliminativist approach suggests that such dualities may indicate the absence of a more fundamental theory. From this perspective, the lack of a unified mathematical framework is a sign that an overarching theory has yet to be discovered. Consequently, it would be premature to commit to any crucial objects at this stage, as the current formulations may only represent incomplete descriptions of the phenomena.
Finally, we apply (PCR) to the case of spin in Quantum Field Theory, and we argue that spin is identified as a crucial object because of its indispensable role in the classification of particles and in the formulation of fundamental equations like the Dirac equation. As described by Wigner’s classification, spin is mathematically a Casimir operator used to classify irreducible representations of the Poincaré group. Moreover, we argue that spin also has an empirical interpretation, as in the Stern-Gerlach experiment. This classifies spin as a real and fundamental property of the physical world by (PCR), since it is a crucial object (ii) of an empirically confirmed physical theory (i) with an appropriate physical interpretation (iii). While (PCR) emphasizes minimal metaphysical commitments tied to the best available theories, the concept of crucial objects could also provide a framework for navigating the layered ontologies of physical theories across different scales.
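For the record, the group-theoretic facts appealed to here are standard: the Poincaré group has two Casimir invariants, and for massive irreducible representations their eigenvalues are fixed by mass and spin,

\[
P^\mu P_\mu = m^2, \qquad W^\mu W_\mu = -m^2\, s(s+1),
\]

where $W^\mu$ is the Pauli–Lubanski vector; spin $s$ thus labels irreducible representations alongside mass $m$.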
References
Albert, D. Z. (2023). A Guess at the Riddle: Essays on the Physical Underpinnings of Quantum Mechanics. London, England: Harvard University Press.
Egg, M. (2021). Quantum ontology without speculation. European Journal for Philosophy of Science 11 (1), 1–26.
Saatsi, J. (2020). Truth vs. progress realism about spin. In J. Saatsi and S. French (Eds.), Scientific Realism and the Quantum. Oxford University Press.
Unrestricted Connexive Conditionals over Classical Logic
ABSTRACT. Connexive principles trace back to the earliest accounts of conditionals, but they remain difficult to formalize given their contraclassicality. This raises the question of whether there can be a semantics that (1) validates all connexive principles and some basic conditional principles unrestrictedly, (2) over a classical extensional base, and (3) in a way compatible with both the desiderata of the general literature on conditionals and a notion of connection. I show that this is possible. The logic CX combines total minimal choice-functional frames with a semantic clause for conditionals that expresses connections at world-witnesses for antecedents. Its hyper- and strongly connexive conditionals validate identity, modus ponens and excluded middle; they invalidate the paradoxes of material implication, antecedent strengthening, contraposition, explosion and implosion, but also simplification and distribution over conjunction, which hold only for possible antecedents. This, however, is to be expected given CX’s non-vacuist treatment of impossible antecedents.
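For reference, the connexive principles at issue are standardly given as Aristotle’s and Boethius’ theses:

\[
\neg(\neg A \to A), \qquad \neg(A \to \neg A), \qquad (A \to B) \to \neg(A \to \neg B), \qquad (A \to \neg B) \to \neg(A \to B).
\]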
Can Scientific Communities Profit from Evaluative Diversity?
ABSTRACT. Science is far from being an individual enterprise. Scientists spend a large part of their time talking to colleagues, attending conferences, and, more generally, learning from others. Yet, at times, scientists may judge the quality of the same scientific approach rather differently even if they belong to the same community (Chang, 2017; Longino, 2019; Schindler, 2021; Ward, 2022). Accordingly, one may wonder what the best strategy for a scientist in this situation is: should one mainly engage and learn from peers who share their evaluative criteria, or should one seek out a more varied community? More broadly, what kind of epistemic environment is most effective, one where evaluative criteria are uniformly shared, or one characterised by diverse criteria?
This paper addresses these questions using an agent-based model, based on the NK framework widely applied in biology, economics, and philosophy (Reijula et al., 2023; Wu, 2023). We extend this model to represent a group of problem solvers who, while exploring the same set of solutions, may evaluate them differently and, consequently, pursue different ones.
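To illustrate the kind of model at stake, here is a minimal sketch of an NK-style landscape with agents whose evaluative criteria blend a common landscape with an idiosyncratic one. All parameters, the mixing rule, and the learning dynamics below are illustrative assumptions for this sketch, not the authors’ specification.

import itertools, random

N, K = 10, 3            # bits per solution, epistatic links per bit
N_AGENTS, STEPS = 20, 200
DIVERSITY = 0.3         # 0 = identical criteria, 1 = fully idiosyncratic

def make_landscape(rng):
    # Random NK-style landscape: each locus i gets a lookup table over the
    # states of itself and K other loci; fitness is the mean contribution.
    neigh = [[i] + rng.sample([j for j in range(N) if j != i], K) for i in range(N)]
    tables = [{bits: rng.random() for bits in itertools.product((0, 1), repeat=K + 1)}
              for _ in range(N)]
    return lambda sol: sum(tables[i][tuple(sol[j] for j in neigh[i])]
                           for i in range(N)) / N

rng = random.Random(0)
common = make_landscape(rng)                               # shared "objective" value
private = [make_landscape(rng) for _ in range(N_AGENTS)]   # agent-specific criteria

def evaluate(a, sol):
    # An agent's evaluation mixes the common landscape with its private one.
    return (1 - DIVERSITY) * common(sol) + DIVERSITY * private[a](sol)

solutions = [[rng.randint(0, 1) for _ in range(N)] for _ in range(N_AGENTS)]

for _ in range(STEPS):
    for a in range(N_AGENTS):
        # Local search: flip one bit, keep it if the agent's own score improves.
        cand = solutions[a][:]
        cand[rng.randrange(N)] ^= 1
        if evaluate(a, cand) > evaluate(a, solutions[a]):
            solutions[a] = cand
        # Social learning: adopt a random peer's solution if it looks better to me.
        peer = solutions[rng.randrange(N_AGENTS)][:]
        if evaluate(a, peer) > evaluate(a, solutions[a]):
            solutions[a] = peer

print("mean objective fitness:", sum(common(s) for s in solutions) / N_AGENTS)

Varying DIVERSITY between 0 and 1 interpolates between a fully homogeneous and a fully heterogeneous community, which is the kind of comparison described below.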
We show that, perhaps surprisingly, problem solvers with moderately diverse criteria are on average more successful than problem solvers in homogeneous groups. Heterogeneous communities cover a broader range of solutions, as an agent might find a solution that, though not beneficial for her, may be ideal for another. This diversity proves particularly beneficial when tackling complex problems. Furthermore, we find that while homogeneous communities benefit from limited information flow, in line with existing results in the literature (Lazer and Friedman, 2007; Zollman, 2007), heterogeneous ones thrive on continuous information exchange. Agents with different evaluative criteria profit greatly from learning about the progress made by others.
This further qualifies the received view on the ideal communication structure of a scientific community (Zollman, 2007). We therefore argue that more empirical work on the extent to which problem solvers actually use different criteria is needed to understand which communication structure is more beneficial (Wu and O’Connor, 2023). Additionally, our results contribute to the increasingly large literature on epistemic diversity (Reijula and Kuorikoski, 2022) by drawing attention to an overlooked type of diversity, namely diversity of evaluative criteria.
Finally, our model provides a practical argument against the worry that the regular use of different non-epistemic values by scientists may hinder inquiry. Recently, philosophers have argued that scientists should use their non-epistemic values in scientific decisions (Douglas, 2009), or that they cannot avoid doing so (Biddle, 2013). However, one may worry that this may slow down inquiry, or even undermine the benefits of collaboration (Chang, 2012). Contra this worry, our results suggest that science may benefit from a diversity of values.
References.
Biddle, Justin (2013). “State of the field: Transient underdetermination and values in science”. In: Studies in History and Philosophy of Science Part A 44.1, pp. 124–133.
Chang, Hasok (2012). Is Water H2O? Evidence, Pluralism and Realism. Springer.
Chang, Hasok (2017). “Is pluralism compatible with scientific realism?” In: The Routledge Handbook of Scientific Realism. Routledge, pp. 176–186.
Douglas, Heather (2009). Science, Policy, and the Value-Free Ideal. University of Pittsburgh Press.
Lazer, David and Allan Friedman (2007). “The network structure of exploration and exploitation”. In: Administrative Science Quarterly 52.4, pp. 667–694.
Longino, Helen E. (2019). Studying Human Behavior: How Scientists Investigate Aggression and Sexuality. University of Chicago Press.
Reijula, Samuli and Jaakko Kuorikoski (2022). “The Diversity-Ability Trade-Off in Scientific Problem Solving”. In: Philosophy of Science (Supplement).
Reijula, Samuli, Jaakko Kuorikoski, and Miles MacLeod (2023). “The division of cognitive labor and the structure of interdisciplinary problems”. In: Synthese 201.6, p. 214.
Schindler, Samuel (2021). “Theoretical Virtues: Do scientists think what philosophers think they ought to think?” In: Philosophy of Science.
Ward, Zina B. (2022). “Disagreement and Values in Science”. In: The Routledge Handbook of Philosophy of Disagreement.
Wu, Jingyi (2023). “Better than Best: Epistemic Landscapes and Diversity of Practice in Science”. In: Philosophy of Science.
Wu, Jingyi and Cailin O’Connor (2023). “How Should We Promote Transient Diversity in Science?” In: Synthese 201.2, pp. 1–24. doi: 10.1007/s11229-023-04037-1.
Zollman, Kevin J. S. (2007). “The communication structure of epistemic communities”. In: Philosophy of Science 74.5, pp. 574–587.
ABSTRACT. It has been two years since the deaths of Bruno Latour (1947–2022) and Ian Hacking (1936–2023). Reflecting on two interviews published in Italy (Benvenuto, 2001; Zipoli Caiani, Manetti, 2008), in which each scholar shared their thoughts on the other, I decided to explore the mutual references in the works of these thinkers. This contribution presents the findings of that research, uncovering not only an intellectual relationship (“at a distance”) that revolved around key themes in the science debate at the turn of the century, but also a peculiar intersection of philosophy and social science that can still offer a fresh perspective on the crisis of expertise and on what a politics of science could be.
The two authors have been discussed together and compared (Kusch, 2009; Simons, Vagelli, 2021), but there seems to be no systematic study of the references each makes to the other. After examining a substantial number of their texts with this aim in mind, one key finding is that from the second half of the 1980s to the mid-1990s it was mostly Hacking who cited Latour (primarily as a reference for studies on experiments and laboratories), whereas by the end of the 1990s it was the references to Hacking in Latour’s work that became more frequent. Moreover, the works cited by Latour are almost always the same: Representing and Intervening (1983), The Self-Vindication of the Laboratory Sciences (1992), and The Social Construction of What? (1999). This “turn” occurred at the end of the most intense period of the so-called Science Wars (the works in which Latour and Hacking sought to provide definitive answers to this debate were both published in 1999) and it bears the marks of that context. What Latour found significant in Hacking were his reflections on the success of the natural sciences and the conceptualization of social constructivism (Latour, 2003). These were the same years in which Latour was working to move beyond the relativist positions often attributed to the social critique of science, as well as the ‘factualist stance’ of those who ignored the historical nature of scientific activity (Latour, 2004). It is no coincidence that Latour criticized Hacking for assuming the natural-social dichotomy. Hacking partly accepted this critique, acknowledging the historicity of the natural sciences, but he also reaffirmed the autonomy of scientific facts, which, once formed, may take paths beyond those initially determined by their historical origins as they confront reality (almost like a principle of demarcation). Almost thirty years after their first research in laboratories, Latour and Hacking seem to have returned to the starting point (at least on the surface). A deeper examination of their respective views on these specific issues, and above all of their dialectical tension — what Latour means by stability and what Hacking means by looping effects in the realm of constructivism — is still, I believe, highly valuable for reflecting on contemporary and pressing topics, such as how to conceptualize the natural world and address the historical development of our living environment in the context of climate change.
References
Benvenuto, S. 2001. Bruno Latour. Politiche della natura. https://www.doppiozero.com/bruno-latour-politiche-della-natura.
Hacking, I. 1983. Representing and Intervening. Cambridge: Cambridge University Press.
Hacking, I. 1992. The Self-Vindication of the Laboratory Sciences. In Andrew Pickering (ed.), Science as Practice and Culture. Chicago: University of Chicago Press, pp. 29-64
Hacking, I. 1999. The Social Construction of What? Cambridge, Mass: Harvard University Press.
Kusch, M. 2002. Metaphysical déjà vu: Hacking and Latour on science studies and metaphysics. Studies in History and Philosophy of Science, 33 (2002), pp. 639-647.
Latour, B. 1999. Pandora’s Hope. Essays on the Reality of Science Studies. Cambridge, Mass: Harvard University Press.
Latour, B. 2003. The promises of constructivism. D.Ihde (ed.) Chasing Technology- Matrix of Materiality. Bloomington: Indiana University Press, pp. 27-46.
Latour, B. 2004. Politics of Nature. Cambridge, Mass: Harvard University Press.
Simons, M., Vagelli, M. 2021. Were experiments ever neglected? Ian Hacking and the history of philosophy of experiment. Philosophical Inquiries 9 (1), pp. 167-188.
Zipoli Caiani, S., Manetti, D. 2008. Intervista a Ian Hacking. Pianeta Galileo 2008. https://www.consiglio.regione.toscana.it/upload/Pianeta_Galileo/atti/2008/13_zipoli.pdf.
Brain Decoding and Operational Progress in Neuroscience
ABSTRACT.
The introduction of multivariate approaches for brain data analysis has played a key role in advancing neuroscientific research in recent decades. However, researchers have questioned the presumption that multivariate methods give us direct access to mental content, as claimed by some supporters of this new methodology. In this talk, we will analyze this recent episode of scientific change from the perspective of the philosophy of scientific practice, characterizing fMRI-based research as a three-layered experimental practice. We will show how our proposal can offer a fine-grained account of how the introduction of multivariate methods affected neuroscientific practice, describing this episode of scientific change as a case of operational progress.
A Stage Theory for Structured Properties and Propositions
ABSTRACT. Many philosophers are attracted to fine-grained accounts of properties, propositions, and relations (PPRs). The structuralist view, asserting that PPRs are built up by their constituents, has gained increasing attention over recent decades. However, it has been argued that the Russell-Myhill paradox shows the inconsistency of this view within higher-order logic. In response, I propose a stage theory in which structured entities are iteratively generated, thereby preventing the paradox from arising. On this approach, the universe of PPRs is never available as a whole, but rather it indefinitely expands across the hierarchy of stages. In particular, my proposal suggests a predicative turn of higher-order logic, contending that if the generation of PPRs in stages is taken seriously, it requires the restriction of quantifiers to entities already available at each particular stage, whenever we define new entities.
ABSTRACT. In several articles, Storrs McCall and E.J. Lowe have claimed that endurantism (or three-dimensionalism) and perdurantism (or four-dimensionalism) are “equivalent” and “intertranslatable without remainder” (see [McCall 1994, 214–216], [McCall and Lowe 2003, 2006] as well as [Lowe 2006]). Call this the 3D/4D Equivalence Thesis.
The 3D/4D Equivalence Thesis, however, is difficult to assess. The main reason is that McCall and Lowe do not explain what they mean by “equivalence” or “intertranslatability”. This is problematic because philosophers of science, in order to state clear and precise criteria for when two theories can be said to be equivalent, have formulated a whole spectrum of notions of equivalence, and it is by now clear that there are radically different notions one could work with. For example, [Halvorson 2019] provides a taxonomy of notions of equivalence, including notions such as Morita equivalence and categorical equivalence, and it is now considered a standard reference (see [Davey 2020]).
To be sure, McCall and Lowe talk about “translation schemes” but many questions are left unanswered. For example, one might want to know what (if any) constraints should be imposed on the notion of a translation scheme, whether a translation scheme is supposed to preserve the logical structure of the two theories (in the sense that it preserves the meaning of the logical terms), whether it should be just theorem-preserving or whether it should also preserve the intuitive meaning of the key terms involved in the debate, and so on. Even more technical questions arise when one starts thinking about translations. For example, one could ask whether a translation f from a theory T1 to a theory T2 and a translation g from T2 to T1 should be inverses. Although McCall and Lowe give some indications or proof of concept on how an endurantist can “interpret” perdurantist talk and vice versa (see, in particular, [McCall and Lowe 2006, 573-574]), they do not explicitly provide a definition of a translation function that maps the endurantist language to the perdurantist one (and vice versa).
Fortunately, Lowe [2006, 722] makes an illuminating analogy. He says that endurantism and perdurantism are “equivalent” in the same sense in which geometry formulated using only the apparatus of points and geometry formulated using only “volumes of space” are equivalent. According to him, just like a geometer who takes points as basic can construct volumes from points and one who takes pointless regions as basic can construct points, an endurantist can construct temporal parts from enduring objects and a perdurantist can construct an enduring object out of temporal parts. It is worth quoting him. Talking about the alleged equivalence between endurantism and perdurantism, he says:
This (...) kind of equivalence is very much like that between the following two different theories of geometry: the theory which sees points as the fundamental geometrical entities and volumes of space as compact sets of points, and the theory which sees volumes of space as the fundamental geometrical entities and points as infinitely nested sets of volumes.
The analogy is illuminating because in a recent article [Barrett and Halvorson 2017] it is shown precisely in which sense systems of geometry formulated using only the apparatus of points and systems of geometry formulated using only the apparatus of lines are “equivalent”. I take the case of lines-based and points-based geometries as an instance of the general “alleged” equivalence between point-based and region-based systems of geometry.
In particular, Barrett and Halvorson have shown that such theories are Morita equivalent. The latter is a formal notion that has recently attracted much interest in the literature on theoretical equivalence since it captures intuitive cases of equivalence that more standard notions such as definitional equivalence or mutual interpretability cannot capture (see, for instance, [Tsementzis 2017, Weatherall 2019, McEldowney 2020, Dewar 2022, 2023, Sider 2020, Teitel 2021, North 2021]). In proving their result, Barrett and Halvorson clarified a crucial premise of the so-called argument from geometry, a famous argument that [Goodman 1975, 1978] and [Putnam 1977] put forward against the doctrine of metaphysical realism.
Lowe’s analogy and Barrett and Halvorson’s result suggest the following: if the equivalence between endurantism and perdurantism is “very much like” that between point-based and line-based geometries, and the latter can be made precise using Morita equivalence, then we should be able to make the former precise using the same notion.
The main objective of this talk, then, is to show that endurantism and perdurantism (or at least a certain axiomatization of such views) are Morita equivalent. This result will thus make precise in which sense the 3D/4D Equivalence Thesis put forward by McCall and Lowe is true.