Integrating Perspectives: Learning from Model Divergence
ABSTRACT. Science deals in the currency of representational models that describe relations thought to hold among natural phenomena. I will argue for the necessity of model pluralism, based on the partiality of scientific representation. Given that there are many models, what are their relationships to each other? Reduction/Unification and Elimination/Inconsistency have been richly explored in philosophy of science. Inspired by Giere, I develop an account of perspectivism that provides new resources for understanding a dynamic, integrative form of model-model relationship. As an example I will consider practices in experimental structural biology to detail how divergent protein models can be, and are, integrated to yield more accurate predictions than either perspective could deliver by itself. Jointly refining the inferred representations from x-ray crystallography and nuclear magnetic resonance spectroscopy relies on and preserves the plurality of scientific models.
It is often said that science is “messy” and, because of this messiness, abstract philosophical thinking is only of limited use in analysing science. But in what ways is science messy, and how and why does this messiness surface? Is it an accidental or an integral feature of scientific practice? In this symposium, we try to understand some of the ways in which science is messy and draw out some of the philosophical consequences of taking seriously the notion that science is messy.
The first presenter discusses what scientists themselves say about messy science, and whether they see its messiness as a problem for its functioning. Examining scientists’ reflections about “messy science” can fulfill two complementary purposes. Such an analysis helps to clarify in what ways science can be considered “messy” and thus improves philosophical understanding of everyday research practice. The analysis also points to specific pragmatic challenges in current research that philosophers of science can help address.
The second presenter discusses the implications of “messy science” for scientific epistemology, specifically for scientific justification. They show how this messiness plays itself out in a particular episode in nineteenth-century medicine, the transition from mid-nineteenth-century miasma views to late-nineteenth-century early germ views, by examining different senses in which scientific epistemology may be said to be messy and laying out the ways in which such messy scenarios differ from the idealized circumstances of traditional accounts of justification. They conclude by discussing some limits that taking these differences into account will impose on developing practice-based views of scientific justification, explaining how it is still possible for such views to retain epistemic normativity.
The third presenter explores how the messiness of eighteenth-century botanical practice, resulting from a constant lack of information, generated a culture of collaborative publishing. Given the amount of information required for an accurate plant description, let alone a taxonomic attribution, eighteenth-century botanists and their readers were fully aware of the preliminary nature of their publications. They openly acknowledged the necessity of updating and correcting them, and developed collaborative strategies for doing so efficiently. Authors, most famously Carl Linnaeus, updated their own writings in cycles of iterative publishing, but this could also be done by others, such as the successive editors of the unpublished manuscripts of the German botanist Paul Hermann (1646-1695), who became his co-authors in the process.
The fourth presenter investigates how biological classification can sometimes rely on messy metaphysics. Focusing on the lichen symbiont, they explore what grounds we might have for relying on overlapping and conflicting ontologies. Lichens have long been studied and defined as two-part systems composed of a fungus (mycobiont) and a photosynthetic partner (photobiont). This bipartite metaphysics underpins classificatory practices and sets the criteria for stability, which rely on the fungus to name lichens even though some lichens are composed of three or more parts. The presenter investigates how reliable taxonomic information can be gleaned from a metaphysics that makes it problematic even to count biological individuals or track lineages.
Jutta Schickore (Indiana University Bloomington, United States)
Scientists’ reflections on messy science
ABSTRACT. It has become a commonplace in recent discussions about scientific practice to point out that science is “messy”, so messy in fact that philosophical concepts and arguments are not very useful for the analysis of science. However, the claim that science is messy is rarely spelled out. What does it mean for science to be messy? Does it mean that scientific concepts are intricate or confused? Or that procedures are dirty and often unreliable? Or that methodological criteria are often quite sloppily applied? Is it really such a novel insight that actual science is messy?
Moreover, it is not entirely clear what such messiness means for philosophy and philosophy’s role for understanding or improving science. Philosophical analysis aims at clarifying concepts and arguments, at making distinctions, and at deriving insights that transcend the particulars of concrete situations. Should we be worried that philosophical concepts and arguments have become too far removed from actual scientific practice to capture how science really works or to provide any guidance to scientists? In my presentation, I want to address these sets of questions in an indirect way. I shift the focus from analyzing scientific concepts, methods, and practices to analyzing scientists’ reflections on scientific practice. In doing so, I carve out a new niche for philosophical thinking about science.
I begin with a brief survey of recent philosophical debates about scientific practice. I want to clarify what it is that analysts of science have in mind when they are referring to the “messiness” of research practices, and also what, in their view, the messiness entails for philosophical analysis. In the main part of my talk, I examine what scientists themselves have said about messy science. Do they acknowledge that science is messy? If so, what aspects of scientific research do they highlight? Do they see the messiness of science as a problem for its functioning, and if so, why? To answer these questions, I draw on a diverse set of materials – among other things, methods sections in experimental reports, articles and editorials in general science journals, as well as interviews with scientists.
Analyzing scientists’ own conceptualizations of scientific research practice proves illuminating in a number of ways. We will see that today, scientists themselves are often reluctant to admit that science is messy – much more reluctant than they were a century or two ago. We will also see that it matters – and why it matters – whether scientists themselves are right or wrong about how science really works. In conclusion, I want to suggest that examining scientists’ reflections about “messy science” can fulfill two complementary purposes. On the one hand, such an analysis helps to clarify in what ways science can be considered “messy” and thus improves philosophical understanding of everyday research practice. On the other hand, this analysis points to specific pragmatic challenges in current research that philosophers of science can help address.
Jordi Cat (Indiana University Bloomington, United States)
Blur science through blurred images. What the diversity of fuzzy pictures can do for epistemic, methodological and clinical goals
ABSTRACT. Different kinds of images include hand-drawings, analogical and digital photographs, and computer visualizations. I show how, historically, these have been introduced in an ongoing project of simulating blurred vision that begins with so-called artificial models, artificial visual aberrations and photographic simulations, and experiments. Computer simulations followed suit, each with its own specific conditions.
I show how the different kinds of pictures (like the roles and goals they serve) do not always arise to replace others, but instead develop different relations to one another and introduce new uses. In the new pictorial regime, research and clinical practice rely on a combination of drawings, different kinds of photographs, and computer visualizations. I show how the simulations and the pictures play a number of roles: providing illustration and classification, prediction, potential explanations (a deeper level of classification), exploration, testing, evidence for or against explanatory hypotheses, evidence for or against the effectiveness of research tests and techniques, evidence for or against the reliability of diagnostic tests and the effectiveness of corrective treatments, and tracking the evolution of conditions and treatments.
Fuzziness or blur in images deserves critical attention as a subject and resource in scientific research practices and clinical interventions. I discuss how the project of engaging blur in vision optics is embedded in a constellation of different mathematical and pictorial tools with different standards and purposes—both investigative and clinical—which are often inseparable. An expression of this is the variety of kinds of pictures of blurred vision, many of which do appear blurred, and their different and shifting roles and uses. Their use runs against the commitment to sharpness as an ideal of, for instance, scientific representation, reasoning, and decision making.
My analysis contradicts and supplements a number of other accounts of the significance of images in terms of their content and use. A central issue I focus on is how the central interest in the phenomenon of blur in visual experience prompts pervasive and endemic considerations of subjectivity and objectivity. Different relations and tensions between standards of subjectivity and objectivity play a key role in the evolution of research and clinical intervention. This aspect finds expression in the interpretation, production, and use of pictures.
Over the past few years, the causal Bayes net framework, developed by Spirtes et al. (2000) and Pearl (2000) and given philosophical expression in Woodward (2004), has been successfully spun off into the sciences. From medicine to neuro- and climate science, there is a resurgence of interest in the methods of causal discovery. The framework offers a perspicuous representation of causal relations, and enables the development of methods for inferring causal relations from observational data. These methods are reliable so long as one accepts background assumptions about how underlying causal structure is expressed in observational data. The exact nature and justification of these background assumptions has been a matter of debate from the outset. For example, the causal Markov condition is widely seen as more than a convenient assumption: rather, it is seen as encapsulating something essential about causation. In contrast, the causal faithfulness assumption is seen as more akin to a simplicity assumption, saying roughly that the causal world is, in a sense, not too complex. There are other assumptions that have been treated as annoying necessities to get methods of causal discovery off the ground, such as the causal sufficiency assumption (which says roughly that every common cause is measured) and the acyclicity assumption (which implies, for example, that there is no case in which X causes Y, Y causes Z, and Z causes X, forming a cycle). Each of these assumptions has been subject to analysis, and methods have been developed to enable causal discovery even when these assumptions are not satisfied. But controversies remain, and we are confronted with some long-standing questions: What exactly is the nature of each of those assumptions? Can any of those assumptions be justified? If so, which? How do the question of justification and the question of nature relate to each other?
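To see why the faithfulness assumption is substantive rather than innocuous, consider a toy simulation (an illustration of ours, not drawn from the symposium contributions): in a linear Gaussian model with structure X -> Y -> Z plus a direct edge X -> Z, the direct and indirect effects of X on Z can be tuned to cancel exactly, so that the data exhibit an independence that does not reflect the causal structure.

```python
# Toy sketch of a faithfulness violation (illustrative only).
# DAG: X -> Y -> Z and X -> Z, with the direct effect of X on Z chosen
# to cancel the indirect effect through Y, so X and Z come out
# statistically independent even though X is a cause of Z.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)        # X -> Y with coefficient 2
z = y - 2.0 * x + rng.normal(size=n)    # Y -> Z (coeff 1) and X -> Z (coeff -2)

print(round(float(np.corrcoef(x, y)[0, 1]), 3))  # about 0.89: clear dependence
print(round(float(np.corrcoef(x, z)[0, 1]), 3))  # about 0.0: an "unfaithful" independence
```

A constraint-based discovery method that reads independencies off such data would wrongly conclude that X is not a cause of Z; faithfulness is, roughly, the assumption that such exact cancellations do not occur.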
This symposium aims to address those questions. It brings together a group of researchers all trained in the causal Bayes nets framework, but who have each taken different routes to exploring how we can address the connection between the underlying causal system and the observational data that we use as a basis to infer something about that system. In particular, we will discuss a variety of different approaches that go beyond the traditional causal Bayes net framework, such as the discovery of dynamical systems, and the connection between causal and constitutive relations. While the approaches are largely driven by methodological considerations, we expect these contributions to have implications for several other philosophical debates in the foundations of epistemology, the metaphysics of causation, and natural kinds.
Hanti Lin (University of California, Davis, United States)
Convergence to the Causal Truth and Our Death in the Long Run
ABSTRACT. Learning methods are usually justified in statistics and machine learning by pointing to some of their properties, including (but not limited to) convergence properties: having outputs that converge to the truth as the evidential inputs accumulate indefinitely. But there has long been the Keynesian worry: we are all dead in the long run, so who cares about convergence? This paper sharpens the Keynesian worry and replies to it.
The Keynesian worry challenges the epistemic value of convergence properties. It observes that a guarantee of obtaining the truth (with a high chance) in the long run does not seem to be of epistemic value, because the long run might be too long and we might not live long enough to actually believe in the truth. Worse, some empirical problems pursued in science are very hard, so much so that there is no learning method that is guaranteed to help us obtain the truth---even if we are immortal. Many problems about learning causal structures, for example, are that hard. This is the Keynesian worry on causal steroids. (Reichenbach almost anticipates such hard problems [1], but his example does not really work, or so I argue.) The standard reply guarantees eventual convergence by assuming the Causal Faithfulness Condition [2]. But this amounts to simply assuming away the skeptical scenarios that prevent our belief in the truth.
I defend the epistemic value of various modes of convergence to the truth, with a new reply to the Keynesian worry. Those modes of convergence are epistemically valuable *not* for a consequentialist reason---i.e. not because they provide us with any guarantee of such an epistemically good outcome as our actually believing the truth. There is simply *no* such guarantee. The epistemic significance of convergence lies elsewhere.
I argue that modes of convergence to the truth are epistemically valuable for a *non-consequentialist* reason. A good learning method must be one that responds appropriately to evidence, letting evidence play an important role: the role of evidence as a reliable indicator of truth, possibly not perfectly reliable, but becoming reliable in *progressively* more states of the world if the amount of evidence were to increase---all done by making progress in the *best* possible way. This is a role that evidence should play; our longevity plays no role in this picture. And I argue that, thanks to a new theorem, evidence plays that important role only if it serves as input into a learning method that has the right convergence property. In the context of causal discovery, the right convergence property is provably so-called almost everywhere convergence [2,3] plus locally uniform convergence [4].
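In rough outline (a gloss of ours, simplifying the definitions in the cited works), almost everywhere convergence can be put as follows: for every causal structure $G$ with parameter space $\Theta_G$,
$$\mu\big(\{\theta \in \Theta_G : M \text{ fails to converge to the truth under } (G,\theta)\}\big) = 0,$$
so the skeptical parameterizations are not assumed away but shown to form a Lebesgue-null set; locally uniform convergence adds a further uniformity requirement, spelled out in [4].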
1. Reichenbach, H. (1938) Experience and Prediction, University of Chicago Press.
2. Spirtes, P., Glymour, C. and Scheines, R. (1993) Causation, Prediction, and Search, the MIT Press.
3. Meek, C. (1995) “Strong Completeness and Faithfulness in Bayesian Networks”, Proceedings of the 11th UAI, 411-418.
4. Lin, H. (forthcoming) “The Hard Problem of Theory Choice: A Case Study on Causal Inference and Its Faithfulness Assumption”, Philosophy of Science.
Jiji Zhang (Lingnan University, Hong Kong) Kun Zhang (Carnegie Mellon University, United States)
Causal Minimality in the Boolean Approach to Causal Inference
ABSTRACT. In J.L. Mackie’s (1974) influential account of causal regularities, a causal regularity for an effect factor E is a statement expressing that condition C is sufficient and necessary for (the presence or instantiation of) E (relative to a background or causal field), where C is in general a complex Boolean formula involving a number of factors. Without loss of generality, we can put C in disjunctive normal form, a disjunction of conjunctions whose conjuncts express presence or absence of factors. Since C is supposed to be sufficient and necessary for E, each conjunction therein expresses a sufficient condition. Mackie’s requirement is that such a sufficient condition should be minimal, in the sense that no conjunction of a proper subset of the conjuncts is sufficient for E. If this requirement is met, then every (positive or negative) factor that appears in the formula is (at least) an INUS condition: an Insufficient but Non-redundant part of an Unnecessary but Sufficient condition for E.
Mackie’s minimality or non-redundancy requirement has been criticized as too weak (Baumgartner 2008), and a stronger criterion is adopted in some Boolean methods for causal inference, which have found interesting applications in social science (e.g., Ragin and Alexandrovna Sedziaka 2013; Baumgartner and Epple 2014). In addition to minimization of sufficient conditions, the stronger criterion requires that the disjunctive normal form that expresses a necessary condition should be minimally necessary, in the sense that no disjunction of a proper subset of the disjuncts is necessary for the effect.
In this talk we identify another criterion of non-redundancy in this setting, which is a counterpart to the causal minimality condition in the framework of causal Bayes nets (Spirtes et al. 1993; Pearl 2000). We show that this criterion is in general even stronger than the two mentioned above. Moreover, we argue that (1) the reasons for strengthening Mackie’s criterion of non-redundancy support moving all the way to the criterion we identified, and (2) an argument in the literature against the causal minimality condition for causal systems with determinism also challenges Mackie’s criterion of non-redundancy, and an uncompromising response to the argument requires embracing the stronger criterion we identified. Taken together, (1) and (2) suggest that the Boolean approach to causal inference should either abandon its minimality constraint on causal regularities or embrace a stronger one.
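As a toy illustration of the weakest of these criteria, the following sketch (ours, not the authors' formalism; it implements only Mackie-style minimization of sufficient conditions, not the stronger requirements discussed above) tests whether a conjunction of factor values is minimally sufficient for an effect E relative to a table of observed configurations.

```python
# Sketch of Mackie-style non-redundancy: a conjunction of factor values is
# minimally sufficient for E if it is sufficient and no nonempty proper
# sub-conjunction is already sufficient. Illustrative only.
from itertools import combinations

def sufficient(conjunct, rows):
    """conjunct: {factor: value}; rows: list of (factor_dict, effect).
    True if E holds in every observed row matching the conjunction
    (vacuously true if no row matches)."""
    return all(effect for factors, effect in rows
               if all(factors[f] == v for f, v in conjunct.items()))

def minimally_sufficient(conjunct, rows):
    if not sufficient(conjunct, rows):
        return False
    return not any(sufficient(dict(sub), rows)
                   for r in range(1, len(conjunct))
                   for sub in combinations(conjunct.items(), r))

# Toy data: E occurs exactly when A is present, regardless of B.
rows = [({"A": 1, "B": 1}, 1), ({"A": 1, "B": 0}, 1),
        ({"A": 0, "B": 1}, 0), ({"A": 0, "B": 0}, 0)]
print(minimally_sufficient({"A": 1, "B": 1}, rows))  # False: B is redundant
print(minimally_sufficient({"A": 1}, rows))          # True
```

The stronger criteria at issue in the talk then impose further non-redundancy requirements at the level of the disjunction of such minimal conjunctions, and beyond.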
References:
Baumgartner, M. 2008. “Regularity theories reassessed.” Philosophia 36: 327-354.
Baumgartner, M., and Epple, R. 2014. “A Coincidence Analysis of a Causal Chain: The Swiss Minaret Vote.” Sociological Methods & Research 43: 280-312.
Mackie, J. L. 1974. The Cement of the Universe: A Study of Causation. Oxford: Clarendon Press.
Pearl, J. 2000. Causality: Models, Reasoning, and Inference. Cambridge, UK: Cambridge University Press.
Ragin, C. C., and Alexandrovna Sedziaka, A. 2013. “QCA and Fuzzy Set Applications to Social Movement Research.” The Wiley-Blackwell Encyclopedia of Social and Political Movements, doi: 10.1002/9780470674871.wbespm482.
Spirtes, P., Glymour, C., and Scheines, R. 1993. Causation, Prediction, and Search. New York: Springer-Verlag.
Adolf Grünbaum, former president of DLMPST and IUHPS, had an extraordinary impact on philosophy of science in the 20th century. He died on November 15, 2018, at the age of 95. This symposium honors Grünbaum by considering ideas he addressed in his work, spanning the philosophy of physics, the logic of scientific reasoning, Freud and psychiatry’s status as a science, and religion.
ABSTRACT. Adolf Grünbaum was a great character and a great presence. Anyone who knew him well had stories about him. I will tell some, and recount his arguments against Natural Religion.
Brian Skyrms (University of California, Irvine, United States)
Adolf Grünbaum on "Zeno's Metrical Paradox of Extension"
ABSTRACT. Adolf Grünbaum explained to "modern eleatics" in Philosophy how measure theory answers Zeno’s "metrical Paradox". I will present a reconstruction of this paradox, ancient responses, and Grünbaum’s analysis. As counterpoint, I argue that Zeno invented a fundamental argument for non-measurable sets and that responses of Democritus and Aristotle remain viable.
A Symposium at CLMPST XVI coordinated by DLMPST/IUHPST and the Gender Gap Project
The project "A Global Approach to the Gender Gap in Mathematical, Computing, and Natural Sciences: How to Measure It, How to Reduce It?" is an international and interdisciplinary effort to better understand the manifestation of the Gender Gap in the named scientific fields, and to help overcome barriers for women in their education and career. The collaboration between eleven partners including various scientific unions allows for a comprehensive consideration of gender-related effects in these fields, yielding the opportunity to elaborate common grounds as well as discipline-specific differences.
Currently, existing data on participation of women in the mathematical and natural sciences is scattered, outdated, and inconsistent across regions and research fields. The project approaches this issue mainly from two different perspectives. Through a survey, scientists and mathematicians worldwide have the opportunity to confidentially share information about their own experiences and views on various aspects of education and work in their disciplines and countries. At the same time, we statistically analyze large data collections on scientific publications in order to understand effects of gender and geographical region on publication and collaboration practices. Moreover, the project aims to provide easy access to materials proven to be useful in encouraging girls and young women to study and pursue education in mathematics and natural sciences.
In this symposium, methods and findings of the Gender Gap project will be presented by Helena Mihaljevic, connected to and contrasted with similar issues in philosophy of science. After three presentations, there will be a panel discussion.
How science loses by failing to address the gender (and other) gaps
ABSTRACT. We standardly think of the gender gap (and other participation gaps) as a harm to those groups not fully participating in the sciences. I want to argue that the sciences are also harmed by failures to be more fully inclusive.
What can publication records tell about the gender gap in STEM?
ABSTRACT. A solid publication record is a key factor for a successful academic career. Various studies have revealed a systemic gender imbalance in the publication distribution of scientists in various fields. In the interdisciplinary project "A Global Approach to the Gender Gap in Mathematical, Computing, and Natural Sciences: How to Measure It, How to Reduce It?" we use various large data sources to study publication patterns, in particular in mathematics, physics, and astronomy, with respect to gender and across countries and regions. In this talk we will present first results of the project and discuss what the differences in publication behaviour can tell us about the current state and possible future developments of the gender gap in the respective fields.
Franz Dietrich (Paris School of Economics & CNRS, France)
Beyond Belief: Logic in Multiple Attitudes (joint work with A. Staras and R. Sugden)
ABSTRACT. Logical models of the mind focus on our beliefs, and how we reason in beliefs. But we also have desires, intentions, preferences, and other attitudes, and arguably we reason in them, particularly when making decisions. Taking a step towards logic in multiple attitudes, we generalize three classic logical desiderata on beliefs - consistency, completeness, and implication-closedness - towards multiple attitudes.
Our three "logical" desiderata on our attitudes - hence on our psychology - stand in interesting contrast with standard "rationality requirements" on attitudes, such as the requirement of having transitive preferences, non-contradictory beliefs, non-acratic intentions, intentions consistent with preferences, and so on. In a theorem, we show a systematic connection between our logical desiderata and rationality requirements: each logical desideratum on attitudes (i.e., consistency, completeness, or implication-closednes) is equivalent to the satisfaction of a certain class of rationality requirements. Loosely speaking, this result connects
logic with rational choice theory. This has important implications for whether reasoning in multiple attitudes can help us become consistent, complete, or implication-closed in our attitudes.
ABSTRACT. The study of our innate numerical capacities has become an active area of recent cognitive research. Given that these capacities appear to be very limited, it is widely assumed that the use of external aids—such as numeral systems—expands our natural ability to reason about numbers. In fact, people have identified arithmetic as an important case of the use of external aids in thinking, but the question of how these 'thinking tools' acquire numerical content remains unsettled. After all, written numerals (say) are material inscriptions that—under some suitable interpretation—could stand for anything whatsoever. What constrains the range of available interpretations so that these otherwise meaningless symbols can achieve their representational aims?
Extant accounts either pull the relevant content out of thin air or make it parasitic on some antecedently available interpretation. On pain of circularity or regress, we have to explain how numerals come to represent what they do without relying on some prior—and mysterious—grasp of their intended interpretation.
I will start with the recognition that numeral symbols, in and of themselves, do not represent anything at all. In isolation, they are representationally idle. It is only by being embedded in broader systems of representation that these symbols acquire numerical content. Numeral systems, I suggest, have distinctive features that relate individual symbols to one another and thereby constrain their representational content.
This, however, still doesn’t uniquely determine the system’s representational target. Our familiar decimal base system, for instance, can stand for linear sequences but it can also stand for circular ones, depending on the case at hand. Thus, I will further argue that systems of numerical representation, in turn, need to be grounded in specific cultural practices, which govern their use and are carried out by agents naturally equipped to exploit some of their distinctive (structural) features.
I will illustrate these claims by means of a case study involving different numeral systems (such as tallies and positional systems) and the practices in which they are deployed (most notably, counting and calculation).
Oscar Perez (Universidad Nacional de Colombia, Colombia)
Paths of abstraction: between ontology and epistemology in mathematical practice. Zilber’s trichotomy through the lens of Lautman and Cavaillès
ABSTRACT. Boris Zilber’s famous trichotomy conjecture was stated around 1983 and was the result of several earlier results and observations. Its original formulation had the intention of providing a classification of an important class of “natural” structures in mathematics, namely strongly minimal structures. These correspond to a model-theoretic formulation close to categoricity but with very specific classification tools, and include important classical structures such as algebraically closed fields and vector spaces — two whole areas of mathematical practice are roughly anchored there (classical algebraic geometry and, in a general sense, all “linear” structures). The original statement was shown to be false in 1990 by Ehud Hrushovski: he built a strongly minimal structure that does not fit the original trichotomy classification as originally envisioned by Zilber. Later, Zilber and Hrushovski proved a version of the trichotomy for a reduced class of structures, namely, “Zariski geometries”.
In their proof, when they establish the trichotomy, one of the three classes requires building bi-interpretations (basically, if a Zariski geometry is not modular then the corresponding model is bi-interpretable with some field). This works as intended, up to a finite cover. The situation could have been more direct, not involving any such finite cover, but the final result provides bi-interpretability up to a “finite-to-one map”. This state of affairs is taken further afield by Zilber in later work (Non Commutative Geometry, 2005, and Applications to the Model Theory of Physics, 2006).
This raises various questions that we consider worth studying in the light of the philosophy of mathematical practice. For instance: why Zilber’s insistence on looking further for (what he calls) “natural” (or even “a priori”) examples coming from classical mathematics and corresponding to these finite covers? (The answer is still far from complete from the purely mathematical perspective, but several structures stemming from mathematical physics are now considered very natural candidates.) And why is there an apparent difference between Zilber’s insistence and Hrushovski’s more “pragmatic” attitude? Is this a mere psychological difference, or is there a deeper philosophical reason? We argue the latter in our presentation: Zilber and Hrushovski follow different kinds of generalization, different types of abstraction. We propose a case analysis of these types of abstraction.
We propose a distinction between what we may call ontological abstraction and epistemological abstraction. The main difference, in philosophical terms, centers on the former’s emphasis on what structures are and where they “live”, versus the latter’s emphasis on how we get to know mathematical structures. Naturally, we do not claim that these two types of abstraction reside in an absolute sense in the mind of a specific mathematician; we do not even claim that the two forms of abstraction are mutually exclusive. But we do observe a certain prevalence of one form over the other in specific cases of mathematical abstraction, illustrated by the story told above.
In mathematical practice, one type of abstraction is able to capture relationships between mathematical objects that the other type of abstraction is not (necessarily) able to capture. In our case study, Zilber appears to have a motivation of a stronger ontological character — and this enabled him to visualize the novelty of the solution beyond the technical solution of the original trichotomy, whereas the more epistemological character of Hrushovski’s kind of abstraction enabled him to see the original failure of the trichotomy. In the end, the tension between these two types of abstraction results in an iterated process where further abstraction is “fueled” by the alternation of two contending modes. Zilber’s “stubbornness” and Hrushovski’s “pragmatism” are a more contemporary version of epistemological abstraction (stemming from Bourbaki); however, in many ways Zilber’s insistence (stubbornness?) conceals a kind of abstraction that we perceive as quite novel compared with the predominant forms of abstraction of the past century.
ABSTRACT. The geometrization of logic, undertaken by Wittgenstein in the “Tractatus Logico-Philosophicus”, is related to the spatial representation of the complex. The form of the fact is the formula of the complex “aRb”, where “a” is one event, “b” is another one, and “R” is the relationship between them. The geometric form of the complex will be “a-b” or “a ~ b”, where the relation is expressed as a “straight line” or a “curve”.
The visualization of life in the form of geometric figures allows one to comprehend the past in a new way, making prediction part of the visual image itself, part of the calculation. The new concept of the picture is a consequence of a new understanding of the experience of life. This is the essence of astrological predictions.
Step-by-step analysis of the creation of an astrological picture:
(1) Initially, based on the calculations, two pictures are created.
(2) One picture combines two observations: the location of the planets at the time of birth and the current location of the planets. We combine two homogeneous, but chronologically different, experiences.
(3) The third step is structuring life experience. We look for the starting point of the geometric projection and begin to build the figure of experience.
(4) We present the result to the inquirer and ask him to recall the events of a certain date. He creates a picture of his past. We attempt to include the content of his experience in our picture of the events.
(5) Now, making predictions based on the picture itself (and this is computation, since all the elements of the picture are connected by rigid connections), we operate on the experience of the questioner.
(6) Astronomical calculations of the points and movements of the planets become the rule and bring an intuitive perception of the experience of life. “Experience” is now not only “what was”, but also “what should have been”. The prediction has become part of the image itself. Mathematical expectation is the basis of psychological persuasion. According to Wittgenstein, “the cogency of logical proof stands and falls with its geometrical cogency” [1].
Of particular interest are the problem of differences in conclusions and predictions and, ultimately, the transformation of thinking itself in the process of creating a geometrical concept. Astrology, through the ideogrammatic form of its writing, turned out to be related to geometry and mathematics; it became a tool for prediction.
The archaic origins of astrological representations based on the picture can be found in the most ancient languages, for example in Sumerian, where words expressing notions of fortune and fate derive from a root meaning “drawing” and “mapping” [2]. The hieroglyph denoting the Egyptian god of magic and astrology, Thoth, corresponded to the Coptic transcription Dhwty-Deguti. One of the options for translating this name is the word “artist” (W. Goodwin).
REFERENCES
1. Wittgenstein, L. (2001). Remarks on the Foundations of Mathematics. Oxford: Basil Blackwell, p. 174(43).
2. Klochkov, I.S. (1983). Spiritual Culture of Babylonia: Man, Destiny, Time [Клочков И.С. Духовная культура Вавилонии: человек, судьба, время]. Moscow: Nauka.
Vera Matarese (Institute of Philosophy, Czech Academy of Sciences, Czechia)
Super-Humeanism: a naturalized metaphysical theory?
ABSTRACT. In order to know what reality is like, we can turn either to our best physical theories or to our best metaphysical ones. However, these alternatives are not exclusive, since we can always glean knowledge about our physical reality from naturalized metaphysical theories, which are metaphysical theories grounded in physics’ empirical investigations. But how can we evaluate whether a metaphysical theory is sufficiently naturalistic? This question occupies a special role in debates in the metaphysics of science, especially after Ladyman and Ross’ vigorous defence of naturalized metaphysics (Ladyman and Ross 2007). My talk will ask whether Super-Humeanism, a doctrine that posits only permanent matter points and distance relations in the fundamental ontology, is a naturalized metaphysical theory. While its proponents claim so (Esfeld and Deckert 2018), on the grounds that all the empirical evidence ultimately reduces to relative particle positions and their change, Wilson (2018) argues that it is an aprioristic and insufficiently naturalistic theory. On the one hand, Esfeld and Deckert (2018) have already successfully shown that Super-Humeanism is compatible with our most successful physical theories. On the other hand, demonstrating such compatibility is not enough to reach the conclusion that Super-Humeanism is a naturalized metaphysical theory, especially since its advocated ontology diverges from a standard reading of those theories. In my talk, I will show that this debate is difficult to settle if we appeal to general methodological principles. Indeed, the dependence of naturalized metaphysics on science can be spelled out in very different ways. For instance, while Ney (2012) evokes a neo-positivist attitude, Esfeld and Deckert (2018) openly reject this approach; similarly, while Allen (2012) presupposes that naturalized metaphysics shares the same methodology with scientific disciplines and Ladyman and Ross (2007) regard it as driven by science, Morganti and Tahko (2017) argue for a moderately naturalistic metaphysics which is only indirectly connected to scientific practice. Given the difficulty of settling the debate by appealing to general principles, in order to evaluate whether Super-Humeanism is a naturalized metaphysical theory I will examine specific cases which show how Super-Humeanism is implemented in physical theories. One of them will touch on a central notion of Super-Humeanism, the impenetrability of particles; in particular, I will discuss whether the commitment to impenetrable particles can be justified ‘naturalistically’ within a Super-Humean framework, which rejects any natural necessity in the fundamental ontology.
Reference list:
Allen, S. R. (2012). What matters in (naturalized) metaphysics?. Essays in Philosophy, 13(1), 13.
Esfeld, M., & Deckert, D. A. (2017). A minimalist ontology of the natural world. Routledge.
Ladyman, J., Ross, D., Spurrett, D., & Collier, J. (2007). Every Thing Must Go: Metaphysics Naturalized. Oxford: Oxford University Press.
Morganti, M., & Tahko, T. E. (2017). Moderately naturalistic metaphysics. Synthese, 194(7), 2557-2580.
Ney, A. (2012). Neo-positivist metaphysics. Philosophical Studies, 160(1), 53-78.
Wilson, A. (2018). Super-Humeanism: insufficiently naturalistic and insufficiently explanatory. Metascience, 1-5.
ABSTRACT. The structure of any physical theory includes representations of metaphysical essences. Examples of such essences are absolute space and absolute time in Newton’s mechanics, the perpetuum mobile in thermodynamics, the absolute standards in Hermann Weyl’s theory of gauge fields, the velocity of light in Einstein’s special theory of relativity, etc. These representations cannot be interpreted as real physical objects. At the same time they are necessary for the interpretation of real physical phenomena.
In my report I analyze details of Hermann Weyl’s theory of gauge fields, which is used widely in modern theories of elementary particles. The idea of gauge transformations makes it possible to give a geometrical description of physical forces. The concept of gauge invariance introduced by Hermann Weyl assumes the existence, in the gauge space, of ideal standards of measurement. Weyl’s ideal standards are absurd things: they cannot be defined as real physical objects. At the same time, without assuming the ontological status of a set of ideal standards identical to each other, it is impossible to speak of gauge transformations of real physical quantities. Ideal standards give physical sense to gauge transformations and allow one to establish the gauge invariance of physical laws. Thus, we can speak about ideal standards only as super-physical (or metaphysical) essences.
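For orientation, the familiar electromagnetic case (a standard textbook illustration, not Weyl's original scale gauge) shows the kind of transformation at issue: the theory is formulated so that
$$A_\mu \;\to\; A_\mu + \partial_\mu \chi(x), \qquad \psi \;\to\; e^{iq\chi(x)}\,\psi$$
leaves the field strength $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$ and the coupled dynamics unchanged. The question raised here concerns what, ontologically, underwrites the standards against which such local rephasings or rescalings are defined.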
I suggest interpreting these metaphysical essences as transcendental. Kant gives a dual treatment of the transcendental: according to Kant, the transcendental is both a deep force within the subject, hidden from him, and a certain unconditional existence outside the subject, determining his individual conscious acts. Research into the nature of these transcendental existences, which can be defined neither as something subjective nor as something objective, is very promising.
In my account, a transcendental existence is the product of the invention of a symbolic object which corresponds to a logically “empty” concept and is characterized by the coincidence of sign and meaning. Such transcendental existences are an important metaphysical component of any fundamental physical theory.
Linton Wang (Department of Philosophy, National Chung Cheng University, Taiwan) Ming-Yuan Hsiao (Department of Philosophy, Soochow University, Taiwan) Jhih-Hao Jhang (Department of Philosophy, National Chung Cheng University, Taiwan)
ABSTRACT. Popper's falsifiability criterion requires sciences to generate falsifiable predictions, and failing the criterion is taken as a vice of a theory. Kitcher (1982) rejects the criterion by arguing that there is no predictive failure to be derived from scientific theories in Popper's sense, and thus Popper-style falsification of a scientific theory cannot be fulfilled. We aim at reconstructing Kitcher's argument on the unfalsifiability of a scientific theory based on the defeasibility of scientific inferences, and further indicate how the unfalsifiability can aid scientific discovery. The reconstruction shows that the unfalsifiability of a scientific theory is a virtue rather than a vice of the theory.
Our main argument proceeds as follows. First, we reorganize Kitcher’s (1982: 42) argument for the unfalsifiability of scientific theories, and indicate that his main argument is based on the claim that no theory can logically derive a conditional prediction of the form “if P then Q”, since any such conditional is incompatible with the scientific practice whereby, in case of P but not Q, we can always appeal to some extra intervening factor (e.g. some force other than those under consideration) to explain why it is the case that P and not Q.
The second step is to indicate that Kitcher’s argument is unsatisfactory, since a conditional of the form “if P then Q” is incompatible with the practice of appealing to extra intervening factors only if the conditional supports the inference pattern modus ponens (as the material implication and the counterfactual conditional do), according to which Q logically follows from “if P then Q” together with P. But the literature has shown that not all sensible conditionals enjoy modus ponens. Furthermore, Kitcher’s argument is puzzling in two respects: it makes it unclear how a theory can be used to generate a prediction, and it is also unclear how the appeal to extra intervening factors works in scientific practice.
Finally, to respond to the two puzzles while retaining the main objective of Kitcher’s argument, we propose that the conditional prediction that follows from a theory is the defeasible conditional “if P then defeasibly Q”, from which, given P, Q follows only defeasibly and is not logically entailed. This proposal is in turn shown to be compatible with the practice of appealing to extra intervening factors. We also formalize the practice of appealing to intervening factors by a defeasible inference pattern we call the inference from hidden factors, which represents a way in which we can learn from mistaken predictions without falsifying scientific theories. Our proposal is defended by indicating that the defeasible inference patterns can capture good features of scientific reasoning, such as the fact that we generate predictions from a theory based on our partial ignorance of all the relevant facts, and the way we may change our predictions as evidence accumulates.
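A minimal sketch of the intended reading (our illustration, not the authors' formal system): a defeasible conditional licenses the prediction Q from P only in the absence of recognized intervening factors, so withdrawing the prediction when such a factor turns up does not amount to falsifying the theory.

```python
# Sketch of "if P then defeasibly Q": conclude Q from P unless a recognized
# intervening (hidden) factor defeats the inference. Illustrative only.
def defeasibly_predict(p, defeaters):
    """Return True if Q is predicted, None if the prediction is withheld
    (P fails) or withdrawn (an intervening factor is present)."""
    if not p:
        return None                      # the conditional is silent when P fails
    if any(defeaters):
        return None                      # inference from hidden factors: suspend Q
    return True                          # default case: P defeasibly yields Q

print(defeasibly_predict(True, defeaters=[]))                    # True
print(defeasibly_predict(True, defeaters=["unmodelled force"]))  # None
```

On this picture, a failed prediction prompts a search for the hidden factor rather than a Popper-style rejection of the theory.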
References: Kitcher, P. (1982), Abusing Science: The Case against Creationism, MIT Press.
Klodian Coko (University of Western Ontario, Canada)
Robustness, Invariance, and Multiple Determination
ABSTRACT. Multiple determination is the epistemic strategy of using multiple independent empirical procedures to establish “the same” result. A classic example of multiple determination is Jean Perrin’s description of thirteen different procedures to determine Avogadro’s number (the number of molecules in a gram-mole of a substance), at the beginning of the twentieth century (Perrin 1909, 1913). In the contemporary literature in philosophy of science, multiple determination is considered to be a variant of robustness reasoning: 'experimental robustness', 'measurement robustness', or simply 'robustness', are the terms that are commonly used to refer to this strategy (Wimsatt 1981, 2007; Woodward 2006; Calcott 2011; Soler et al., eds. 2012).
In this paper, I argue that the strategy of using multiple independent procedures to establish “the same” result is not a variant of robustness. There are many variants of robustness strategies, but multiple determination is not one of them. I claim that treating the multiple determination strategy as a robustness variant mischaracterizes its structure and is not helpful for understanding its epistemic role and import in scientific research. I argue that there are many features that distinguish multiple determination from the many robustness variants. I present these features and argue that they are related to the same central difference: whereas all the robustness variants can be construed as involving some sort of invariance (of the robust result) to different types of perturbations, multiple determination cannot be so construed. The distinguishing feature of the multiple determination strategy is its ability to support a specific type of no-coincidence argument: namely, that it would be an improbable coincidence for multiple determination procedures, independently of one another, to establish “the same” result, and yet for the result to be incorrect or an artefact of the determination procedures. Under specific conditions, the no-coincidence argument from multiple determination - in addition to being used to argue for the validity of the result - can also be used to argue for the validity of the determination procedures. No such no-coincidence argument can be constructed from simple invariance to perturbations. Robustness is a set of epistemic strategies better suited for discovering causal relations and dependencies.
Finally, I claim that, besides the philosophical reasons, there are also historical reasons to keep multiple determination and robustness distinct. Multiple determination can be considered to be the historical descendant of William Whewell’s nineteenth-century notion of consilience of inductions (a form of hypothetico-deductive reasoning). On the other hand, robustness strategies can be considered to be the descendants of John S. Mill’s nineteenth-century methods of direct induction (a form of inductive reasoning).
REFERENCES
Calcott, Brett. 2011. “Wimsatt and the Robustness Family: Review of Wimsatt’s Re-engineering Philosophy for Limited Beings.” Biology and Philosophy 26:281-293.
Perrin, Jean. 1909. “Mouvement Brownien et Réalité Moléculaire.” Annales de Chimie et de Physique 18:1-114.
Perrin, Jean. 1913. Les Atomes. Paris: Librarie Félix Alcan.
Soler, Léna et al. eds. 2012. Characterizing the Robustness of Science: After the Practice Turn in Philosophy of Science. Boston Studies in the Philosophy of Science, Vol.292, Dordrecht: Springer.
Wimsatt, William. 1981. “Robustness, Reliability, and Overdetermination.” In Scientific Inquiry and the Social Sciences, ed. Marylin Brewer and Barry Collins, 124-163. San Francisco: Jossey-Bass.
Wimsatt, William. 2007. Re-engineering Philosophy for Limited Beings. Cambridge: Harvard University Press.
Woodward, James. 2006. “Some Varieties of Robustness.” Journal of Economic Methodology 13:219-240.
Structuralist Abstraction and Group-Theoretic Practice
ABSTRACT. Mathematical structuralism is a family of views holding that mathematics is primarily the study of structures. Besides genuine philosophical reasons to adopt a structuralist philosophy, the "structural turn" in late 19th- and early 20th-century mathematics itself can motivate the adoption of structuralist views. For example, the philosophical notion of (structuralist) "abstraction"—a transformation from the concrete/particular/intuitive to the abstract/universal/formal—is often said to be rooted in historical events, such as:
1) F. Klein’s Erlangen Program of 1872.
2) The axiomatic definition of groups of 1882.
3) The invention and use of category theory, starting from 1942.
The focus on these particular demonstrations of historical abstraction processes does not exclude, of course, the possibility of alternative interpretations of structuralist abstraction, in philosophy or the history of mathematics. The questions are therefore the following:
What other structuralist abstraction processes exist in the history of mathematics? What explanatory power can they bear on abstraction principles?
In its unrestricted form, the question allows for an enormous number of possible answers, but one does not have to depart from the original examples as much as it first seems: for each of the above examples, my suggestion is, there exists closely related historical evidence for rather different interpretations of the alleged "structuralist turn" in mathematics. Further, these new examples will all stem from the theory of groups, which will make comparison easier:
1*) F. Klein’s "Hypergalois Program": While Klein’s aim in the Erlangen Program was to organize several geometries under the unifying approach of algebra, the "Hypergalois Program" proposes to tackle genuine algebraic problems with geometric means and thus constitutes a forerunner of the theory of representations. This meets Klein’s own aspiration not to subsume geometry under algebra, but to combine the tools and methods of both theories.
2*) If the axiomatic definition of a group ignores the internal nature of its elements, why not climb further up the "ladder of abstraction" and ignore their existence altogether—focusing solely on sub- and super-relations of algebraic structures like groups? Such ideas can be traced back to E. Noether (and arguably to Dedekind), but only in 1935/1936 did O. Ore formalize such a "foundation of abstract algebra". His ideas marked the birth of the theory of lattices, which brought new insights into abstract algebra; the foundational claim, however, failed and was soon abandoned.
3*) A more successful unification of algebraic theories was laid down in category theory, a theory that was nevertheless invented for much more modest and particular goals. Here, the focus is on the question of how the newly emerging category theory was thought to provide means for the most urgent problems of group theory, namely those of homology and cohomology.
A tentative lesson drawn from these new examples could be not to regard structuralist abstraction as an imperative of modern mathematics that is applied for its own sake, but rather as an additional mathematical tool that (together with its counterpart, "instantiation") modern mathematicians are free to use to serve their particular purposes.
Expressive power and intensional operators
ABSTRACT. In his book Entities and Indices, M. J. Cresswell advocated the view that modal discourse in natural language requires a semantics whose power amounts to explicit quantification over possible worlds. His argument for this view had two parts. First, he argued that an operator-based intensional language can only reach the expressive power of English if it has a semantics with infinite world indices and employs infinite actually_n/Ref_n operators which store and retrieve such indices. (See Cresswell 1990, ch. 3, esp. pp. 40–46. Cresswell’s actually_n/Ref_n operators are modal correlates of the K_n/R_n operators introduced in the appendix of Vlach 1973 (pp. 183–185).)
Second, he gave a formal proof that an operator-based language of this kind is as powerful as a language of predicate logic with full quantification over worlds (see Cresswell 1990, ch. 4). Cresswell also suggested that possible worlds are semantically on a par with times and argued that temporal discourse in natural language has the power of full quantification over times.
In a recent paper, I. Yanovich (2015) argues that a first-order modal language equipped with standard modal operators and Cresswell’s actually_n/Ref_n operators—following Saarinen (1978), Yanovich calls the latter backwards-looking operators—is less powerful than an extensional first-order language with full quantification over worlds or times. Yanovich suggests that the widespread belief to the contrary among philosophers and linguists is the result of a misinterpretation of Cresswell’s formal proof. He observes that Cresswell’s proof assumes that the modal language with backwards-looking operators also employs an operator of universal modality ☐, which adds extra expressive power to the language.
One important drawback of Cresswell’s and Yanovich’s discussions is that they do not offer a precise definition of the notion of expressive power. Without such a definition, it is not entirely clear what criterion they are adopting in order to compare the power of different logical systems. In this paper, we provide a precise definition of expressive power—based on Ebbinghaus’s notion of logical strength (see Ebbinghaus 1985, sec. 3.1)—that can be applied to the formal languages discussed by Cresswell in Entities and Indices. Armed with this definition, we address the question of whether a modal language equipped with Cresswell’s actually_n/Ref_n operators and an operator of universal modality is as powerful as a first-order language with full quantification over worlds or times.
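One natural way to make the comparison precise, roughly in the spirit of Ebbinghaus's notion of logical strength (a sketch of ours; the definition defended in the paper may differ in detail), is:
$$\mathcal{L} \leq \mathcal{L}' \quad\text{iff}\quad \text{for every sentence } \varphi \text{ of } \mathcal{L} \text{ there is a sentence } \varphi' \text{ of } \mathcal{L}' \text{ with } \mathrm{Mod}(\varphi) = \mathrm{Mod}(\varphi').$$
Two languages are then equally expressive when each is at most as strong as the other; the delicate point is fixing a common class of models over which $\mathrm{Mod}$ is evaluated for modal and extensional languages alike.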
References
1. Cresswell, M. J.: Entities and Indices. Kluwer Academic Publishers, Dordrecht (1990)
2. Ebbinghaus, H.-D.: Extended Logics: The General Framework. In: Barwise, J., Feferman, S. (eds.) Model-Theoretic Logics. Springer-Verlag, New York, 25--76 (1985)
4. Saarinen, E.: Backward-Looking Operators in Tense Logic and in Natural Language. In: Saarinen, E. (ed.) Game-Theoretical Semantics: Essays on Semantics by Hintikka, Carlson, Peacocke, Rantala, and Saarinen. D. Reidel, Dordrecht, 215--244 (1978)
5. Vlach, F.: ‘Now’ and ‘Then’: A Formal Study in the Logic of Tense Anaphora. Ph.D. dissertation, UCLA (1973)
6. Yanovich, I.: Expressive Power of “Now” and “Then” Operators. Journal of Logic, Language and Information 24, 65--93 (2015)
Transcriptions of Gottlob Frege’s Logical Formulas into Boole’s Algebra and the Language of Modern Logic. Similarities and Differences
ABSTRACT. My aim is to rewrite some of Frege’s important formulas from his Begriffsschrift (1879) into other logical languages in order to compare them and stress some differences between the languages. I refer to natural-language expressions where they were given by Frege.
The languages into which I do the translation are the following:
1. G. Boole’s algebra language presented in Laws of Thought (1854).
2. E. Schröder’s logical language: “in a form modelled upon Leibnizian-Boole’s calculus” as Schröder wrote.
3. The language of modern logic.
As an introduction I make some historical remarks about the relationship between Frege and Boole and present a short description of logic understood as a calculus (Boole, Schröder) and of logic understood as a language (Frege).
I pose the following questions:
1. Which of the logical transcriptions is the closest to Frege’s verbal expression related to a particular formula?
2. Is there a really proper translation of Frege’s logical notation into Boole’s algebra, Schröder’s logic, or the language of modern logic?
3. Is Frege’s logic more understandable when it is expressed in Boole’s algebra, in Schröder’s logic or in the language of modern logic?
4. Isn’t there a risk that the very precise language, which was meant to ensure extreme formalisation of reasoning, can lead to esoteric knowledge for a small group of specialists, and consequently discourage potentially interested people?
Although Frege wrote two papers comparing Boole’s algebra with his own logic, Frege did not himself transform his formulas into Boole’s algebraic logic. Schröder was inspired by Boole’s algebra and invented his own algebra of logic. Schröder transformed some of Frege’s formulas into his language. I am trying to continue Schröder’s project by rewriting some of Frege’s important formulas into other logical languages.
Frege’s notation is two-dimensional, as opposed to Boole’s algebra, Schröder’s logical language, and the language of modern logic, which are linear notations. In Frege’s logic there is a very important distinction between assertion (that is, the judgement-stroke) and predication (that is, the content stroke). It is not possible to express the same in Boole’s algebra, in Schröder’s logical language, or in the language of modern logic. Frege’s phrase “it is judged as true” is replaced by the name “tautology” and called 1. There are different verbal descriptions by Frege of this law, as tautology and counter-tautology; however, they express the same thought in different ways. In the notation of modern logic, with an implication it is necessary to use brackets.
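For instance (an illustration of ours in modern Boolean-algebra notation, not one of the transcriptions examined in the talk), Frege's first Begriffsschrift axiom $a \to (b \to a)$ can be rendered algebraically by writing $x \to y$ as $\overline{x} + y$, giving
$$\overline{a} + (\overline{b} + a) = 1 \quad\text{for all } a, b \in \{0,1\},$$
which is the sense in which a law "judged as true" is replaced by an equation with 1, while Frege's judgement-stroke itself has no algebraic counterpart.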
Boole G., Laws of Thought (1854).
Frege G., Begriffsschrift (1879).
Frege G., Booles logische Formelsprache und meine Begriffsschrift [1882].
Frege G., Booles rechnende Logik und die Begriffsschrift [1880/1881].
Peckhaus V., Logik, Mathesis universalis und allgemeine Wissenschaft (1997).
Schröder E., Der Operationskreis des Logikkalkuls (1877).
Schröder E., Vorlesungen über die Algebra der Logik (1890).
Sluga H., Frege against the Booleans (1987).
Hilbert and the Quantum Leap from Modern Logic to Mathematical Logic
ABSTRACT. The leap to mathematical logic is demonstrated by tracking a paradox posed in the Prior Analytics. Aristotle poses three problematic valid arguments which do not fall under any of the valid syllogistic forms:
I. W = K. W is of G. Therefore, K is of G.
II. S is of Q. G = Q. Therefore, S is of G.
III. S = N. S is of G. Therefore, some N is of G. (PA, 48a40–48b24)
Syllogistic logic does not capture either ‘=’ or ‘of’.
Leibniz comes close to the solution by formulating these arguments as hypothetical syllogisms, but he cannot produce valid syllogisms as he cannot invert the major and minor premises. Lenzen reformulates Leibniz’s theorem 15 set-theoretically, so we get:
I. KW, WG; therefore KG.
II. SQ, QG; therefore SG.
III. NS, SG; therefore NG. (Lenzen, p. 15)
Boole suggests the shift from the logic of classes of syllogisms to the conditional logic of propositions and we can come close to seeing these as valid hypothetical syllogisms, yet Boole had not axiomatized propositional logic. Frege depicts these syllogisms as valid instances of theorems involving conditionals as well as quantifiers.
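For illustration (our rendering, not notation taken from the sources cited below), reading “X is of Y” as “every X is a Y”, syllogism I becomes a straightforwardly valid first-order inference:

\[
\forall x\,(W(x) \leftrightarrow K(x)),\quad \forall x\,(W(x) \rightarrow G(x)) \;\vdash\; \forall x\,(K(x) \rightarrow G(x)).
\]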
In Hilbert and Ackermann, Aristotle’s three syllogisms are first translated into mathematical logic by a schema for translating categorical propositions. Then, the arguments are put in Skolem Normal Form (SNF):
I. (Eg)(Ej)(Ek)(Ew)(Ex)(h)(u){[w(x)k(x)][w(x)k(x)][u(g)][j(h)]}
II. (Ef)(Eg)(Eq)(Er)(Et)(h)(s){[s(q)][f(g)f(r)][f(g)f(r)][t(h)]}
III. (Eh)(Em)(En)(Es)(Ex)(j)(t){[s(x)n(x)][s(x)n(x)][t(h)][m(j)]}
Their validity can be demonstrated within the extended predicate calculus axiomatic system, and the independence, consistency and completeness of the calculus are also proven. Hence, we see how the evolution of logic from Aristotle through Leibniz, Boole and Frege to Hilbert provides a complete and comprehensive solution to the problem of the problematic syllogisms.
References
Aristotle, Prior Analytics, translated by A. J. Jenkinson, in Jonathan Barnes (ed.), The Collected Works of Aristotle, Volume One, pp. 39–113.
Boole, George. The mathematical analysis of logic: being an essay towards a calculus of deductive reasoning. Macmillan, Barclay and Macmillan, Cambridge (1847)
Frege, Gottlob: BEGRIFFSSCHRIFT, EINE DER ARITHMETISCHEN NACHGEBILDETE FORMELSPRACHE DES REINEN DENKENS. Verlag von Louis Nebert, Halle (1879)
Hilbert, David and Wilhelm Ackermann. 1938. Principles of Mathematical Logic. (2nd edition). Translated by Lewis M. Hammond, George G. Leckie, F. Steinhardt; edited with notes by Robert E. Luce. Original publisher Julius Springer, 1938; Chelsea Publishing Company 1958, reprinted by American Mathematical Society 2008. Providence, Rhode Island: American Mathematical Society.
Kneale, William and Martha Kneale. 1962. The Development of Logic. Oxford: Clarendon Press.
Leibniz, Gottfried. 1984. ‘Definitiones Logicae’, in Opera Philosophica Quae Exstant Latina, Gallica, Germanica Omnia, edited by Joannes Eduardus Erdmann. Borussicae: Academiae Scientiarum Regiae, Berolini: Sumtibus G. Eichleri. 1840. Observations and Commentary by G. Eichler.
Lenzen, Wolfgang. 2004. ‘Leibniz’s Logic’ in Dov Gabbay and John Woods (eds), Handbook of the History of Logic, Volume 3 –The Rise of Modern Logic: From Leibniz to Frege. Amsterdam: Elsevier, pp. 1–84.
Hsing-Chien Tsai (Department of Philosophy, National Chung-Cheng University, Taiwan)
Classifying First-order Mereological Structures
ABSTRACT. Mereology is the theory (or a class of theories) based on the relation “being a part of”. The following are some first-order mereological axioms which can be found in the literature. The three most basic axioms are reflexivity, anti-symmetry and transitivity; that is to say, any mereological structure must be a partial ordering. Strong supplementation says that if x is not a part of y, then x has a part z which does not overlap y, where overlapping means sharing at least one part. Unrestricted fusion, which is an axiom schema, says that for any formula which defines (perhaps with some parameters) a nonempty subset of the domain, there is a member which is the least upper bound of that subset. The theory generated by the said axioms is called General Extensional Mereology (GEM). Intuitively, mereological structures can be classified into three mutually disjoint subclasses: atomic, atomless and mixed. The mixed can be further classified into infinitely many mutually disjoint subclasses according to the number of atoms, since any mixed model must either have exactly k atoms, for some k>0, or have infinitely many atoms. For any first-order axiomatizable mereological theory, each of the said subclasses of its models is axiomatizable (note that “atom”, “atomic”, “atomless” and any finite cardinality are first-order expressible, and that infinity can be expressed by a list of infinitely many first-order axioms each of which says that there are at least k members, where k>1). Previously, it has been shown that for GEM, each of the said subclasses of its models is axiomatizable by a complete theory [1]. We can make use of this result to show the decidability of any first-order axiomatizable mereological theory which is at least as strong as GEM. The models of a weaker theory can also be classified in the same way. For instance, consider the following two axioms: (a) finite fusion: any definable (perhaps with some parameters) nonempty finite subset has a least upper bound, and (b) complementation: for any member such that some member is not its part, there is a member disjoint from that member such that their least upper bound is the greatest member. It can be shown that we can get a weaker theory by substituting (a) and (b) for unrestricted fusion. For such a weakened theory, the subclass of mixed models each of whose members has infinitely many atoms will contain two models which are not elementarily equivalent. However, such a subclass can again be divided into two disjoint subclasses, each of which is axiomatizable by a complete theory, and from this we can again show the decidability of the weakened theory. It is interesting to apply the same procedure to an even weaker theory and to see which subclasses contain models that are not elementarily equivalent and whether there is a definable property separating those models.
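For reference, the axioms mentioned above can be rendered as follows (a standard sketch with ≤ for parthood and O for overlap; exact formulations vary in the literature and this need not match the author's):

\[
\begin{aligned}
&\text{Reflexivity:} && \forall x\,(x \leq x)\\
&\text{Anti-symmetry:} && \forall x \forall y\,(x \leq y \wedge y \leq x \rightarrow x = y)\\
&\text{Transitivity:} && \forall x \forall y \forall z\,(x \leq y \wedge y \leq z \rightarrow x \leq z)\\
&\text{Overlap:} && O(x,y) := \exists z\,(z \leq x \wedge z \leq y)\\
&\text{Strong supplementation:} && \forall x \forall y\,(\neg\, x \leq y \rightarrow \exists z\,(z \leq x \wedge \neg O(z,y)))\\
&\text{Unrestricted fusion (schema):} && \exists x\,\varphi(x) \rightarrow \exists z\,\big(\forall x\,(\varphi(x) \rightarrow x \leq z) \wedge \forall w\,(\forall x\,(\varphi(x) \rightarrow x \leq w) \rightarrow z \leq w)\big)
\end{aligned}
\]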
[1] Tsai, Hsing-chien, General Extensional Mereology is Finitely Axiomatizable, Studia Logica, 106(4): 809-826 (2018).
A class of languages for Prawitz's epistemic grounding
ABSTRACT. With his theory of grounds (ToG), and by means of an innovative approach to the notion of (valid) inference, Prawitz aims to explain the epistemic compulsion exerted by proofs. ToG allows for some advancements with respect to Prawitz's earlier proof-theoretic semantics (PTS). Also, it provides a framework for epistemic grounding.
The philosophical tenet is that evidence is to be explained in terms of a notion of ground inspired by BHK semantics. Grounds involve constructive functions, defined by equations reminiscent of procedures for the full evaluation of PTS valid arguments. Inferences are just applications of functions of this kind.
From a formal standpoint, Prawitz suggests instead that these functions are to be described in λ-languages of some kind. Because of the semantic role that these languages are to play, however, they must be conceived of as indefinitely open to the addition of new functions. Deduction uses no fixed set of inferences and, more strongly, Gödel's incompleteness theorem implies that such a set cannot exist.
In our talk, we focus on the formal part of ToG, and sketch a class of languages and properties of such languages for first-order epistemic grounding.
First, we define a language of types, and the notion of atomic base over such types. Types are used to label terms in languages of grounding, whilst the atomic bases fix, through the atomic rules they involve, the “meaning” of the types.
Second, we single out a class of total constructive functions from and to grounds over a base. Also, we define composition of functions for grounds. Finally, we give clauses that determine the notion of ground over a base. These are the inhabitants of our “semantic universe”, and basically involve generalized elimination rules up to order 2.
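To give the flavour of the clauses just mentioned (a hedged BHK-style sketch, writing ⊩ for "is a ground for"; these are illustrative and need not match the talk's exact clauses):

\[
\begin{aligned}
g \Vdash A \wedge B &\;\text{ iff }\; g = (g_1, g_2),\ g_1 \Vdash A \text{ and } g_2 \Vdash B,\\
g \Vdash A \rightarrow B &\;\text{ iff }\; g \text{ is a constructive function of the kind above with } g(h) \Vdash B \text{ whenever } h \Vdash A.
\end{aligned}
\]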
Third, we define a class of languages of grounding over an atomic base, as well as the notion of expansion of these languages.
We then define denotation from languages of grounding to the universe of functions and grounds. We prove a denotation theorem, showing that each term in a language of grounding denotes a function or a ground.
We finally classify languages of grounding by pinpointing the following properties:
- closure under canonical form, i.e., each term is denotationally equal to one, in the same language, which is in canonical form – we show, in two different ways, depending on whether denotation is fixed or variant, that each language of grounding can be expanded to one closed under canonical form;
- primitiveness, depending on whether the language expands a given one by adding new non primitive or primitive operations;
- conservativity, depending on whether the language expands a given one by enlarging or not the class of denoted functions and grounds – we show that re-writability of function symbols implies conservativity, and that primitiveness and conservativity are not equivalent notions;
- universal denotation, i.e. a syntactic structure is admissible in a language of grounding over any base – we provide necessary and sufficient conditions for universal denotation, and its limits with respect to the "semantic universe".
Irina Starikova (Higher School of Economics, Moscow, Russia)
"Thought Experiments" in Mathematics?
ABSTRACT. The philosophy of thought experiments (TEs) focuses mostly on the philosophy of science. Few authors (e.g. Jim Brown, Irina Starikova and Marcus Giaquinto, Jean-Paul Van Bendegem) tackle the question of TEs in mathematics, where, as they point out, it is not at all strange to see our knowledge increase from thought alone.
Jean-Paul Van Bendegem, a former opponent of TEs in mathematics (1998), now accepts and even defends the possibility of (specific types of) mathematical experiments (2003), by showing several cases in which a highly abstract mathematical result is the outcome of research that has a concrete empirical origin. In a recent discussion (PhilMath Intersem 2018, Paris) he draws the conclusion that mathematical experiments are indeed a genuine possibility, but that there is unlikely to be a uniform definition of “thought experiment”, given the heterogeneity of cases in mathematics.
Starikova and Giaquinto, who are also against use of the label ‘thought experiment’, narrow mathematical TEs down to the most interesting creative candidates from mathematical practice, namely those in which (a) experimental thinking goes beyond the application of mathematically prescribed rules, and (b) sensory imagination is used (as a way of drawing on the benefits of past sensory experience) to grasp and mentally transform visualizable representations of abstract mathematical objects, such as knots, graphs and groups. Van Bendegem, taking into account that the thinking in these examples is aimed at a better understanding and a more detailed consideration of abstract constructions rather than at real experimentation, suggests avoiding the use of the term “thought experiment” as applied to mathematical practice and replacing it with “an aid to proof” or “evidential mediators”.
Bearing in mind that TEs are a rare combination of historical, philosophical, cognitive and social practices and a very special way of extracting new knowledge, before sweeping ‘thought experiments’ away from mathematical practice I will discuss:
a. Do TEs under conditions (a) and (b) constitute a class we can properly define?
b. Can we define other classes of mathematical thinking practices which may reasonably be counted as TEs?
c. Should we aim to find a uniform general definition of TEs in mathematics, one which includes various subkinds but excludes anything not accurately described as a TE? Or should we take the notion of TE to be a loose family-resemblance concept, with no proper definition?
Brown, James R., 2007, “Thought experiments in science, philosophy, and mathematics”, Croatian Journal of Philosophy, VII: 3–27.
–––, 2011, “On Mathematical Thought Experiments”, Epistemologia: Italian Journal for Philosophy of Science, XXXIV: 61–88.
Van Bendegem, Jean Paul, 1998, “What, if anything, is an experiment in mathematics?”, in Dionysios Anapolitanos, Aristides Baltas & Stavroula Tsinorema (eds.), Philosophy and the Many Faces of Science, (CPS Publications in the Philosophy of Science). London: Rowman & Littlefield, pp. 172-182.
–––, 2003, “Thought Experiments in Mathematics: Anything But Proof”. Philosophica, 72, (date of publication: 2005), 9-33.
Starikova, Irina & Giaquinto, Marcus, 2018, “Thought Experiments in Mathematics”, in Michael T. Stuart, Yiftach Fehige & James Robert Brown (eds.), The Routledge Companion to Thought Experiments. London: Routledge, pp. 257-278.
Messy metaphysics: the individuation of parts in lichenology
ABSTRACT. Lichens are defined as symbiotic systems that include a fungus (mycobiont) and a photosynthetic partner (photobiont), such as algae or cyanobacteria. The standard view has been that lichens are systems that have one fungus—typically an Ascomycete or Basidiomycete. Although other fungi are known to be parts of the lichen (in a less functional or evolutionarily impactful role), the classical view of lichen composition as mycobiont-photobiont has been widely accepted. This bipartite view suggests that the criterion for lichen stability is the presence of the same mycobiont in the lichen system, and it underpins classificatory practices that rely on the fungus to name lichens.
But this one-lichen, one-fungus metaphysics ignores relevant alternatives. Recent discoveries show that some lichens are composed of three rather than two symbiotic parts (Chagnon et al 2016). The metaphysical concept of the lichen and what are considered to be its parts determines how lichens are individuated and how they are named and tracked over time. Naming the lichen symbiont relies on capturing its parts but also on the means by which we attribute parthood. If we say that something is a part of something else, reference to its parthood is typically thought to be metaphysically grounded (e.g. its parthood is due to a particular relationship of composition, kind membership, or inheritance), or, saying that something is a part might be indicative of our understanding of its role in a process (e.g. which entities are involved in a pathway’s functioning over time). Brett Calcott (2009) suggests that parts may play different epistemic roles depending on how they are used in lineage explanations and for what purpose parthood is attributed to them. A part may be identified by the functional role it plays as a component of a biological process. Or, parts-talk may serve to indicate continuity of a phenomenon over time, despite changes between stages, such that one can identify it as the same at time T1 as at time T2. I employ Calcott’s account of the dual role of parts to shed light on the messy individuation activities, partitioning of the lichen symbiont, and criteria of identity used in lichenology. I use this case to explore what grounds we have for relying on different ontologies, what commitments we rely upon for our classifying practices, and how reliable taxonomic information can be gleaned from these messy individuation practices. I show how ontological messiness may be both (i) problematic in making it difficult to count biological individuals or track lineages, and (ii) useful in capturing the divergent modes of persistence and routes of inheritance in symbionts.
References
Calcott, B. (2009). Lineage Explanations: Explaining How Biological Mechanisms Change. British Journal for the Philosophy of Science 60: 51–78
Chagnon, P-L, U'Ren, J.M., Miadlikowska, J., Lutzoni, F., and Arnold, A.E. (2016). Interaction type influences ecological network structure more than local abiotic conditions: evidence from endophytic and endolichenic fungi at a continental scale. Oecologia 180 (1): 181-191.
17:15
Bettina Dietz (Hong Kong Baptist University, Hong Kong)
Tinkering with nomenclature. Textual engineering, co-authorship, and collaborative publishing in eighteenth-century botany
ABSTRACT. This paper explores how the messiness of eighteenth-century botanical practice that resulted from a constant lack of information generated a culture of collaborative publishing. Given the amount of information required for an accurate plant description let alone a taxonomic attribution, eighteenth-century botanists and their readers were fully aware of the preliminary nature of their publications. They openly acknowledged the necessity of updating them in the near future, and developed collaborative strategies for doing so efficiently. One of these was to make new material available to the botanical community as quickly as possible in a first printed version, while leaving the process of completing and correcting it to be undertaken by stages at a later date. Authors updated their own writings in cycles of iterative publishing, most famously Carl Linnaeus, but this could also be done by others – in the context of this paper by the consecutive editors of the unpublished papers of the German botanist Paul Hermann (1646-1695) who became his co-authors in the process.
Hermann had spent several years in Ceylon as a medical officer of the Dutch V.O.C. before he returned to the Netherlands in 1680 with an abundant collection of plants and notes. When he died almost all of this material, eagerly awaited by the botanical community, was still unpublished. As the information economy of botany, by then a discipline aiming for the global registration and classification of plants, tried to prevent the loss of precious data, two botanists – William Sherard (1650-1728) and Johannes Burman (1706-1779) - consecutively took on the task of ordering, updating, and publishing Hermann’s manuscripts. The main goal of these cycles of iterative publishing was, on the one hand, to add relevant plants and, on the other, to identify, augment, and correct synonyms – different names that various authors had given to the same plant over time. As synonyms often could not be identified unambiguously, they had to be adjusted repeatedly, and additional synonyms, which would, in turn, require revision in the course of time, had to be inserted. The process of posthumously publishing botanical manuscripts provides insights into the successive cycles of accumulating and re-organizing information that had to be gone through. As a result, synonyms were networked names that were co-authored by the botanical community. Co-authorship and a culture of collaborative publishing compensated for the messiness of botanical practice.
ABSTRACT. Although conventional wisdom still holds that it is foolhardy to infer causal relationships from non-experimental data, the last two decades have seen the development of methods that are proven to do what was previously thought impossible. Typical methods proceed by a simplicity-guided schedule of conjectures and refutations. Simpler models posit fewer causal relationships among variables. Algorithms infer the presence of causal relationships by iteratively refuting sharply testable statistical hypotheses of conditional independence. Such methods are routinely proven to converge to the true equivalence class of causal models as data accumulate. Crucially, however, that convergence is merely pointwise — it is not possible to calculate ex ante the amount of data necessary to have reasonable assurance that the output model is approximately correct. Furthermore, there are infinitely many alternative methods that would have similar limiting performance, but make drastically different conclusions on finite samples. Some of these methods reverse the usual preference for simple graphs for arbitrarily many sample sizes.
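As a concrete illustration of this style of inference (a minimal sketch, not any specific published algorithm: a PC-style skeleton search that keeps an edge only when every tested conditional-independence hypothesis is refuted, assuming linear-Gaussian data and using a Fisher-z partial-correlation test):

# Minimal sketch of a simplicity-guided, refutation-driven causal skeleton search
# (PC-style skeleton phase). Edges are kept only if every tested conditional-
# independence hypothesis is refuted at level alpha.
import itertools
import numpy as np
from scipy import stats

def ci_pvalue(data, i, j, cond):
    """p-value for 'variable i is independent of j given cond'
    via a Fisher z-test on the partial correlation."""
    idx = [i, j] + list(cond)
    prec = np.linalg.pinv(np.cov(data[:, idx], rowvar=False))
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(data.shape[0] - len(cond) - 3)
    return 2.0 * (1.0 - stats.norm.cdf(abs(z)))

def skeleton(data, alpha=0.05, max_cond=2):
    """Return the set of undirected edges surviving the CI tests."""
    p = data.shape[1]
    edges = {frozenset(e) for e in itertools.combinations(range(p), 2)}
    for i, j in itertools.combinations(range(p), 2):
        others = [k for k in range(p) if k not in (i, j)]
        removed = False
        for size in range(max_cond + 1):
            for cond in itertools.combinations(others, size):
                if ci_pvalue(data, i, j, cond) > alpha:
                    edges.discard(frozenset((i, j)))  # independence not refuted: drop edge
                    removed = True
                    break
            if removed:
                break
    return edges

On data generated from a sparse linear structural model, the surviving edges approach the true adjacency structure as the sample grows; the abstract's point is that nothing in this pointwise convergence guarantee bounds the chance of error at any particular finite sample size.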
What justifies the seemingly reasonable procedures that are prominent in the literature? Some have responded to the dilemma by searching for stronger a priori assumptions that would guarantee that the search methods converge uniformly [Zhang and Spirtes, 2002]. But these assumptions are implausible, and amount to insisting that causal discovery is easier than it really is [Uhler et al., 2013]. What is needed is a success criterion for justifying causal discovery that is stronger than mere pointwise convergence, but does not insist on short-run bounds on the chance of error.
Say that a method is *progressive* if, no matter which theory is true, the objective chance that the method outputs the true theory is strictly increasing with sample size. In other words: the more data the scientist collects, the more likely their method is to output the true theory. Although progressiveness is not always feasible, it should be our regulative ideal. Say that a method is α-progressive if, no matter which theory is true, the chance that it outputs the true theory never decreases by more than α as the sample size grows. This property ensures that collecting more data cannot set your method back too badly. We prove that, for many problems, including the problem of causal search, there exists an α-progressive method for every α > 0. Furthermore, every α-progressive method must proceed by systematically preferring simpler theories to complex ones. That recovers and justifies the usual bias towards sparse graphs in causal search.
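In symbols (our formalization of the prose definitions above, on one natural reading), writing Pr_T for the objective chance function when theory T is true and M_n for the method's output on a sample of size n:

\[
\text{progressive:}\;\; \forall T\,\forall n\;\; \Pr\nolimits_T(M_{n+1} = T) > \Pr\nolimits_T(M_n = T);
\qquad
\alpha\text{-progressive:}\;\; \forall T\,\forall n' > n\;\; \Pr\nolimits_T(M_{n'} = T) \geq \Pr\nolimits_T(M_n = T) - \alpha.
\]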
ABSTRACT. Standard methods of causal discovery take as input a statistical data set of measurements of well-defined causal variables. The goal is then to determine the causal relations among these variables. But how are these causal variables identified or constructed in the first place? In "Causal Inference of Ambiguous Manipulations ", Spirtes and Scheines (2004) show how mis-specifications of the causal variables can lead to incorrect conclusions about the causal relations. These results suggest that there is a privileged level of description for a causal relation, or at least, that not all levels of description are correct. It is tempting to conclude that the correct level of causal description is then at the finest possible level of description. But apart from the challenge that extant accounts of causality do not fit well with our finest level of description of physical processes, such a conclusion would also imply that the discussion of causation at the macro-level is at best elliptic. In this presentation we propose an account of a "correct" causal description that retains a meaningful notion of causality at the macro-level, that avoids the types of mis-specifications illustrated by Spirtes and Scheines, and that (in general) still permits a choice of the level of description. In this regard it offers a middle route between realist accounts of causation that maintain that there is an objective fact to the matter of what is "doing the causal work", and pragmatic accounts of causation that reject the notion of some privileged level of causal description. As the title suggests, the proposal is closely related to notions of the "proportionality" of causes in the literature on the metaphysics of causation, and considerations of the "specificity" of variables as used in information theory.
Finding causation in time: background assumptions for dynamical systems
ABSTRACT. Much of scientific practice is concerned with the identification, prediction, and control of dynamical systems -- systems whose states change through time. Though typically modeled with differential equations, most are presumed to involve causal relations, with the changing states of some variables driving change in others over time, often reciprocally. A variety of new tools have been introduced outside of the causal Bayes net framework to learn aspects of this structure from noisy, real-world time series data.
One collection of such tools is built around the notion of a dynamical symmetry [1]. This is a transformation of the state of a system that commutes with its time evolution. The set of dynamical symmetries of a system, along with the algebraic structure of those symmetries under composition, picks out an equivalence class of causal structures in the interventionist sense. This means that a test for sameness of dynamical kind can be used as a tool for validating causal models or determining whether observed systems fail to share a common causal structure. Algorithms have been published for implementing such tests, but they apply only to deterministic systems [2,3].
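As a toy illustration of the commutation condition (our sketch for a deterministic discrete-time system, not the published algorithms of [2,3]):

# Check whether a candidate transformation sigma is a dynamical symmetry of a
# discrete-time evolution Phi, i.e. whether sigma(Phi_t(x)) == Phi_t(sigma(x)).
def evolve(x, steps, r=2.0):
    """Hypothetical time evolution: multiply the state by r at each step."""
    for _ in range(steps):
        x = r * x
    return x

def sigma(x, c=3.0):
    """Candidate symmetry: rescale the state by a constant factor c."""
    return c * x

def commutes(x0, steps=10, tol=1e-9):
    """True if sigma and the time evolution commute on this initial condition."""
    return abs(sigma(evolve(x0, steps)) - evolve(sigma(x0), steps)) < tol

print(commutes(0.7))  # rescaling commutes with linear growth, so this prints True

For the stochastic generalization discussed in the abstract, the pointwise equality would have to be replaced by a comparison, to some degree of approximation, of the distributions on either side of the commutation condition.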
Here we present a generalization of the theory of dynamical kinds to the case of stochastic systems where the target question is not whether two systems differ in their causal structure, but to what degree. This requires addressing a variety of interrelated problems. First, though distinct from the causal graph approach, any inference from data concerning dynamical symmetries must deploy similar assumptions about the relation between the statistics of sampled variables and causal relations on the one hand, and about the causal relations themselves on the other. Chief among these is the assumption of Stochastic Dynamical Sufficiency (SDS). We clarify this condition, and draw out the ways in which it differs from the Causal Sufficiency condition in the Bayes net context.
Second, there is the question of how this sufficiency assumption interacts with the choice of variables. In typical real-world settings one is forced to use variables to describe a system that may aggregate over another set that obeys the SDS. Given that some set of variables meets the SDS condition, under what conditions can lossy transformations of these variables do so as well? How can the presence or absence of these conditions be verified? If a set of variables does not satisfy SDS, is it possible to determine whether a finer-grained description exists that does? How do the answers to these questions depend upon the kinds of interventions possible? For all of these questions, we present some preliminary answers, illustrated with ongoing applications to the study of collective behavior in animal swarms.
1. Jantzen, B. C. Projection, symmetry, and natural kinds. Synthese 192, 3617–3646 (2014).
2. Jantzen, B. C. Dynamical Kinds and their Discovery. Proceedings of the UAI 2016 Workshop on Causation: Foundation to Application (2017).
3. Roy, S. & Jantzen, B. Detecting causality using symmetry transformations. Chaos 28, 075305 (2018).
ABSTRACT. In "The Placebo Concept in Medicine and Psychiatry" Grünbaum offers a general account of the placebo effect as therapuetic improvement brought about by something other than the theory on which the therapy is based. I will review some criticisms of Grünbaum's account, which concentrate on its failure to consider expectation effects, and wonder how far the account can be adapted to take psychological effects into account.
ABSTRACT. Karl Kraus famously quipped that psychoanalysis is the disease for which it purports to be the cure. Adolf Grünbaum developed a more trenchant and detailed critique of psychoanalysis which to this day has not been adequately answered. This presentation will briefly summarize some key points of that critique and provide some anecdotes about how psychoanalysis cured Adolf and how Adolf cured others of psychoanalysis.
ABSTRACT. In the experimental sciences, researchers distinguish between real experiments and thought experiments (TEs), although it is questionable whether any sharp border can be drawn between them in modern physics. The distinction between experiment and TE is also relevant, mutatis mutandis, for certain situations in mathematics. In this paper, I begin by unpacking this thesis in the context of a pragmatic approach to mathematics. I then argue that, from this point of view, the distinction between experience, experiments, TEs and exercises of thinking in mathematics makes it possible for TEs to exist in mathematics as, for example, explicative proofs. TEs can therefore advance mathematical understanding without requiring any new mathematical experiments. The proposed solution avoids the classical alternative between Brown’s Platonistic and Norton’s empiricist solutions to the well-known dilemma of the interpretation of TEs. Finally, I support my thesis with some mathematical examples.
Paul Hasselkuß (Heinrich-Heine Universität Düsseldorf, Germany)
Computers and the King’s New Clothes. Remarks on Two Objections against Computer Assisted Proofs in Mathematics
ABSTRACT. Computer-assisted proofs have produced outstanding results in mathematics. Surprisingly, these proofs are not well received by the mathematical community: proofs ‘done by hand’ are preferred over proofs where some deductive steps are executed by a computer. In my talk, I will analyse and question this attitude. In the first part, building upon examples from the controversy over Appel and Haken’s (1989) proof of the four-color conjecture, I will argue that two frequently made objections against the acceptance of computer-assisted proofs are (1) that computer-assisted proofs are messy and error-prone, and (2) that they do not give us insight into why a theorem is true. While both objections may be correct descriptions of certain proofs, proponents of these objections have to defend stronger versions by claiming that, on the basis of objection (1) or (2), computer-assisted proofs should not be accepted as valid in mathematics. In the second part I will argue that the first objection may hold with regard to Appel and Haken’s initial proof, but not with regard to later improvements: Gonthier (2008) puts forward a fully formalized proof which is entirely executed by a standardized open-source theorem prover; this proof is neither messy nor error-prone. Dealing with the second objection, I will argue that it presupposes that a proof’s ‘insight’ can be judged in an (at least) intersubjectively stable way. This assumption can be questioned on the basis of recent empirical studies of mathematicians’ proof appraisals by Inglis and Aberdein (2014, 2016), who were able to show that proof appraisals involve four different dimensions: aesthetics, intricacy, precision, and utility. If a quality Q can be represented as a linear combination of these four dimensions, mathematicians can be observed to disagree in their judgement of the same proof with regard to Q. I will then argue, based on data from the same survey, that the concept of ‘insight’ presupposed by the second objection can be represented as a linear combination of these factors, namely the aesthetic and the utility dimensions, and thus that mathematicians’ judgements of a proof’s insight are subjective. This constitutes a major problem for the second objection: its proponents have to explain these subjective judgements and, in addition, have to provide a new concept of insight that is close to mathematical practice but does not give rise to such subjective judgements.
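Schematically (our rendering of the claim just described, with hypothetical weights w), the suggestion is that an appraisal quality Q of a proof p is modelled as a weighted combination of the four dimensions,

\[
Q(p) \approx w_a\,\mathrm{Aesthetics}(p) + w_i\,\mathrm{Intricacy}(p) + w_p\,\mathrm{Precision}(p) + w_u\,\mathrm{Utility}(p),
\]

with ‘insight’ loading mainly on the aesthetic and utility dimensions.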
Selected Sources
Appel, Kenneth I., and Wolfgang. Haken. 1989. Every Planar Map is Four Colorable. Providence, R.I.: American Mathematical Society.
Inglis, Matthew, and Andrew Aberdein. 2014. “Beauty is Not Simplicity: An Analysis of Mathematicians‘ Proof Appraisals.” Philosophia Mathematica 23 (1): 87–109.
Inglis, Matthew, and Andrew Aberdein. 2016. “Diversity in Proof Appraisal.” In Trends in the History of Science, edited by Brendan Larvor, 163–79. Cham: Springer.
Gonthier, Georges. 2008. “Formal Proof–the Four-Color Theorem.” Notices of the AMS 55 (11): 1382–93.
17:15
Andrei Rodin (Higher School of Economics, Moscow, and Russian Academy of Sciences, Institute of Philosophy, Russia)
Formal Proof-Verification and Mathematical Intuition: the Case of Univalent Foundations
ABSTRACT. The idea of formal proof verification in mathematics and elsewhere, which dates back to Leibniz, has made important advances since the beginning of the last century, when David Hilbert launched the research program today named after him. At the beginning of the present century an original contribution to this area of research was made by Vladimir Voevodsky (1966, Moscow – 2017, Princeton), who proposed a novel foundation of mathematics that he called Univalent Foundations (UF). UF involves an interpretation of Constructive Type theory with dependent types, due to Per Martin-Löf, by means of Homotopy theory (called Homotopy Type theory, or HoTT for short) and is designed to support an automated, viz. computer-assisted, verification of non-trivial mathematical proofs [1].
The present paper analyses the epistemic role of mathematical intuition in UF. The received view on the role of intuition in formalized mathematical theories stems from Hilbert. Hilbert stresses the importance of the human capacity to distinguish between different symbol types, to identify different tokens of the same symbol type, and to manipulate symbols in various ways. He qualifies this cognitive capacity as a form of intuition and claims that it plays a major epistemic role in formal logic and formalized mathematical theories. All other forms of mathematical intuition, in Hilbert’s mature view, have significant heuristic and pedagogical value but play no role in the formal representation and justification of ready-made mathematical theories (see [2], section 3.4 and further references therein).
Unlike Hilbert, Voevodsky did not write philosophical prose, but he expressed his vision of mathematics and explained the motivations of his projects in many public talks, which are now available in recorded form via his memorial web page maintained by Daniel Grayson at the Institute for Advanced Study in Princeton [3]. In a number of these talks Voevodsky stresses the importance of preserving, in the framework of formalized computer-assisted mathematics, an “intimate connection between mathematics and the world of human intuition”. Using a simple example of a classical theorem in algebraic topology formalized in the UF setting [4], I show how a spatial intuition related to Homotopy theory serves here as an effective interface between the human mind and the computer, and argue that it plays in UF not only a heuristic but also a justificatory role.
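As a minimal, UF-agnostic illustration of what computer-assisted verification amounts to (our example in Lean 4 syntax, not the Agda formalization cited below [4]), the proof assistant checks that the supplied proof term really has the stated type:

-- A machine-checked proof: the kernel verifies that the term
-- Nat.add_comm a b has the type a + b = b + a.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b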
Bibliography:
1. D.R. Grayson, An Introduction to Univalent Foundations for Mathematicians, arXiv:1711.01477
2. A. Rodin, Axiomatic Method and Category Theory (Synthese Library vol. 364), Springer, 2014
3. http://www.math.ias.edu/Voevodsky/ (Voevodsky’s memorial webpage)
4. M. Shulman and D. Licata, Calculating the fundamental group of the circle in homotopy type theory. arXiv:1301.3443 (The accompanying Agda code is available from Dan Licata's web site at http://dlicata.web.wesleyan.edu)
Updating scientific adviser models from policy-maker perspectives: a limit of debate of 'science and value' and norms of public policy
ABSTRACT. The importance of the use of scientific knowledge in policy making has been widely recognised. In philosophy of science, the argument from inductive risk, which involves ethical value judgments by scientists, has become increasingly sophisticated and helps us to think about how scientific advisers make normative judgments in policy making. Indeed, values play a role in all scientific processes, and each scientific organisation, group, and individual has to consider ethical consequences at different levels (Steel 2016). However, most philosophical approaches to improving scientific adviser models in policy making have overlooked the significance of advocacy-embedded political situations. Although the production and use of scientific advice should be understood as a problem of boundary work, that is, a continuous process of demarcation and coordination between scientists and policy makers at the science/policy interface (Hoppe 2005), those studies have tended to reduce the problem to the science side only, that is, to the issues of the value-ladenness of science. The purpose of this paper is to analyse a problem which cannot be handled within the debate on ‘science and value’, in order to improve the scientific adviser model from the viewpoint of the policy side. In complex and high-stakes environmental problems, it is possible to accumulate legitimate facts based on different kinds of values. As a result, policy makers cannot form a consistent view out of mutually incommensurable but legitimate scientific expertise. This problem is known as “excess of objectivity” (Sarewitz 2004). How should policy makers make a judgment on adopting or rejecting conflicting scientific advice? By examining public policy norms, I identify appropriate strategies to resolve value conflicts in the face of the excess of objectivity. Considering the policy side makes it possible to understand the scientific adviser model within the science-policy interaction. Recent developments of the scientific adviser model – the pragmatic model (e.g., Brown and Havstad 2017) – also emphasise this interaction. However, there is potential for improvement in those studies. This paper argues that significant components of the pragmatic model are unlikely to function, or at least function insufficiently, in practice, and presents a roadmap for improvement.
References
Brown, M.J. and Havstad, J.C. 2017. The Disconnect Problem, Scientific Authority, and Climate
Policy. Perspectives on Science 25(1), 67-94.
Hoppe, R. 2005. Rethinking the science-policy nexus: from knowledge utilization and science technology studies to types of boundary arrangements. Poiesis & Praxis 3(3), 199-215.
Sarewitz, D. 2004. How science makes environmental controversies worse, Environmental Science & Policy 7(5), 385-403.
Steel, D. 2016. Climate Change and Second-Order Uncertainty: Defending a Generalized, Normative, and Structural Argument from Inductive Risk. Perspectives on Science 24(6), 696-721.
In defense of a thought-stopper: Relativizing the fact/value dichotomy
ABSTRACT. Acknowledging the entanglement of factual and evaluative statements in science and society, we defend a weak, semantic version of the fact/value dichotomy. Just like Carnap’s analytic/synthetic dichotomy, the proposed fact/value dichotomy is relativized to a framework.
Putnam, in his book 'The collapse of the fact/value dichotomy' (2002), credits Hume for having introduced the dichotomy between facts and values into western philosophy. Hume’s approach to drawing the line between facts and values is a principled one, in the sense that he presupposes empiricism in order to obtain the one “correct” dichotomy. Putnam and others have criticized the dichotomy not only for its untenable presuppositions, but also for its allegedly terrible societal consequences due to its functioning as a thought-stopper.
Whereas we concur with the characterization of the dichotomy as an, albeit only potential, thought-stopper, we object to incongruities and tendentious representations in Putnam’s account of the history of the dichotomy. Furthermore, we maintain that, contrary to their bad reputation, some thought-stoppers should indeed be embraced. Instances of useful, preliminary thought-stoppers include the proof that there is no largest prime number, the well-confirmed claim that there is no perpetuum mobile, and arguably the adoption of frameworks which distinguish factual from evaluative statements.
Even so, the proposed dichotomy between factual and evaluative statements is not a principled but a relativized one. Accordingly, which framework and which fact/value dichotomy to choose becomes an external question to be decided on pragmatic grounds.
ABSTRACT. In my presentation I will bring into focus the problem of recognizing the values that influence science and propose that in instances of value conflict we are more acutely aware of the values influencing us. I will also present a possible guideline for resolving value conflicts by means of a hierarchy of values. Although it is generally agreed that values have a role to play in science, the exact nature of this role, and which values should be allowed in science, is debatable. The value-free science ideal proposes that only epistemic values should be allowed to influence the core epistemic practices of science. Heather Douglas (2000) has made a case for the necessity of non-epistemic values in science. Douglas' proposal stems from the argument from inductive risk. Inductive risk is the chance that a scientist might be mistaken in accepting or rejecting a hypothesis (Douglas, 2000). Douglas (2000) proposes that non-epistemic values are necessary for assessing the possible harmful consequences of such a mistake. Although Douglas (2009) distinguishes proper roles for values in science, how to recognize these values remains unclear. She also seems to hold all values in equal standing, which presents a problem for resolving value conflicts. Others, like Daniel Steel and Kyle Powys Whyte (2012), Ingo Brigandt (2015), and Inmaculada de Melo-Martín and Kristen Intemann (2016), have developed Douglas' position further. Steel and Whyte (2012) proposed a general values-in-science standard. It states that non-epistemic values should not conflict with epistemic values in the design, interpretation, or dissemination of scientific research unless epistemic values fail to indicate a better option. Brigandt (2015) suggested that, besides epistemic considerations, social and environmental values may determine a scientific theory's conditions of adequacy for an unbiased and complete theory. Melo-Martín and Intemann (2016) maintained that social, ethical, and political value judgements are legitimate in scientific decisions insofar as they promote democratically endorsed epistemological and social aims of the research. Nonetheless, the question of recognizing values seems to remain unanswered. The premise of my presentation is that the values influencing science come forth more vividly in the context of different value conflicts. Therefore, it is important to analyze value conflicts in science. A related problem is resolving value conflicts in the context of equally ranked values. As a solution, I propose a flexible hierarchy of values to help us resolve value conflicts in practical situations.
References
Brigandt, Ingo (2015). „Social Values Influence the Adequacy Conditions of Scientific Theories: Beyond Inductive Risk“. Canadian Journal of Philosophy 45(3), 326–356.
de Melo-Martín, Inmaculada and Kristen Intemann (2016). „The Risk of Using Inductive Risk to Challenge the Value-Free Ideal“. Philosophy of Science 83(4), 500–520.
Douglas, Heather E. (2000). „Inductive Risk and Values in Science“. Philosophy of Science 67(4), 559–579.
Douglas, Heather E. (2009). „The Structure of Values in Science“. In: Douglas, Heather E. Science, Policy, and the Value-Free Ideal, 87–114. Pittsburgh: University of Pittsburgh Press.
Steel, Daniel and Kyle Powys Whyte (2012). „Environmental Justice, Values, and Scientific Expertise“. Kennedy Institute of Ethics Journal 22(2), 163–182.
Paul Teller (University of California, Davis, United States)
Processes and Mechanisms
ABSTRACT. I became puzzled by the dearth of any mention of processes in the New Mechanist literature. This paper examines what I have found.
The new mechanists tend to characterize mechanisms in analogy to the idea of a machine. But the characterizations they offer of the “machine” idea omit many things to which the term “mechanism” gets applied, such as mechanisms of crystal formation, catalytic action, soil erosion, gene transcription, chemical mechanisms…. One sees mention of such examples in the literature, but they are not mechanisms in the stable “machine” sense and so are not covered by the official characterizations. Such “variably embodied mechanisms” are of explanatory interest just as much as the stably embodied, machine-like ones.
Dictionary definitions of “mechanism” include both the machine idea and processes more generally that produce what I will call a “phenomenon of interest”. A mechanism that produces soil erosion can operate in many places but generally not within the same physical housing, as in the case of a clock producing an indication of the time.
To get a more general account I start by characterizing a process as a collection of events that are systematically linked. We tend to apply the term “mechanism” when we are considering a process type that issues in some phenomenon of interest. As a process/mechanism type, such can operate repeatedly, but not necessarily within some one physical housing. The narrower notion studied by the New Mechanists uses “mechanism” as a term for a stable physical housing that can repeatedly exhibit the process producing the phenomenon of interest.
Finally, I will start work on a typology of processes/mechanisms. Here is some illustration:
Begin with linear processes, in the A-causes-B-causes-C… pattern, with dominoes as the stereotypical example. These contrast with non-linear processes, which come in many kinds: causal loops, homeostatic processes/mechanisms… The simple pendulum clock, which we refer to as a mechanism, provides a simple illustration of how the linear and the non-linear can combine in a larger whole. A clock can be thought of as linearly organized: all but the first node of the linear chain are a simple series of 60:1 gears. But the first node in this chain is itself a non-linear complex involving interaction between the pendulum-escapement component and the beginning of the drive train. As in this example, linear processes can themselves be complex, with individual nodes comprising complex non-linear interactions. Broadly, any of these might repeatedly occur within one stable physical housing, or their physical embodiments can be one-offs. Either way, if the process type in question produces a phenomenon of interest, the term “mechanism” applies.
Much of the paper will work to clarify and more carefully apply terms that this abstract has so quickly introduced.
There are similarities and differences between my account and some parts of Stuart Glennan’s recent book. A full write-up will also discuss related material from Nicholson, Kuorikoski, Ramsey and doubtless many others.
ABSTRACT. Our general aim is to extend the applicability of the metametaphysics project (Tahko 2015). A new era within the metaphysics of science debate started with D. Dennett’s concept of real patterns (1991). The possibility of using real patterns to develop a proper theory of existence was shown by D. Ross (2000). Following this understanding, J. Ladyman developed his Information-Theoretic Structural Realism (2007). We will show how current notions like ontological dependence, grounding, or modal epistemology may help us to extend Ladyman’s concept and might even solve some of its particular problems.
Discussing his new interpretation of Ross’s definition of existence, Ladyman noted: “in the new formulation of clause (ii) [pattern is real iff it encodes information about at least one structure of events] we want to go on saying something like what that clause now says about the information ‘in’ real patterns. But ‘encode’ is surely misleading;... we must no longer say that real patterns carry information about structures. So what does a real pattern carry information about? The answer can only be: about other real patterns. … This just means, to put matters as simply and crudely as possible: it’s real patterns all the way down” (2007, 227–8). And after two pages of hard-fought attempts at explanation he concludes: “We have now explained the basis for replacing this [clause (ii)] by the following: ‘It carries information about at least one pattern P’” (230). While we agree with this conclusion, we think that something more needs to be (or at least might be) said here about the metaphysical interpretation of the existence of the “second pattern”. Moreover, what does it mean, from the metaphysical perspective, to be a second pattern of the second pattern? We find nothing wrong with Ladyman’s understanding of the role of the notion of projectability in a scale-relative ontology of non-individual existents. But following the lines of the metametaphysics project, we may note at least two directions for further discussion. First, it is tempting to assess the notion of projection from the truth-making perspective. On B. Smith’s understanding, a truth-maker for a given truth should be part of what that truth is about: “roughly, it should fall within the mereological fusion of all the objects to which reference is made in the judgment” (1999, 6). He identifies the projection of a judgement with the fusion of all the things to which it refers (singularly or generically), and then defines what it is to be a truth-maker in terms of this notion: a truth-maker for a judgement p is something that necessitates p and falls within its projection. What would such a truth-making projection mean for the existence of the “second pattern” within the Ross-Ladyman understanding of “projectability under the physically possible perspective”? The projection of a real (first) pattern needs to be made far more relevant to what the “physically possible perspective” is intuitively about. Second, we could take a further step and define truth-making in terms of grounding. As J. Schaffer suggests: “if a proposition p is true at a world w, then p's truth at w is grounded in the fundamental features of w. A truth-maker x for a proposition p at w is something such that (i) x is fundamental at w and (ii) the truth of p at w is grounded in x” (2008, 17). Is it possible to extend Schaffer’s proposal by combining Ladyman’s and Tahko’s understandings of fundamentality? By definition, grounding is an asymmetric, irreflexive and transitive relation; it thus induces a partial ordering, whose minimal elements are the “fundamental entities”. We think that a proper interpretation of the truth-making account of the existence of the “second pattern” is possible. Moreover, if we succeed, we will have a neat metametaphysical explanation of the Dennett-inspired search for “real patterns all the way down”.
References
Dennett, D. Real Patterns. Journal of Philosophy, 1991, vol. 88, p. 27–51.
Ladyman, J., Ross, D., Spurrett, D., Collier, J. Every Thing Must Go: Metaphysics Naturalized. Oxford University Press, 2007.
Ross, D. Rainforest Realism: A Dennettian Theory of Existence. In: A. Brook, D. Ross, D. Thompson (eds.) Dennett’s Philosophy: A Comprehensive Assessment. Cambridge, MA: MIT Press, 2000.
Tahko, T. An Introduction to Metametaphysics. Cambridge: Cambridge University Press, 2015.
Schaffer, J. Truth-maker Commitments. Philosophical Studies, 2008, vol. 141, p. 7–19.
Smith, B. Truth-maker Realism. Australasian Journal of Philosophy, 1999, vol. 77, p. 274–91.
Underdetermination of theories, theories of gravity, and the gravity of underdetermination
ABSTRACT. Arguments for underdetermination seek to establish that evidence fails to determine a scientific theory uniquely over its rivals. Given the various ways in which underdetermination has been argued for, it is best to see those arguments as constituting a class whose members purport to list conditions under which the evidential worth of rival theories becomes inoperative. One common requirement for underdetermination has been the acceptance of holism in theory-testing, as it allows theoretical terms to form part of scientific terminology without ever being directly confirmed by empirical tests (W. V. O. Quine 1975, 1992). Another requirement is for the rival theories to be empirically equivalent, although we argue that a suitably defined notion of ‘empirical success’ can replace the more stringent requirement of empirical equivalence. Further requirements come from acknowledging the domain-specific reliability of inductive strategies. While the unsuitability of eliminative inference in fundamental theorising has been noted (Kyle Stanford 2006), inductive generalisation is similarly unreliable if it involves extending a theory to worlds radically different from those where the theory has proven to be empirically successful. We propose an account of underdetermination that holds between empirically successful theories which are central to those scientific domains where otherwise reliable inductive strategies can be expected to fail. We identify theorising about gravity as particularly vulnerable to our version of underdetermination and present a brief overview of theories of gravity past and present – from Aristotelian cosmology to General Relativity and its possible modification by MOND theories – to support our assertion.
Underdetermination has erroneously been seen as a consequence of the supposed intractability of the Duhem-Quine problem (the question of how blame is to be partitioned between conjuncts in the event of falsification), which in turn is supposed to follow from holism. However, mere acceptance of holism need not entail the irresolvability of the Duhem-Quine problem, as Bayesian reconstructions of various historical episodes show. The Bayesian solution to the Duhem-Quine problem, on the other hand, should not be seen as the resolution of underdetermination. To establish this, we select two debates over the nature of gravity – the first surrounding the precessing perihelion of Mercury and its explanation by General Relativity, and the second over the correct explanation for the mass discrepancy suggested by galaxy rotation curves and cluster dispersion data. We offer a Bayesian reconstruction of these two debates to show that although Bayesian confirmation theory can model how falsification affects individual conjuncts even in the absence of any differentiating role played by the prior values of conjuncts, it offers little to assuage fears of underdetermination. For although the Bayesian machinery successfully models how evidence should affect belief in a theory, it can offer little help if the evidential worth of theories were to become inoperative, as it happens under conditions that lead to underdetermination.
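Schematically (our rendering of the standard Bayesian treatment, not the specific reconstructions offered in the talk), when a hypothesis H is tested only together with auxiliaries A and the evidence e refutes the conjunction, Bayes' theorem still distributes the blame unequally:

\[
P(H \mid e) = \frac{P(e \mid H)\,P(H)}{P(e)}, \qquad
P(e \mid H) = P(e \mid H \wedge A)\,P(A \mid H) + P(e \mid H \wedge \neg A)\,P(\neg A \mid H),
\]

so that H and A can be differentially disconfirmed even though only their conjunction has been put to the test.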
Underdetermination is generally understood as engendering scepticism about the possibility of knowing certain features of the world. We stand with scientific realists in holding that most such arguments, including ours, do not threaten sophisticated accounts of selective realism. Apart from scepticism about knowability, underdetermination also motivates scepticism about decidability. Weaker versions of underdetermination, like the one we present, suffice to establish scepticism about non-uniqueness in theory choice. It is in relation to the problem of theory choice that we see the significance of underdetermination. Specifically, we argue that because underdetermination renders the evidential worth of theories inoperative, it leads to particularly intransigent disagreement over theoretical descriptions of the world. We also show that underdetermination-induced disagreements are epistemically desirable, as they promote inter-disciplinarity, consilience, and theoretical unification. We support this conclusion by examining strategies that have been employed in the current debate over dark matter and modified gravity as explanations of the ‘missing mass problem’.
References:
Quine, W. V. (1975). On empirically equivalent systems of the world. Erkenntnis, 9(3), 313-328.
Quine, W. V. (1992). Pursuit of truth. Cambridge: Harvard University Press.
Stanford, K. (2006). Exceeding Our Grasp: Science, history, and the Problem of Unconceived Alternatives. New York: Oxford University Press.
Some problems in the prediction vs accommodation debate
ABSTRACT. A perennial problem in the philosophy of science concerns the relative epistemic value of predicted evidence versus accommodated evidence. The question is: should we have more confidence in a scientific theory if it successfully predicted some evidence than if it was merely designed to accommodate that evidence? Predictivists think so, but they have had a hard time producing a convincing argument for why this should be so. Numerous accounts of the issue have been proposed, both for and against predictivism. As it stands, most current defenses of predictivism fall under the banner of “weak predictivism”. Weak predictivist theories argue that prediction doesn’t have any inherent epistemic advantage over accommodation, but that sometimes it does have more value indirectly, because it correlates more strongly with some other virtue that is epistemically relevant. In cases where this other virtue isn’t directly evaluable, the prediction-accommodation asymmetry emerges.
Weak predictivist theories include both agent-centered and purely content-centered approaches. The agent-centered approaches, such as those presented in White (2003) and Barnes (2008), argue that prediction is more valuable than accommodation because it tells us something about the reliability of the agent making the predictions. These inferences about the agent then redound to the support of the theory he or she is advancing. Purely content-centered theories, on the other hand, such as that advanced by Worrall (2014), argue that the prediction-accommodation issue is purely a matter of the logical relation between the theory and the evidence. A theory is supported by the evidence if the evidence is not used in the construction of the theory; the time at which the theory is constructed and the evidence is discovered is wholly irrelevant.
I argue that both the agent-centered and the purely content-centered theories face problems showing that neither alone can account for the epistemic role that prediction plays in science. The agent-centered theories make the epistemic support that theories gain contingent on conditions external to the contents of the theory and its evidential consequences. The problem is that a theory can be well justified by its evidential consequences even if the external conditions that the agent-centered theories require are not met. A purely content-centered theory, on the other hand, is unable to account for the fact that external conditions sometimes do come to bear on the epistemic justification of scientific theories, and do so in a way that produces a genuine epistemic asymmetry between prediction and accommodation. An adequate theory of prediction vs. accommodation must avoid the shortcomings of both the agent-centered and the purely content-centered approaches.
Barnes, E.C. (2008). The Paradox of Predictivism. Cambridge: Cambridge University Press.
White, R. (2003). The Epistemic Advantage of Prediction Over Accommodation. Mind 112 (448), 653-682.
Worrall, J. (2014). Prediction and accommodation revisited. Studies in History and Philosophy of Science Part A 45 (1), 54-61.
Márton Gömöri (Institute of Philosophy, Hungarian Academy of Sciences, Hungary)
Why do outcomes in a long series of rolls of a fair die approximately follow the uniform distribution?
ABSTRACT. The talk will outline a new account of “probability” as employed in statistical mechanics and in gambling games. The main ideas will be illustrated with the example of rolling a die.
Suppose you roll a fair die a large number of times. It is a robust physical fact that each outcome will occur with approximately the same relative frequency, 1/6. The question I'd like to pose is: what is the physical explanation of this physical fact?
Consider the mechanical description of rolling a die in terms of phase space. Each of the six outcomes carves out a region of phase space containing those initial conditions of the roll that lead to that particular outcome. If the die is fair, each of these six phase space regions has the same phase volume (Lebesgue measure) (cf. Kapitaniak et al. 2012). Each roll of the die picks out a point in phase space, namely the one corresponding to the initial condition realized by that roll. Suppose you roll the die a large number of times: this picks a large sample from phase space. Now, in terms of phase space, the fact that the relative frequencies of the outcomes are all around 1/6 means that the initial conditions realized in the series of rolls are approximately uniformly distributed among the six phase space regions in question. So if we are to understand why the outcomes occur with approximately equal frequencies, what we have to ask is: what is the physical explanation of the fact that the initial conditions in the series of rolls are approximately uniformly distributed among the six phase space regions?
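A minimal simulation sketch (our illustration, not the author's: the six phase-space regions are crudely replaced by a partition of the unit interval into six cells of equal Lebesgue measure, and the initial conditions are stipulated to be sampled uniformly) makes the phase-space reformulation of the explanandum concrete:

import random
from collections import Counter

def outcome(initial_condition: float) -> int:
    # Toy stand-in for the real dynamics: map an "initial condition" in [0, 1)
    # to one of six outcomes, each region having Lebesgue measure 1/6.
    return int(initial_condition * 6) + 1

rolls = [outcome(random.random()) for _ in range(100_000)]
freqs = Counter(rolls)
for face in range(1, 7):
    print(face, round(freqs[face] / len(rolls), 4))   # each close to 0.1667

Of course, the sketch simply stipulates the uniform sampling of initial conditions; the question posed above is precisely why actual rolls behave as if their initial conditions were sampled in this way.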
In the first part of the talk I shall review possible answers to this question based on discussion in the literature (e.g. Strevens 1998, 2011; Goldstein 2001; Frigg and Hoefer 2010; Maudlin 2011; Myrvold 2016). These answers invoke the principle of indifference, the method of arbitrary functions, the notion of typicality, and appeal to objective randomness provided by quantum mechanical phenomena. I will find all of these answers wanting.
In the second part of the talk I will outline a new answer to our question. My analysis will be based on two components: 1) Pitowsky's (2012) result that the phase volume (Lebesgue measure) is a natural extension of the counting measure on finite sets; and 2) the Principle of the Common Cause, phrased in the form that if in a statistical ensemble two properties A and B are causally independent — that is, for any member of the ensemble the facts as to whether it possesses properties A and B have neither a direct nor a common causal connection — then the distribution of these two properties over the ensemble must be approximately statistically independent, with respect to the counting measure or its natural extension.
My account, which can easily be shown to extend to statistical mechanics, will have two main characteristics: 1) it essentially invokes unanalyzed causal terms; on the other hand, 2) nowhere does it make any reference to an irreducible notion of “probability.”
Anomalous averages, Bose-Einstein condensation and spontaneous symmetry breaking of continuous symmetries, revisited
ABSTRACT. The so-called spontaneous symmetry breaking (SSB) of continuous symmetries is a fundamental notion in quantum statistical physics [1-7], specifically in phase transition theory (PTT).
At finite volume, the breaking of a continuous symmetry (the U(1) symmetry group) is associated with infinitely many degenerate ground states connected to one another by unitary transformations. In this sense, these states are physically equivalent, having the same energy, and the ground state of the system is understood as a superposition of them. However, in the thermodynamic limit, these connections vanish, and an infinite collection of inequivalent ground states, orthogonal to each other, arises.
On the other hand, symmetries can be broken by small disturbances. Mathematically speaking, the disturbance, once chosen and with the parameters on which it depends held fixed, selects a unique ground state (vacuum) for the system.
In this sense, in the framework of the free Bose gas and of a superfluid model, N. N. Bogoliubov [6] eliminated the above-mentioned degeneracy by introducing a small term into the original energy operator, preserving self-adjointness but suppressing the symmetry corresponding to the total number conservation law. In this context, the limiting thermal averages defined by means of these perturbations of the original energy operators have been called Bogoliubov quasiaverages (QA) or anomalous averages (AA).
It must be taken into account that both Bose condensation, understood as macroscopic occupation of the ground state, and SSB of the continuous symmetry occur only in the thermodynamic limit, which is never reached in real physical systems. In this sense, the introduction of an external field does not by itself explain the breaking of the symmetry. Moreover, an underlying question is whether there is only one restricted class of perturbations, constituted by operators associated with the same sort of particles as those in the condensate, that is compatible with the chosen order parameter and with the existence of pure states and ODLRO (off-diagonal long-range order) [7].
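The order-of-limits issue behind the quasiaverage prescription can be conveyed by a purely classical toy sketch (our own illustration, not the Bose gas and not Bogoliubov's operator construction): a symmetric double-well "free energy" with a small pinning field h stands in for the symmetry-breaking perturbation, and a nonzero "anomalous average" survives only if the large-size limit is taken before h goes to zero.

import numpy as np

def avg_m(N: float, h: float, grid: int = 20001) -> float:
    # Average order parameter for a toy double-well free energy f(m) = -m^2/2 + m^4/4,
    # tilted by a small symmetry-breaking field h, at "system size" N.
    m = np.linspace(-1.5, 1.5, grid)
    f = -m**2 / 2 + m**4 / 4 - h * m
    w = np.exp(-N * (f - f.min()))          # Boltzmann-like weight, numerically stabilised
    return float((m * w).sum() / w.sum())   # uniform grid, so the spacing cancels

print(avg_m(N=100,  h=0.0))     # ~0 by symmetry, at any finite size with no field
print(avg_m(N=5000, h=0.01))    # ~1: large N with a small pinning field
print(avg_m(N=5000, h=0.001))   # still ~1: the "quasiaverage" persists as h shrinks after N has grown

The analogy is deliberately crude, but it isolates the point stressed above: at h = 0 the symmetric average vanishes for any finite system, so a nonzero order parameter emerges only when the infinite-size limit precedes the removal of the symmetry-breaking term.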
The scenario becomes even more complicated considering that all current experiments are carried out on finite systems of atoms (trapped Bose gases, for which the total number conservation law is preserved).
In this work, the aforementioned problems, their consequences for the fundamental principles of quantum PTT, and the viability of experimentally testing the QA approach will be discussed.
References
[1] Y. Nambu, Phys.Rev. 117, 648 (1960); Phys. Rev. Lett., 4, 380 (1960).
[2] J. Goldstone, Nuovo Cim. 19, 154-164 (1961)
[3] J. Goldstone, Phys.Rev. 127, 965-970 (1962)
[4] P. Higgs, Phys. Rev., 145, 1156 (1966).
[5] S. Weinberg, Phys.Rev.Lett., 29, 1698 (1972).
[6] N.N. Bogoliubov, Lectures on Quantum Statistics: Quasiaverages. Vol. 2 (Gordon and Breach, New York, 1970).
[7] W. F. Wreszinski, V. A. Zagrebnov, On ergodic states, spontaneous symmetry breaking and the Bogoliubov quasiaverages. HAL Id: hal-01342904 https://hal.archives-ouvertes.fr/hal-01342904 (2014)
Frege and Peano on axiomatisation and formalisation
ABSTRACT. It is commonplace in contemporary historical studies to distinguish two traditions in early mathematical logic: the algebra of logic tradition and the tradition pioneered by Frege. Although he never defended a logicist position, Peano is usually linked to the Fregean tradition. In this talk I shall question this association. Specifically, I shall study Frege’s and Peano’s conceptions of axiomatisation and formalisation and conclude that they developed different accounts that were, in some respects, irreconcilable.
Peano provided the first axiomatisation of arithmetic in [1889], where he distinguished the logical principles from the set-theoretic principles and the arithmetical axioms. In this regard, he departed from the algebra of logic tradition, which - in other respects - had influenced him. Most likely because of the development of his logicist project, Frege failed to acknowledge the importance of this move. At the same time, he reproached Peano for not providing a full calculus; Peano did not define any inference rule. This stands in stark contrast with Frege’s presentation of the concept-script in [1893]. According to Frege, Peano could not guarantee a fully rigorous treatment of arithmetic if he did not provide the means to formalise proofs. I argue that the omission of inference rules in Peano’s early works on mathematical logic is not the result of carelessness, but is due to his decision to conflate the conditional and the relation of logical consequence. This position echoes the practice of algebraic logicians such as Peirce or Schröder.
Frege strongly associated the notion of formalisation with the expression of content. He rejected the prospect of producing what he called an ‘abstract logic’. I argue that Frege’s notion of the formalisation of a scientific theory must not be understood as formalisation in the contemporary sense. As Carnap’s notes from Frege’s courses at Jena in 1913–1914 attest [Reck; Awodey, 2004], for Frege the formalisation of a theory implied the use of its specific basic principles and primitive symbols, whose meaning had to remain intact. In this sense, Frege intended to use the atomic formulas of a theory as building blocks and to complement them with the logical resources of the concept-script.
From the late 1890s, Peano developed a notion of formalisation that was in direct opposition to Frege’s. In [1901] Padoa, a disciple of Peano, described a formalised theory as a system whose primitive propositions are understood as conditions and whose primitive symbols do not have a specific meaning but are intended to admit multiple interpretations. The formalisation of a theory was thus not intended to preserve its content and express it in a rigorous way, as Frege defended. Peano aimed at answering metalogical questions, such as the independence of the axioms of a theory. In this context, his perspective can again be associated with the algebra of logic tradition.
REFERENCES
- Frege, G. (1893). Grundgesetze der Arithmetik. Begriffsschriftlich abgeleitet, vol. I. Jena: Hermann Pohle.
- Padoa, A. (1901). Essai d’une théorie algébrique des nombres entiers, précédé d’une introduction logique à une théorie déductive quelconque. In Bibliothèque du Congrès international de philosophie, Paris 1900. Vol. 3. Paris: Armand Colin, pp. 309–365.
- Peano, G. (1889). Arithmetices principia nova methodo exposita. Turin: Bocca.
- Reck, E. H.; Awodey, S. (Eds.) (2004). Frege’s Lectures on Logic: Carnap’s Student Notes, 1910-1914. Chicago: Open Court.
ABSTRACT. Giuseppe Peano’s axiomatizing effort in the Formulario went hand in hand with the design of an innovative mathematical notation that made it possible to write any formula without recourse to ordinary language. Alongside contemporaries such as Frege, Boole, Schröder and Peirce, Peano stressed the importance of a rigorous language for mathematics as an alternative to local, ambiguous and cumbersome formulations in ordinary language. His project was greatly influenced by Leibniz, as much in its encyclopedic scope as in its use of a universally readable ideography. Leibniz’ ambitious philosophical project also had a more modest linguistic counterpart that inspired attempts at creating an international auxiliary language (IAL), including Peano’s. Following Leibniz’ instructions for a Latin-based interlanguage, Peano created Latino sine flexione (LSF). With other mathematicians such as Louis Couturat and Léopold Léau, Peano advocated the worldwide adoption of an IAL. Not only did he use LSF in his own writings, including the fifth edition of the Formulario, but he also took a leading role in the IAL movement by setting up and presiding over the Academia pro Interlingua, where he promoted LSF as a solution to the problem of multilingualism in an age of competing nationalisms.
In its grammatical structure, LSF owes much to Leibniz’ project of a universal characteristic. Peano’s major intervention in the Latin language was the elimination of elements that he considered logically superfluous: inflection, grammatical gender, and redundant expression of number. He systematically replaced inflected forms of Latin words with their simplest form accompanied by separate prepositions and adverbs, with the aim of reaching a transparent syntax in which every word occurs only in the form found in the dictionary. For Peano, there is good reason to get rid of inflections in Latin, since they add to its difficulty and consequently make it unfit for international communication in the modern world, despite the familiarity of much of its vocabulary. Eliminating inflections, for Peano, leads to a “minimal grammar” found also in Chinese (a model for Leibniz, as well as for Frege’s Begriffsschrift), in Modern English (compared with its older forms), and in the symbolic language of modern mathematics.
In its analytic structure, LSF appears as a linguistic application of Peano’s ideographical endeavor that produced the Formulario. Referring to Max Müller and Michel Bréal, Peano used the distinction between logic and linguistics to question established linguistic forms and suggest more “rational” ones. Yet, as much as Peano owes to the idea of conceptual writing, formulated by Leibniz and carried further by Frege, other factors determined the emergence and principles of LSF; namely, advances in the historical and comparative linguistics of his time and – not least – the turn-of-the-century organization of scientists into a transnational community with the help of a growing number of international institutions and events.
Zuzana Haniková (Institute of Computer Science, Czech Academy of Sciences, Czechia)
On Vopěnka's ultrafinitism
ABSTRACT. It is perhaps unsurprising that the somewhat remote research area of ultrafinitism in the foundations of mathematics cautiously embraces the work of Petr Vopěnka on his Alternative Set Theory (AST) as a relevant effort. Membership in the "ultrafinitist club" is largely self-proclaimed, so one becomes an ultrafinitist by identifying as such, and Vopěnka's writings present convincing membership evidence, although he might have shied away from the label. In his own words, in designing the AST he attempted to put forward a theory of natural infinity (a task in which a sizeable group of coworkers was indispensable; a seminar on the AST was run in Prague for several years in the late 1970s).
Nowadays, the AST might be viewed as a toy alternative attempt at foundations of mathematics, undertaken within classical logic. The usual shortcut to getting a first idea of the AST points to a nonstandard model of $V_\omega$, with the natural numbers in the model interpreting the natural numbers of the AST, while its finite natural numbers are interpreted by the standard numbers of the model. This interpretation is in keeping with Vopěnka citing Robinson's nonstandard analysis as a source of inspiration. Even on an initial acquaintance, the AST is seen to be consistent relative to ZFC.
On reading further, within Vopěnka's texts one will encounter passages that subscribe directly to ultrafinitism through denying (not the existence, but) a finite status to very large natural numbers (denoted by numerals), while admitting that such considerations are classically inconsistent. Such remarks are uttered with the explanation that a reasoner in AST is not necessarily convinced by exceedingly long proofs. These read as the hallmark of mistrust of the ease with which classical mathematics embraces various notational and proof-theoretic shortcuts that divert attention from the actual proof and representation size.
Ours is a work in progress, with several aims. The first is a reconstruction, insofar as possible, of the ultrafinitist facet of Vopěnka's work in the context of other works with the same flavour (i.a., [3,4,6]). Another aim is to highlight the usefulness of the metaphor linking the dichotomy of standard vs. nonstandard natural numbers (which results from the usual interpretation of the AST) to the dichotomy of feasible vs. unfeasible ones (which can be viewed as a prime would-be application thereof). Moreover, we shall discuss the point, articulated beautifully in [2], that what ultrafinitism is still missing, e.g. when compared to intuitionism, is a proper logical setting. Although one can only guess what the theory of witnessed universes might have been, had it been developed, one can assume that, along with other such theories, its logic might depart from the usual, at least in rendering voluminous (i.e., in Vopěnka's terminology, "infinite") proofs irrelevant.
References:
[1] Holmes M.R. Alternative Axiomatic Set Theories. The Stanford Encyclopedia of Philosophy, 2017.
[2] Krishnaswami N. Is there any formal foundation to ultrafinitism?, https://mathoverflow.net/q/44232
[3] Nelson E. Predicative Arithmetic. Princeton, 1986.
[4] Parikh R. Existence and Feasibility in Arithmetic. JSL 36(3), 1971.
[5] Pudlák P., Sochor A. Models of the Alternative Set Theory. JSL 49(2), 1984.
[6] Sazonov V.Y. On feasible numbers. Logic and Computational Complexity, LNCS 960, 2005.
[7] Vopěnka P. Mathematics in the Alternative Set Theory. Teubner, Leipzig, 1979.
[8] Vopěnka P. Úvod do matematiky v alternatívnej teórii množín [An introduction to mathematics in the alternative set theory]. Alfa, Bratislava, 1989.
ABSTRACT. Truth is commonly required of a proposition for it to count as knowledge. On the other hand, the hypothetical character of scientific laws and the ubiquity of idealization and the ceteris paribus clause pose the problem of how science is knowledge. Thus Ichikawa and Steup (2012) claim that scientific knowledge consists in knowing the contents of theories rather than knowing some truths about the world. Others, e.g. Cartwright (1983), suggest that general scientific laws, being false, constitute an instrument of knowledge rather than knowledge itself. Contrary to that, Popper considers science the superior kind of knowledge. Even outdated science, one may add, radically differs in epistemic status from prejudice or mere error and may have quite an extensive scope of application.
To resolve this tension, I propose to adopt a version of contextualism that is in many respects close to that of Williams (1996). On this view, knowledge is a context-dependent notion, where a context is determined by certain presuppositions, often only tacit, that are accepted by entitlement. Some presuppositions of the past, like Newton’s absolute simultaneity, are no longer accepted. Still, Newton’s laws constitute knowledge in the context of his presuppositions and preserve a vast scope of application. Idealizations and the ceteris paribus clause also count as presuppositions in Stalnaker’s pragmatic sense. This explains how, in some contexts but not in others, one is entitled to ignore, e.g., friction.
The version of contextualism on offer departs in some significant respects from Williams’s. First, presuppositions themselves are not included in knowledge. Instead, they form what Wittgenstein calls, in the face of the ultimate groundlessness of our believing, “a (shifty) river-bed of thought”. Once it shifts, however, one can, in a novel and more comprehensive context, come to know the denial of some presuppositions of a former, less sophisticated context. Second, the truth requirement for knowledge is withdrawn. Instead, knowledge in a context is defined as belief that is superwarranted relative to that context, i.e. it is warranted without defeat at some stage of inquiry and would remain so at every successive stage of inquiry as long as the presuppositions that constitute the context are kept unchallenged.
Apart from explaining how science, including outdated science, is knowledge, the account on offer captures an aspect of the growth of knowledge that consists in falsifying some presuppositions. In a broader, epistemological perspective, it is also applicable to general skeptical problems, such as the question of the (im)plausibility of the brains-in-a-vat hypothesis.
References:
Cartwright, N. 1983 How the Laws of Physics Lie, Oxford: Clarendon Press.
Ichikawa, J., Steup, M. 2012 The Analysis of Knowledge in: Stanford Encyclopedia of Philosophy, Winter 2016 version (minor correction), https://plato.stanford.edu/archives/win2016/entries/knowledge-analysis/.
Popper, K.R. 1972 Objective Knowledge, Oxford: OUP.
Stalnaker, R. 2002 Common Ground, Linguistics and Philosophy 25: 701–721.
Williams, M. 1996 Unnatural Doubts, Princeton University Press.
Wittgenstein, L. 1969 On Certainty, ed. G.E.M. Anscombe and G.H. von Wright, trans. D. Paul and G.E.M. Anscombe. Oxford: Blackwell.
ABSTRACT. The reconstruction of scientific theories as complex [1], variable, dynamic and coordinated polysystems [2] has been supported by case studies of various actual theories [3].
In a scientific theory we distinguish subsystems with specific constitutive generative elements: names, languages, propositions, axioms, models, problems, operations, procedures, values, heuristics, approximations, applications, etc. As subsystems of the same theory, they are intimately interrelated, and a change in any element induces many changes both in its own subsystem and in other subsystems. Note that propositional and model-theoretic conceptions of a scientific theory consider its propositions and models, respectively, as the constitutive elements. The use of the informal language of commutative diagrams [4] allows one to separate and classify various types of interconnected amendments of theory structures.
Let α: X -> Y symbolize a transformation (in many cases it represents a relation) of one component X of a theory T into another component Y of T. For instance, if Y is a model M from T, then X can be a certain language L used to construct M.
Let µ be a certain homogeneous transformation X -> X*, such that there is the inverse transformation µ⁻¹. Since all elements in question belong to T, µ induces the homogeneous transformation π: Y -> Y*. The set α, µ, π and ρ is commutative if the transformation ρ: X* -> Y* is such that ρ = µ⁻¹#α#π, where # is the composition of transformations.
In fact, many real non-trivial transformations of theory elements are commutative. Let us, for example, reformulate some problem (P -> P*) in terms of a new model (M -> M*). We have here the set of four transformations 1) α: P -> M; 2) µ: P -> P*; 3) π: M -> M*; and 4) ρ: P* -> M*, which will be commutative if ρ = µ⁻¹#α#π.
A commutative transformation is T-internal if all its constituents belong to the same theory (e.g., Le Verrier's resolution of the problem of the systematic discrepancies between Uranus's observed orbit and the one calculated from Newtonian classical mechanics, through its reformulation in terms of a new model of the Solar system that includes Neptune), and T-external if some constituents belong to different theories (e.g., Bohr's resolution of the problem of atomic instability in terms of his model of stationary orbits of the atom).
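A minimal sketch (our own illustration: the component types, example maps and names below are hypothetical stand-ins, not taken from the cited case studies) of how the commutativity condition ρ = µ⁻¹#α#π can be checked when the transformations are modelled as ordinary functions, with # read as left-to-right composition:

def compose(*fs):
    # Left-to-right composition: compose(f, g, h)(x) == h(g(f(x))).
    def composite(x):
        for f in fs:
            x = f(x)
        return x
    return composite

# Hypothetical stand-ins: problems as strings, models as dictionaries.
alpha  = lambda problem: {"model_of": problem}                      # alpha: P -> M
mu     = lambda problem: problem + " (reformulated)"                # mu: P -> P*
mu_inv = lambda problem: problem.replace(" (reformulated)", "")     # mu^{-1}: P* -> P
pi     = lambda model: {**model, "revised": True}                   # pi: M -> M*

rho = compose(mu_inv, alpha, pi)                                    # rho: P* -> M*

p_star = mu("orbit discrepancy")
print(rho(p_star))   # {'model_of': 'orbit discrepancy', 'revised': True}

Defining ρ as the composite makes the four-element set commutative by construction; checking a given, independently specified ρ against µ⁻¹#α#π on sample inputs works in the same way.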
1. M.Burgin, V.Kuznetsov. Scientific problems and questions from a logical point of view // Synthese, 1994, 100, 1: 1-28.
2. A. Gabovich, V.Kuznetsov. Problems as internal structures of scientific knowledge systems // Philosophical Dialogs' 2015. Kyiv: Institute of Philosophy, 2015: 132-154 (In Ukrainian).
3. A.Gabovich, V.Kuznetsov. What do we mean when using the acronym 'BCS'? The Bardeen–Cooper–Schrieffer theory of superconductivity // Eur. J. Phys., 2013, 34, 2: 371-382.
4. V. Kuznetsov. The triplet modeling of concept connections. In A.Rojszczak, J.Cachro and G.Kurczewski (eds). Philosophical Dimensions of Logic and Science. Selected Contributed Papers from the Eleventh International Congress of Logic, Methodology, and Philosophy of Science. Dordrecht: Kluwer, 2003: 317-330.
Don Faust (Northern Michigan University, United States)
Predication elaboration: Providing further explication of the concept of negation
ABSTRACT. Full Opposition Symmetric Evidence Logic (FOS-EL) further explicates the concept of negation with machinery for positivity and negativity, extending the explication given previously ([1997], [2000], and [2008]). FOS-EL can thereby address, through the analysis of axiomatizable extensions that differentially realize domain-dependent epistemological aspects of the concept of negation, questions related to the negative facts long considered by Russell and others.
Evidence Logic (EL), developed and studied earlier [2000], extends Classical Logic by providing an Evidence Space E of any size n > 1, together with, for any i-ary predicate symbol P and any e in E, confirmatory and refutatory evidential predications Pc:e and Pr:e respectively. Extending EL, FOS-EL is further equipped with the elaborating poseme and negeme evidential predications (posP)c:e, (posP)r:e, (negP)c:e, and (negP)r:e. Semantically, for any model with universe A, each of these evidential predications is a partial map from A^i to E.
Soundness and Completeness Theorems for both EL and FOS-EL are easily obtained through interpretations in carefully constructed related languages of Classical Logic (for these proofs, see [2000]).
The full opposition symmetry to be found in the further explication of the concept of negation in FOS-EL can be elucidated as follows. Consider the finite extension of FOS-EL with axioms which assert that, for any evidence value e in E,
(posP)c: e iff (negP)r: e
and
(negP)c: e iff (posP)r: e.
In this extension of FOS-EL, evidence which confirms posP refutes negP (and conversely), and evidence which confirms negP refutes posP (and conversely). Indeed, this extension of FOS-EL asserts one realization of ‘full opposition symmetry’: a full symmetry between the oppositional poseme and negeme predications posP and negP.
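A small computational sketch (our own toy encoding for illustration, not the paper's official semantics) of how the two axioms above can be read as a pointwise constraint on a partial evidential assignment:

# Toy encoding: an evidential assignment maps (polarity, mode, argument tuple) to an evidence
# value e in E, with polarity 'pos'/'neg' and mode 'c' (confirmatory) / 'r' (refutatory).
E = {1, 2, 3}   # an evidence space of size n = 3 > 1

sample_assignment = {
    ("pos", "c", ("a",)): 2,
    ("neg", "r", ("a",)): 2,   # mirrors the poseme confirmation, as the axioms require
    ("neg", "c", ("b",)): 1,
    ("pos", "r", ("b",)): 1,
}

def fully_opposition_symmetric(assignment) -> bool:
    # Check (posP)c:e iff (negP)r:e and (negP)c:e iff (posP)r:e, argument tuple by argument tuple.
    mirror = {("pos", "c"): ("neg", "r"), ("neg", "r"): ("pos", "c"),
              ("neg", "c"): ("pos", "r"), ("pos", "r"): ("neg", "c")}
    return all(assignment.get(mirror[(pol, mode)] + (args,)) == e
               for (pol, mode, args), e in assignment.items())

assert all(e in E for e in sample_assignment.values())
print(fully_opposition_symmetric(sample_assignment))   # True for this sample assignment

On this reading, an evidence value that confirms the poseme automatically counts as refuting the negeme for the same arguments, which is exactly the symmetry asserted by the displayed axioms.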
This perspective might yield helpful insights regarding the long contemplated “negative facts” considered by Russell and others: there need be no negative facts out there in the (as yet incompletely understood) real world; a negative fact is just a linguistic construct we ourselves make, confirmed by what refutes its corresponding positive fact and refuted by what confirms its corresponding positive fact.
REFERENCES
Faust, Don and Judith Puncochar, “How Does “Collaboration” Occur at All? Remarks on Epistemological Issues Related to Understanding / Working with THE OTHER”, DIALOGUE AND UNIVERSALISM 26 (2016), 137-144.
Faust, Don, “On the Structure of Evidential Gluts and Gaps”, pp. 189-213 in HANDBOOK OF PARACONSISTENCY (eds. Jean-Yves Beziau, Walter Carnielli, and Dov Gabbay), 2008.
_________, “The Concept of Evidence”, INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS 15 (2000), 477-493.
_________, “Conflict without Contradiction: Noncontradiction as a Scientific Modus Operandi”, PROCEEDINGS OF THE 20TH WORLD CONGRESS OF PHILOSOPHY, at www.bu.edu/wcp/Papers/Logi/LogiFaus.htm, 1999.
_________, “The Concept of Negation”, LOGIC AND LOGICAL PHILOSOPHY 5 (1997), 35-48.
__________, "The Boolean Algebra of Formulas of First-Order Logic", ANNALS OF MATHEMATICAL LOGIC (now ANNALS OF PURE AND APPLIED LOGIC) 23 (1982), 27-53. (MR84f:03007)
Russell, Bertrand, HUMAN KNOWLEDGE: ITS SCOPE AND LIMITS, Simon and Schuster, 1948.
______________, “Vagueness”, AUSTRALASIAN JOURNAL OF PSYCHOLOGY AND PHILOSOPHY 1(2) (1923), 84-92.
ABSTRACT. Consider the following two facts about classical first order logic (FOL): first, due to the Löwenheim-Skolem theorem, a countable first order theory with an infinite model cannot determine it uniquely up to isomorphism (thus, it allows for models of different infinite cardinalities) and, secondly, as proved in [Carnap 1943], the formal rules and axioms for the propositional operators and quantifiers do not uniquely determine their meanings (thus, there are models of FOL in which the propositional operators and quantifiers have the normal meanings and models in which they have a non-normal meaning; for instance, in a non-normal model for FOL, a sentence and its negation are both false, while their disjunction is true, and a quantified sentence “(∀x)Fx” is interpreted as “every individual is F, and b is G”, where “b” is an individual constant). Each of these two facts shows that FOL is non-categorical. Nevertheless, since the operators and quantifiers may have a normal meaning in all non-isomorphic models that are due to the Löwenheim-Skolem theorem, while they have different meanings in the non-isomorphic models associated with Carnap’s results, the two notions of categoricity should not be confused with one another. But more importantly, what is the relation between these two distinct notions of categoricity?
This question becomes even more interesting if we consider a recent argument advanced by Vann McGee [2000, 2015] for the idea that if the formal natural deduction rules of FOL are taken to be open-ended (i.e., they continue to hold even if the language expands), then they uniquely determine the meanings of the logical terms that they introduce and, consequently, the universal quantifier should be taken as ranging over absolutely everything, rather than over a subset of the universal set. This would then seem to imply, against the Löwenheim-Skolem theorem, that all models of a countable first order theory with an infinite model have the same cardinality – that of the universal set.
I argue in this paper that if McGee’s open-endedness requirement succeeds in uniquely determining the meanings of the quantifiers, then it simply eliminates the non-normal models discovered by Carnap, without succeeding in uniquely determining the range of the quantifiers in a countable first order theory with an infinite model. The argument goes like this. Let us call LS-categoricity the property that FOL lacks due to the Löwenheim-Skolem theorem, and C-categoricity the property that FOL lacks due to Carnap’s results. I argue that even if FOL were C-categorical, it could still be non-LS-categorical, because the universal quantifier can have the same unique meaning in all the non-isomorphic models due to the Löwenheim-Skolem theorem. In addition, I argue that open-ended FOL is not C-categorical, for McGee’s open-endedness requirement does not succeed in uniquely determining the meanings of the propositional operators and quantifiers. The reason is that his crucial assumption, that for any class of models there is a sentence that is true just in those models, does not hold for the entire class of models of FOL. Moreover, as Carnap showed, transfinite rules for the quantifiers are needed to eliminate the non-normal models, since not even Hilbert’s ε-operator is sufficient for this. The question of the relation between the open-ended rules for the universal quantifier and the transfinite ones will also be addressed in the paper.
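The flavour of the C-categoricity failure can be conveyed by a deliberately crude check (our own toy illustration, not a reconstruction of Carnap's actual non-normal interpretations): a valuation that designates every sentence respects standard inference rules, yet gives negation a non-normal meaning.

def trivial_valuation(sentence: str) -> bool:
    # Designate (count as true) every sentence whatsoever.
    return True

def rule_respected(premises, conclusion, val) -> bool:
    # A rule instance is respected if designated premises yield a designated conclusion.
    return (not all(val(p) for p in premises)) or val(conclusion)

# Schematic rule instances over sentence letters and compounds (illustrative only).
instances = [
    (["A", "A -> B"], "B"),    # modus ponens
    (["A", "B"], "A & B"),     # conjunction introduction
    (["A & B"], "A"),          # conjunction elimination
]

print(all(rule_respected(ps, c, trivial_valuation) for ps, c in instances))   # True
print(trivial_valuation("A"), trivial_valuation("~A"))   # True True: negation is not "normal"

Since every sentence is designated, any truth-preservation requirement is met vacuously; this is why purely formal rules, unless supplemented along the lines McGee or Carnap propose, cannot by themselves rule such interpretations out.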
References
Carnap, Rudolf. 1943. Formalization of Logic, Cambridge, Mass., Harvard University Press.
McGee, Vann. 2015. ‘The Categoricity of Logic’, in: C. R. Caret and O. T. Hjortland, eds. Foundations of Logical Consequence, Oxford University Press
McGee, Vann. 2000, ‘Everything’, in: G. Sher and R. Tieszen, eds. Between Logic and Intuition, Cambridge: Cambridge University Press.
ABSTRACT. Results establishing the unique existence of a thing that satisfies certain properties are essential for the legitimate use of singular terms. A unique existence statement concerning a property F typically consists of two parts: An existence statement E(F) to the effect that there is at least one thing which is F; and a uniqueness statement U(F) that there cannot be two different things which are both F’s.
With respect to classical first-order logic, uniqueness and existence are considered to be logically independent. This result generalizes beyond pure logic: let T be a theory and F a property definable in T. Then, if E(F) can be deduced from U(F) relative to T, it can already be deduced from T alone, without assuming U(F). That is, even if one provides a proof of existence using uniqueness, this extra assumption is dispensable. Similarly for U(F). One may appeal to these observations to justify using either of the uniqueness or existence statements while proving the other.
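A compact way to see why the uniqueness assumption is dispensable in such existence proofs (a standard model-theoretic sketch supplied here for convenience; the notation follows the abstract, and soundness and completeness of the underlying logic are assumed):

% Sketch: if T together with U(F) proves E(F), then T alone proves E(F).
\begin{itemize}
  \item Suppose $T \not\vdash E(F)$. By completeness there is a model $\mathcal{M} \models T$ with
        $\mathcal{M} \not\models E(F)$, i.e.\ nothing in $\mathcal{M}$ is an $F$.
  \item Then $U(F) \equiv \forall x\,\forall y\,(Fx \wedge Fy \rightarrow x = y)$ holds vacuously
        in $\mathcal{M}$, so $\mathcal{M} \models T \cup \{U(F)\}$.
  \item By soundness, $T \cup \{U(F)\} \not\vdash E(F)$. Contrapositively:
        if $T \cup \{U(F)\} \vdash E(F)$, then $T \vdash E(F)$.
\end{itemize}

The same argument applies relative to any theory T, which is the generalization beyond pure logic mentioned above.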
In almost all examples of unique existence proofs in mathematics, neither the uniqueness nor the existence statement is used to prove the other. However, since there are significant exceptions, the logical problems related to unique existence proofs should also be of mathematical interest. For there might be cases where one cannot legitimately or efficiently use a previously proven uniqueness proposition to prove the existence statement.
Theorems that state the unique existence of functions provide good examples for deciding whether we should restrict the positive application of a uniqueness statement while proving the corresponding existence statement. For a proof of a theorem of this sort, one needs to show that a function satisfying the required property exists and that it is unique (there cannot be more than one such function). Moreover, the existence part of the proof typically involves, in turn, many unique existence results: being a function requires, for every object in the domain, the existence of a unique object to be counted as the value of the function at that object. This is where a question arises: assuming that the uniqueness statement for the function is known, can it be applied while constructing the pairs forming the function (that is, its graph)? An affirmative answer means approving the use of the uniqueness part of the theorem while proving the existence part, though indirectly. I will argue for the view that, allowing for sets with non-existent elements, Bencivenga’s system of free set theory (FST), developed in [1] and [2], implies that we should restrict the use of uniqueness results within an attempt to prove the related existence statements for functions.
References
[1] Bencivenga, E. (1976) “Set theory and free logic”, Journal of Philosophical Logic 5(1): 1-15
[2] Bencivenga, E. (1977) “Are arithmetical truths analytic? New results in free set theory”, Journal of Philosophical Logic 6(1): 319-330