In this proposed symposium we undertake a historical–philosophical examination of chemical ontology. Philosophers thinking about the metaphysics of science would do well to scrutinize carefully the history of the concepts involved. The idea of “cutting nature at its joints” does not offer much practical help to scientists, who have to seek and craft taxonomic and ontological notions according to the usual messy procedures of scientific investigation. And we philosophers of science need to understand the nature of such procedures. In this session we showcase various attempts to do such historical–philosophical work, with a focus on chemistry.
Robin Hendry will provide a general framing of the issue. The International Union of Pure and Applied Chemistry (IUPAC) has developed different systems of nomenclature for inorganic and organic substances. These systems reflect both chemistry's historical development and particular metaphysical views about the reality of chemical substances. Looking back into the history, we recognize the contingent decisions taken by past chemists that led to our present conceptions, and the possible paths-not-taken that might have led to different ontological conceptions. Such decisions were, and will continue to be, influenced by various types of forces that shape science. If the history of chemistry is a garden of forking paths, then so is the metaphysics of chemistry.
This presentation will be followed by three concrete studies. Marina Paola Banchetti-Robino will discuss the shift from vitalism to mechanicism that took place in early modern investigations of matter. This was a gradual and complex process, with corpuscularianism as an important commonality shared by the competing perspectives. She argues that aspects of vitalism and mechanicism co-existed in interesting ways in the chemical ontology of the early modern period, and that the gradual demise of vitalism resulted not from reductionism but from a physicalistic and naturalistic rationalization of chemical qualities.
Sarah Hijmans will address the history of the concept of chemical element. She starts by noting that there are two IUPAC definitions that loosely correspond to Lavoisier’s operational concept and Mendeleev’s more metaphysical concept. Little has been said about the evolution of the concept of element between the times of these two great chemists. She argues that the change in the conception of the element was part of a broader evolution of chemical practice. A view very similar to Mendeleev’s was already present in early 19th-century chemical atomism, and developed in a rather continuous way through the century.
Karoliina Pulkkinen will examine the history of the late 19th-century attempts to find periodic regularities among the chemical elements. While Meyer considered it likely that all elements were composed of the same primordial matter, Mendeleev saw each element as a distinct, individual, autonomous entity and refrained from making representations of periodicity that suggested otherwise. Following Andrea Woody’s discussion of the law of periodicity as a theoretical practice, this paper explores how Meyer’s and Mendeleev’s ontological views on primordial matter shaped their ideas on how to represent periodicity.
The history of science and the metaphysics of chemistry
ABSTRACT. Any scientific discipline is shaped by its history, by the people within it and by the cultures within which they work. But it is also shaped by the world it investigates: the things and processes it studies, and the ways in which it studies them. The International Union of Pure and Applied Chemistry (IUPAC) has developed different systems of nomenclature for inorganic and organic substances, based systematically on their structure at the molecular scale. In different ways these systems reflect both chemistry’s historical development and particular metaphysical views about the reality of chemical substances. Thus, for instance, IUPAC names many inorganic substances on the basis of a system which is the recognisable descendant of the scheme of binomial nomenclature proposed by Antoine Lavoisier and his associates in the 1780s as part of their anti-phlogistic campaign. IUPAC’s nomenclature for organic substances is based on a theory of structure that was developed in the 1860s and 1870s to provide an account of various kinds of isomerism. Both of these were reforming developments: attempts to introduce order, clarity and precision into an otherwise chaotic and confused scene, based on a particular foundational conception of the field (or rather sub-field). But order, clarity and precision may come at a cost: by tidying things up in one way, laying bare one set of patterns and structures, chemists might have obscured or even buried other patterns and structures.
Suppose that one is primarily engaged in developing an account of what the world is like, according to chemistry, in the respects in which chemistry studies it. One might start with modern chemistry, studying its currently accepted theories and the implicit assumptions underlying its practices, and think about how the world would be, in the respects in which chemistry studies it, if those theories and assumptions were broadly true. In this kind of project the particular metaphysical views about the reality of chemical substances that underlie modern chemistry are of central interest, as is the story of how modern chemistry came to be the way it is. That story also includes the options not taken up: alternative systems of nomenclature based on different ways of thinking about chemical reality. That story is an indispensable part of understanding, in both historical and epistemic terms, why modern chemistry is the way it is.
How do we know that modern chemistry will present us with a coherent set of metaphysical views about the reality of chemical substances, something that can be regarded as, or can perhaps be shaped into, a metaphysics of chemistry? Of course, we don’t. Chemistry and its history might present two kinds of difficulty: metaphysical disunity in modern chemistry, and historical options not taken up, but which demand to be taken seriously. There are different ways to respond to these difficulties, not all of them being pluralist (for disunity may reflect disagreement). And we don’t know a priori that modern chemistry cannot be understood on the basis of a coherent metaphysical view of chemical reality.
Any philosopher who wishes to bring their study of science into contact with metaphysics must acknowledge the different forces that have shaped science as we find it, but acknowledging them brings with it the recognition that there are different ways to proceed. If the history of chemistry is a garden of forking paths, then so is the metaphysics of chemistry.
Early Modern Chemical Ontologies and the Shift from Vitalism to Mechanicism
ABSTRACT. From a philosophical point of view, one of the more significant changes that occurred in chemical philosophy from the late 16th to the 17th century is the shift from the vitalistic metaphysics that had dominated Renaissance natural philosophy to the mechanistic theory of matter championed by the Cartesians and Newtonians.
The shift away from vitalism and toward mechanicism was gradual rather than abrupt, and aspects of vitalism and of mechanicism coexisted in interesting ways within the chemical ontologies of many early modern chymists. In spite of the tensions between these two opposing metaphysical paradigms, one important thread that connects early modern chymical theories, whether vitalistic or mechanistic, is their ontological commitment to corpuscular theories of matter.
The historical process whereby ancient Democritean atomism was revived in the 16th century is quite complex, but it would be a mistake to assume that particulate theories of matter necessarily imply a commitment to physicalism and mechanicism. In fact, although the atomism of such natural philosophers as Gassendi and Charleton was indeed mechanistic, one finds many examples of medieval, Renaissance, and early modern thinkers who embraced vitalistic metaphysics while endorsing a corpuscularian theory of matter.
As it happens, there is strong evidence to show that, for much of the 17th century, chemical philosophers adopted a view of matter that was both ontologically corpuscularian and metaphysically vitalistic. In other words, these chemical philosophers adhered to a particulate matter theory while also embracing the idea that chemical qualities and operations involved the action of vital spirits and ferments.
This essay will examine these ideas by focusing on some of the more significant transitional chemical philosophies of the 16th and 17th centuries, in order to establish how chymists at this time adhered to complex corpuscularian ontologies that could not be subsumed under either a purely vitalistic or a purely mechanistic metaphysical framework.
To this end, I will focus on the chemical philosophies of Jan Baptista van Helmont, Daniel Sennert, Sebastian Basso, and Pierre Gassendi and the contributions that each of these important figures made to the subtle and gradual shift from vitalism to mechanicism.
I also hope to show that the demise of vitalistic metaphysics did not result from the victory of reductionistic mechanicism but, rather, from the physicalistic and naturalistic rationalization of chemical qualities and processes that opened the door for Lavoisier to articulate his quantitative and operational conception of simple substances.
For scientists and rational thinkers, the increasing acceptance of positions that constitute outright denial of established scientific consensus is disconcerting. In recent years, science denial movements have become more vocal and widespread, from climate change deniers and vaccination opponents to politicians whose statements openly contradict established facts. The denial of (scientific) facts used to be confined to the fringes of our societies, but it has now grown to have significant policy effects, with long-term consequences for all people and the entire globe. Both logic and philosophy of science can contribute to our understanding of this phenomenon and possibly show paths to react to it and deal with it.
In this symposium, representatives of the International Science Council, the global umbrella organisation for all of the natural and social sciences, will engage with logicians and philosophers of science and discuss both the philosophical theories underlying the phenomenon of denial of facts and their potential consequences for science policy makers and other stakeholders.
Daya Reddy (International Science Council, South Africa)
Fake news, pseudoscience, and public engagement
ABSTRACT. In this presentation I plan to consider issues that lie at the intersection of fake news and pseudoscience, and their implications for the science community and broader society.
The term “fake news” is understood in this context to refer to information that is deliberately fabricated, and often distributed in ways that mimic the formats of news media, thus lending it, at least superficially, a semblance of credibility. Internet platforms make it possible for fake news items to be spread rapidly, with numbers of recipients or readers increased by orders of magnitude.
The dissemination of pseudoscientific arguments might be regarded as a subset of fake news. Such arguments are characterized by a lack of supporting evidence, erroneous reasoning, and a general incompatibility with the scientific method.
A key question confronting scientific organizations, among others, concerns the development of interventions aimed at combating activities that undermine the scientific consensus. Such interventions are most effective when multi-dimensional, as is evident from a consideration of the relevant constituencies: from major social media platforms, to the scientific community, to the general public. Relevant aspects, which vary according to the constituency, range from the urgent need to build trust between scientists and the public to the development of critical reading skills from the school level up. More broadly, the presentation will reexamine, in the context described, the role of and approaches to public engagement with science.
Unwitting Complicity: When Science Communication Breeds Science Denialism
ABSTRACT. The problem of science denialism, in recent years, is widely thought to have taken on a new sociopolitical urgency. While decision-makers have always been highly selective in how and when to defer to experts (and to whom), overt denial of scientific facts has moved from being a fringe phenomenon to being a determining factor in national policies (at least in some countries), e.g. concerning climate change and healthcare.
Certain segments of the public, too, seem to be susceptible to the allure of science denialism. Yet not all scientific topics seem to be equally affected by denialist movements. Instead -- similar to other pathologies of public communication, such as 'fake news' -- science denialism seems to find an audience especially amongst those who consider certain scientific facts a threat to their deeply held convictions or self-image.
Dismissing such scientific findings, then, may be an (epistemologically flawed) attempt at reducing cognitive dissonance. Merely reiterating the scientific facts in the face of such attitudes is unlikely to change people's minds and may even backfire. In order for science denialists to become willing to consider scientific evidence afresh, trust needs to be regained -- even where it was wrongly withheld.
Certain common modes of science communication, I argue, fail to do just that, and may even worsen overall attitudes towards science-at-large. In this paper, I offer a tentative taxonomy of some common 'mistakes' (for want of a better term) in the communication of science and technology, which -- against the intentions of their communicators -- tend to elicit science-denialist attitudes.
ABSTRACT. Philosophy is often perceived as an unworldly discipline with no other practical consequences than the positive consequences of clarity of thought. This does not seem to be quite true. Some of the most important thought patterns of science denialism are based on the methodology of radical doubt that was developed in philosophical scepticism. Furthermore, science denialism depends on science exceptionalism, i.e. the idea that the epistemological foundations of science are different from those of our other forms of knowledge. Science exceptionalism has been an implicit assumption in much of philosophy of science. Based on an analysis of the philosophical roots of science denialism, we will discuss what philosophers can and should do to defend science against the current onslaught of science denialism and other forms of pseudoscience.
The philosophy of mathematics has experienced a very significant resurgence of activity during the last 20 years, much of it falling under the widely used label “philosophy of mathematical practice.” This is a general term for a gamut of approaches which can also include interdisciplinary work. In 2009 the Association for the Philosophy of Mathematical Practice (APMP) was founded (for more information, see http://philmathpractice.org/). Its members promote a broad, outward-looking approach to the philosophy of mathematics, which engages with mathematics in practice, including issues in the history of mathematics, the applications of mathematics, and cognitive science.
In this symposium, we aim to group twelve submissions falling under the scope of APMP. The different contributions put into focus different aspects of the philosophy of mathematical practice, both in terms of topics and of methods, and by grouping them together we aim to promote dialogue between them. We include studies of a wide variety of issues concerned with the way mathematics is done, evaluated, and applied, and in connection therewith, with historical episodes or traditions, applications, educational problems, cognitive questions, etc.
APMP aims to become a common forum that will stimulate research in the philosophy of mathematics related to mathematical activity, past and present. It also aims to reach out to the wider community of philosophers of science and to stimulate renewed attention to the very significant, and philosophically challenging, interactions between mathematics and science. It is therefore only natural that a symposium is being submitted to this Congress on behalf of APMP. We asked the members of APMP to submit proposals for taking part in this meeting and made an appropriate selection of submissions so as to shape a one-day program. The aim of the meeting is to manifest the presence and activity of APMP within the larger community of philosophers of science and logicians. In order to reach this aim we have opted for the format of twelve presentations that showcase the diversity of philosophical work done under the umbrella of APMP.
Heterogeneous mathematical practices: complementing or translation?
ABSTRACT. Looking at how braid theory was researched between 1925 and the 1950s, one may say that there was a growing tendency towards formalization as well as towards separating the topological definition of the braid group from the algebraic one. This resulted in a distorted narrative in which diagrams and diagrammatic reasoning had at best an illustrative role (presenting visually known results) but were not prompting any discoveries, that is, were not considered as epistemological tools. However, a closer look reveals that between the 1920s and the 1940s these diagrams had a dual role (see Friedman 2018), being both illustrative and epistemological; the two aspects could hardly be separated, and both operated together with algebraic-symbolical reasoning.
A similar situation occurs when one examines the research on complex curves during the last quarter of the 19th century and the first quarter of the 20th century. As it was difficult to visualize these curves (being naturally embedded in a 4-dimensional real space), different methods were used to help the student as well as the researcher to “see” them: from constructing three-dimensional models out of different materials and coloring these models to visualize the fourth dimension, to sketching various diagrams which illustrated other aspects of these curves. These visual and material aspects had, as in the case of braid theory, several roles at the same time: they were not only illustrating already known properties but also giving rise to new discoveries, acting as if they had their own reasoning.
Taking these two case studies into account, the question that stands at the center of my paper is how this variety of mathematical practices is to be analyzed together: how do these heterogeneous practices – symbolical, diagrammatical and material – interact? Do they complement each other, or do they prompt a concept of proof as a hybrid, in which several practices are interwoven? Or may one practice be translated into another? Focusing on the concept of “translation” in the broader sense of the word, can one characterize the interaction between these practices and its development as “translation” between practices? More concretely, I propose that insights from translation studies may be employed – such as “translation up” and “down” (Bellos 2011) – to explore the relationship between these different practices. That is, as argued by David Bellos, translation can be seen as an expression of a specific set of cultural values, but it can also seek actively to hide its nature as a translation, as something foreign. The question that arises is whether “translation” from one mathematical practice to another (diagrammatical to symbolical, material to diagrammatical, etc.) can be (or was) seen as a “translation up”, a “translation down”, or just as non-functioning.
Bibliography
Bellos, David (2011), Is That a Fish in Your Ear?: Translation and the Meaning of Everything, New York: Faber and Faber.
Friedman, Michael (2018), “Mathematical Formalization and Diagrammatic Reasoning: The Case Study of the Braid Group between 1925 and 1950”, Journal of the British Society for the History of Mathematics. https://www.tandfonline.com/doi/full/10.1080/17498430.2018.1533298
09:30
Bernd Buldt (Purdue University Fort Wayne, United States)
Abstraction by Parametrization and Embedding. A contribution to concept formation in modern and contemporary mathematics
ABSTRACT. The traditional approach to concept formation and definition via abstraction presupposes an Aristotelian ontology and its corresponding hierarchy, according to which “definitio fit per genus proximum et differentiam specificam.” According to this approach, abstraction is tantamount to removing properties and making the corresponding concept less rich: the more abstract a concept is, the less content it has. The traditional approach to abstraction and definition does not, however, provide an adequate model for concept formation and definition in mathematics.
What we need instead of the traditional picture is an account of concept formation and definition that is (1) true to mathematical practice; (2) true to the mathematical experience; and (3) compatible with insights from cognitive science. We take this to mean in particular that any such account should be informed by historical case studies (to satisfy (1)) and explain why and how abstract concepts are oftentimes perceived as more powerful and richer, not poorer, in content (in order to meet (2)). Requirement (3) needs to be in place to keep the analysis scientifically sound.
Recent accounts of abstraction in mathematics approach the topic by rehashing the development of modern mathematics since the 19th century and, consequently, emphasize aspects such as algebra (see, e.g., [2]), set theory (see, e.g., [3]), and the rise of category theory (see, e.g., [5], [6]), or link the development in mathematics to broader cultural shifts (see [4]). These studies meet requirements (1) and (2) to a certain extent. This paper adds to the existing literature by homing in on a topic that lies in the intersection of (1) and (2), namely, the question why abstract concepts are perceived as more powerful and richer, not poorer, in content. It does so not by tracing any historical developments but by using a number of selected case studies to identify and then discuss various techniques for abstraction that have so far not received proper attention. An account of requirement (3) was given in [1].
References
[1] Removed for blind review.
[2] Corry, Leo. Modern Algebra and the Rise of Mathematical Structures (= Science Networks – Historical Studies; 17), Basel: Birkhauser (1996), 299–322; 2nd rev. ed. 2006.
[3] Ferreirós, José. Labyrinth of Thought. A History of Set Theory and Its Role in Modern Mathematics (= Science Networks – Historical Studies; 23), Basel: Birkhäuser (1999), 299–322; 2nd rev. ed. 2007.
[4] Gray, Jeremy. Plato’s Ghost. The Modernist Transformation of Mathematics, Princeton: Princeton UP (2008).
[5] Marquis, Jean-Pierre. “Mathematical Abstraction, Conceptual Variation and Identity,” in: Schroeder-Heister, Peter et al., Logic, Methodology and Philosophy of Science. Proceedings of the 14th International Congress (Nancy). Logic and Science Facing the New Technologies, London: College Publications (2014), 299–322.
[6] Marquis, Jean-Pierre. “Stairway to Heaven: The Abstract Method and Levels of Abstraction in Mathematics,” in: The Mathematical Intelligencer 38:3 (2016), 41–51.
10:00
Andrew Aberdein (Florida Institute of Technology, United States)
Virtues, arguments, and mathematical practice
ABSTRACT. Several authors have proposed argumentation theory as a methodology for the study of mathematical practice [2]. Formal logic serves the traditional purposes of philosophy of mathematics very well. However, the philosophy of mathematical practice is concerned not just with formal derivation but with the social processes whereby mathematicians gain assent for their conjectures. Since formal logic is less well-adapted to the analysis of arguments in this wider sense, it is natural to look beyond it to argumentation theory, a discipline concerned with the analysis of natural language argument.
Several authors have proposed virtue theory as an approach to argumentation theory [1]. Virtue theories of argument shift the focus away from arguments as abstractions onto the interpersonal nature of argumentation, stressing the importance of arguers, respondents, and audiences, and especially the character of these participants.
Despite some overlap amongst their advocates, these two trends have never been addressed together. In doing so, it is natural to ask if their conjunction entails a virtue theoretic approach to mathematical practice: must the virtue theorist of argument also be a virtue theorist of mathematical practice? A negative answer to this question is not impossible. It could be held that those aspects of mathematical practice that lend themselves best to analysis in terms of argument do not correspond to features of argumentation theory where a virtue approach is of most value. In particular, some virtue theorists of argument deny that theirs is an all-embracing account, insisting that some issues, notably the appraisal of arguments, must be handed over to another theory [3].
Nonetheless, this paper defends a virtue argumentation theory of mathematical practice. It does so on two grounds. Firstly, there are significant but neglected areas of both argumentation theory and the study of mathematical practice where a shared virtue approach is potentially salutary. For example, conventional approaches in each discipline pay little attention to the contribution the respective practice makes to human flourishing [4]. Secondly, mathematical practice is potentially a valuable testbed for the ambitious varieties of virtue argumentation theory. Virtue accounts have already been proposed for aspects of mathematical practice corresponding to argument appraisal, such as the social acceptance of proofs [5]. The success of such accounts would suggest that virtue approaches can be of comparable utility within argumentation in general.
[1] Aberdein, A. and Cohen, D. H. (2016). Introduction: Virtues and arguments. Topoi, 35(2):339–343.
[2] Aberdein, A. and Dove, I. J., eds (2013). The Argument of Mathematics. Springer, Dordrecht.
[3] Gascón, J. Á. (2016). Virtue and arguers. Topoi, 35(2):441–450.
[4] Su, F. E. (2017). Mathematics for human flourishing. The American Mathematical Monthly, 124(6):483–493.
[5] Tanswell, F. (2016). Proof, Rigour & Informality: A Virtue Account of Mathematical Knowledge. PhD thesis, University of St. Andrews.
It is a widespread view among more-or-less realist philosophers of science that scientific progress consists in approach towards truth or increasing verisimilitude. This position has been elaborated within the fallibilist program of Karl Popper, who emphasized that scientific theories are always conjectural and corrigible, but still later theories may be “closer to the truth” than earlier ones. After the debunking of Popper’s own definition of truthlikeness by David Miller and Pavel Tichý, a number of approaches have been developed in order to solve or to circumvent this problem (an early overview is found in Kuipers 1987). The logical problem of verisimilitude consists in finding an optimal definition of closer to the truth or the distance to the truth. The epistemic problem of verisimilitude consists in evaluating claims of truth approximation in the light of empirical evidence and non-empirical characteristics.
So far, post-Popperian theories of truth approximation have usually assumed, like Popper’s own failing attempt, some kind of deterministic truth to be approached. This target could be descriptive or factual truth about some domain of reality, as expressed by universal laws, or the nomic truth about what is physically or biologically possible. These approaches, including most of the recent ones, agree on the assumption that ‘the truth’ concerns a deterministic truth. However, they deviate from each other in some other essential respects, especially concerning questions of logical reconstruction (qualitative vs. quantitative, syntactic vs. semantic, disjunction- vs. conjunction-based, content- vs. likeness-based) or concerning adequacy conditions for verisimilitude. Some useful overviews of the state of the art have been published (cf. Niiniluoto 1998, Oddie 2014).
In the symposium, adherents of such theories will direct their attention to designing extensions for approaching probabilistic truths. Here the truth concerns a collection of statistical facts, the objective probabilities of some process, or probabilistic laws. Again the task is to find appropriate measures for the distance to such probabilistic truths and to evaluate claims about such distances on the basis of empirical evidence. Moreover, various well-known probabilistic enterprises can be (re-)construed as also dealing with truth approximation, if applied in such probabilistic contexts. For example, Carnapian inductive logic can be seen in this light (Festa 1993), and similarly for straightforward Bayesian approaches, if applied in such contexts. Such reconstructions will also be addressed, including the interesting question of whether they can be seen as concretizations of deterministic truth approximation. In other words, one may ask whether deterministic measures of truthlikeness are special or limiting cases of probabilistic ones.
The main aim of this symposium is to bring together the search for such extensions and reconstructions. The significance is of course that the unified perspective on deterministic and probabilistic truth approximation will be illuminating and will stimulate further separate and comparative research. The probabilistic approaches that will be presented at the symposium are listed below (full abstracts are separately submitted, with the acronym APT).
Chair:
Theo Kuipers (University of Groningen, Netherlands)
In the general problem of verisimilitude, we try to define the distance of a statement from a target, which is an informative truth about some domain of investigation. For example, the target can be a state description, a structure description, or a constituent of a first-order language. In the problem of legisimilitude, the target is a deterministic or universal law, which can be expressed by a nomic constituent involving the operators of physical necessity and possibility. The special case of legisimilitude, where the target is a probabilistic law, has been discussed by Roger Rosenkrantz (Synthese, 1980) and Ilkka Niiniluoto (Truthlikeness, 1987, Ch. 11.5). The basic proposal is to measure the distance between two probabilistic laws by the Kullback-Leibler notion of divergence, which is a semimetric on the space of probability measures. This idea can be applied to probabilistic laws of coexistence and laws of succession, and the examples may involve discrete or continuous state spaces. These earlier studies should be elaborated in three directions. First, other measures of divergence could be considered. Secondly, if deterministic laws are limiting cases of probabilistic laws, then the legisimilitude of the latter should be reducible to that of the former. Thirdly, a solution should be sought for the epistemic problem of estimating degrees of probabilistic legisimilitude on the basis of empirical evidence.
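As a rough illustration of the kind of distance measure mentioned above (a sketch under my own assumptions, with invented numbers, not the paper's formal apparatus), the Kullback-Leibler divergence between two discrete probability distributions can be computed as follows:

```python
import math

# Minimal sketch: Kullback-Leibler divergence between two discrete
# probability distributions, read here as two rival probabilistic laws
# over the same finite state space. The numbers are invented for illustration.

def kl_divergence(p, q):
    """D_KL(p || q) = sum_x p(x) * log(p(x) / q(x)); zero exactly when p == q."""
    return sum(px * math.log(px / qx) for px, qx in zip(p, q) if px > 0)

true_law   = [0.7, 0.2, 0.1]   # stand-in for the probabilistic truth
hypothesis = [0.6, 0.3, 0.1]   # a rival probabilistic law

print(kl_divergence(true_law, hypothesis))   # about 0.027: a small divergence
print(kl_divergence(hypothesis, true_law))   # a different value: D_KL is asymmetric
```

The asymmetry visible in the last two lines is one reason the divergence is not a metric on the space of probability measures, and it illustrates why considering other measures of divergence (the first of the three elaborations listed above) is a live option.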
ABSTRACT. Popper introduced the notion of the verisimilitude (or truthlikeness) of a scientific theory or hypothesis in order to make sense of the idea that the goal of inquiry is an increasing approximation to “the whole truth” about the relevant domain. Post-Popperian accounts of verisimilitude rely on quite different intuitions, and disagree with each other on some crucial features of the notion of truthlikeness. However, they share an important assumption, i.e., that both theories and the truth are “deterministic” or “categorical”. In other words, both a theory and the truth are commonly construed as propositions of some language, and truthlikeness is a matter of closeness or similarity between the relevant propositions.
To illustrate, consider the following toy examples. Suppose that the (unknown) truth about tomorrow’s weather in some location is that it will be hot, rainy, and windy. If Adam says that it is hot and rainy, and Eve says that it is hot and dry, it seems clear that Adam’s beliefs will be closer to the truth than Eve’s. All available accounts of truthlikeness can provide this kind of assessments. However, they are not well equipped to deal with cases like the following. Suppose that Adam thinks that the probability of rain tomorrow is 80%, while Eve assesses such probability at 30%; again, it seems that Adam’s estimate will turn out to be more accurate than Eve’s. Or suppose that, the credences of Adam and Eve being the same as above, the actual frequency of rainy days in the relevant location is 90%; again, one would say that Adam’s beliefs are closer to the (objective, probabilistic) truth than Eve’s. In order to make sense of these intuitive judgments, one needs to extend the notion of truthlikeness to probabilistic contexts.
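A hedged toy calculation (my own illustration of the intuition, using the figures from the example above, not the measure proposed in the paper) makes the comparison explicit:

```python
# Comparing Adam's and Eve's credences with the probabilistic truth by two
# simple distances between probability values (illustration only).

truth = 0.9           # actual frequency of rainy days
adam, eve = 0.8, 0.3  # their respective credences in rain

def abs_distance(p, q):
    return abs(p - q)

def squared_distance(p, q):
    return (p - q) ** 2

print(round(abs_distance(adam, truth), 2), round(abs_distance(eve, truth), 2))          # 0.1 0.6
print(round(squared_distance(adam, truth), 2), round(squared_distance(eve, truth), 2))  # 0.01 0.36
# On either measure Adam's credence is closer to the probabilistic truth,
# matching the intuitive judgment described above.
```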
In this paper, we address this issue and provide a unified account of both deterministic and probabilistic truth approximation. More specifically, we proceed as follows. First, we show how to measure the truthlikeness of both deterministic and probabilistic theories in terms of the probability they assign to the “basic features” of the relevant domain. Second, we apply such a measure to four different possible cases, investigating truth approximation construed as increasing closeness: 1) of a deterministic theory to the deterministic truth; 2) of a probabilistic theory (a so-called epistemic state) to the deterministic truth; 3) of a deterministic theory to the probabilistic truth; and 4) of a probabilistic theory to the probabilistic truth. Interestingly, in our account deterministic truth approximation turns out to be a special case of probabilistic truth approximation. Finally, we discuss the implications of our approach for a number of issues usually discussed across different pieces of literature, including so-called epistemic utility theories, scoring rules for measuring the distance between probability distributions, and measures of the truthlikeness of “tendency” hypotheses (especially relevant in the social sciences).
10:00
Gerhard Schurz (Department of Philosophy, Heinrich Heine University Duesseldorf, Germany)
Approaching objective probabilities by meta-inductive probability aggregation
ABSTRACT. Section 1: Predictive success of probabilistic hypotheses and methods.
Objective statistical laws over infinite domains inform us about frequency limits in random sequences, but they don't deductively entail any observation statements. The only way to obtain from them predictions about observable events is by way of subjective probability assertions, derived by means of a 'statistical principal principle' which connects statistical with subjective probabilities.
The probabilistic single-case predictions obtained from objective probability hypotheses have the form "the probability of an unobserved event is such and such", formally "P(X=1) = r", where X is an (assumedly) binary event variable (X in {0,1}). But how should the truthlikeness of such a probabilistic prediction in relation to the actual event e be scored? It is a well-known fact that according to the scoring function based on the absolute distance |r-e| one should not predict one's probabilities but rather predict according to the so-called maximum rule: "predict 1 if r >= 0.5 and otherwise predict 0". A solution to this problem, going back to Brier, is the use of so-called proper scoring functions (based, e.g., on quadratic loss functions): with their help one maximizes one's expected score exactly if one predicts one's probabilities.
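The contrast between the absolute-distance score and a proper (Brier-type) score can be made concrete with a small computation (a minimal sketch under my own assumptions; the value r = 0.8 is invented for illustration):

```python
# Expected loss of announcing probability a for a binary event X with true
# chance P(X=1) = r. Absolute loss rewards the "maximum rule" (announce 0 or 1);
# the quadratic (Brier) loss is proper: it is minimized by announcing a = r.

def expected_absolute_loss(a, r):
    # E[|a - X|] = r * (1 - a) + (1 - r) * a  for X in {0, 1} and 0 <= a <= 1
    return r * (1 - a) + (1 - r) * a

def expected_brier_loss(a, r):
    # E[(a - X)^2] = r * (a - 1)**2 + (1 - r) * a**2
    return r * (a - 1) ** 2 + (1 - r) * a ** 2

r = 0.8                                     # true single-case probability
candidates = [i / 100 for i in range(101)]  # announcements 0.00, 0.01, ..., 1.00

best_abs = min(candidates, key=lambda a: expected_absolute_loss(a, r))
best_brier = min(candidates, key=lambda a: expected_brier_loss(a, r))

print(best_abs)    # 1.0 -> absolute distance pushes predictions to the extremes
print(best_brier)  # 0.8 -> the proper score is minimized by announcing one's probability
```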
Section 2: Bayesian prediction games.
Thus I suggest estimating the truthlikeness of a probabilistic hypothesis or, more generally, of a probabilistic prediction method, by its predictive success as measured by a proper scoring function. Technically I implement this in the form of a probabilistic prediction game ((e),PR), which consists of a sequence (e) of events e_1,e_2,... and a finite set PR = {P_1,...,P_m} of probabilistic predictors (hypotheses or methods), which are identified with their probability functions. At each time point n the task of every predictor P_i consists in delivering a probabilistic prediction of the next event: "P_i(X_{n+1}=1) = r". Probabilistic prediction methods are assumed to be able to learn from the events observed so far; apart from that they may be of any sort.
Section 3: Meta-inductive probability aggregation.
In the final part of the paper I turn to the problem of probability aggregation. Here the problem is to find a reasonable collective probability function based on a given set of individual probability functions P_i (e.g., of experts or peers). A well-known method is arithmetic probability aggregation; the open question in this field is how to determine the optimal weights. If we assume that probability aggregation takes place within a prediction game as described above, then deep results about meta-induction (author 2008, 2019) can be utilized to define an aggregation method P_MI that is guaranteed to be optimal in the following sense: for every possible event-sequence (e) and set of individual prediction methods PR = {P_1,...,P_m} there exists a success-dependent weight assignment w_n(i) such that the long-run success of P_{MI,n} =def Sum_{1<=i<=m} w_n(i)·P_i is at least as good as, or better than, the maximal success among the individual predictors P_i, even if the best predictor is permanently changing.
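For illustration only, the following sketch shows one simple way of turning past predictive success into aggregation weights; it is my own simplification with invented numbers, not the success-dependent weight assignment w_n(i) established in the meta-induction results cited above.

```python
# A simplified success-weighted aggregation of probabilistic predictors
# (illustration only): weights track each predictor's past Brier-score
# performance, and the collective forecast is the weighted arithmetic mean,
# in the spirit of P_MI = Sum_i w_n(i) * P_i.

def brier(prob, event):
    # quadratic loss of announcing `prob` for a binary event in {0, 1}
    return (prob - event) ** 2

def success_weights(histories, events):
    # one non-negative score per predictor: 1 minus its mean Brier loss
    scores = []
    for preds in histories:
        losses = [brier(p, e) for p, e in zip(preds, events)]
        scores.append(max(0.0, 1.0 - sum(losses) / len(losses)))
    total = sum(scores) or 1.0           # avoid division by zero
    return [s / total for s in scores]   # normalized weights w_n(i)

def aggregate(next_probs, weights):
    # collective prediction for the next event
    return sum(w * p for w, p in zip(weights, next_probs))

# two predictors, three observed binary events so far (1 = event occurred)
histories = [[0.9, 0.8, 0.7],    # predictor 1
             [0.2, 0.4, 0.3]]    # predictor 2
events = [1, 1, 0]

w = success_weights(histories, events)
print(w)                             # predictor 1 receives the larger weight
print(aggregate([0.85, 0.35], w))    # weighted collective forecast
```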
Sabine Thuermel (Munich Center of Technology in Society, TU Muenchen, Germany)
Smart Systems: The Power of Technology
ABSTRACT. In 2008 the vision of a Smart City was described as follows: “the goal of such a city is to optimally regulate and control resources by means of autonomous IT systems” (Siemens 2008, p.35). Ten years later, concrete implementations of subsystems of such a city already exist, such as Smart Mobility for the optimal regulation of traffic, Smart Energy for efficient energy management, or Smart Health for ambient assisted living. The prerequisite of any smart system is a sensor-rich and datafied environment. The data gained are used by predictive algorithms to anticipate future behavioural patterns and optimize resources accordingly. The focus is on providing knowledge under conditions of uncertainty in order to know ahead and act beforehand, trying to streamline processes towards enhanced efficiency. In the next development stage, the transition from prediction to prescription takes place: future behaviour is not only anticipated but formed. Context-specific, adaptive micro-directives (Casey/Niblett 2015) may be incorporated in future intelligent infrastructures to guarantee optimal service from a systems’ perspective and to nudge or even coerce the human participants towards the desired behaviour. Such prescriptive smart systems thus manifest power relations, demonstrating the power of technology in a Foucauldian way: “power is employed and exercised through a netlike organization“ (Foucault 1980). However, in these smart systems humans may not only behave as intended but also act in a subversive way, demonstrating that “individuals are the vehicles of power, not its points of application” (ibidem, p.98). Thus, even if these environments restrict human autonomy, they also open up possibilities for undermining such systems. They are “dispositifs” in the Foucauldian sense, possessing the dual structure of the manifestation of power and the chance of subverting it.
References:
Casey, Anthony J., Niblett, Anthony 2015 The Death of Rules and Standards, University of Chicago Coase-Sandor Institute for Law & Economics Research Paper No. 738.
Foucault, Michel 1980 Power/Knowledge: Selected Interviews and Other Writings 1972–1977, Harvester Press: London.
Siemens 2008 Sustainable Buildings – Networked Technologies: Smart Homes and Cities, Pictures of the Future, Siemens AG: Munich, Autumn 2008.
Challenges of New Technologies: The Case of Digital Vigilantism
ABSTRACT. This paper is part of a bigger project in which different philosophical and sociological approaches are evaluated and merged in order to understand and show how new technologies could change political life. Working within a single discipline is not enough to understand these new phenomena. The main aim of this paper is, by using the case of Digital Vigilantism, to analyse what kind of change these technologies bring about and how the understanding of the self and of one’s own actions changes as more and more new technologies are integrated into people’s everyday life. The swarm is a metaphor that Zygmunt Bauman uses to show how the understanding of communities is changing in liquid modernity. Swarms are based on untied, uncontrolled, short-term relationships between consumers/users formed to achieve some goal (Bauman, 2007). Swarms can be massive in numbers and wield a great deal of power for a very short period. One example is Digital Vigilantism, the punishment of certain citizens (who are believed to deserve punishment) by other Internet users. One type of Digital Vigilantism is putting someone’s personal information on public display so that shaming acts can spread. Sometimes this kind of act gains enough power to change the political agenda. It is important to see, as Daniel Trottier states, that technological tools turn visibility (profiles, social media and so on) into a weapon to control the internet (Trottier, 2017). Moreover, the moral norms on which vigilantes act are very simplified, puritanical, and straitlaced; there is no room for understanding that human acts can be complex and not straightforward. In addition, the case of Digital Vigilantism lets us rethink the problem of responsibility, or, as Dennis Thompson (1980) formulated it, the problem of many hands. New technologies make this problem much more difficult and problematic: in swarms we do not share responsibility, and if something goes wrong, everybody leaves the swarm. The moral issue of responsibility becomes an unnecessary burden to be minimized, which is why morality is replaced by ethics/rules. The third problem is that people are interested in particular actions only for a very short period: the speed is enormous, but real political action and change require active and stable effort. The main question is whether the political act itself will change under the influence of the swarm effect. “Twitter diplomacy” is becoming a new norm, but will it work to sustain a stable life, or is permanent chaos the new foundation for a new political order?
Bauman, Zygmunt. Consuming Life. Cambridge: Polity (2007).
Trottier, Daniel. "Digital vigilantism as weaponisation of visibility." Philosophy & Technology 30.1 (2017): 55-72.
Thompson, Dennis F., 1980, “Moral Responsibility and Public Officials: The Problem of Many Hands”, American Political Science Review, 74(4): 905–916.
Juraj Hvorecky (Institute of Philosophy, Czech Academy of Sciences, Czechia)
Disputing unconscious phenomenality
ABSTRACT. Several influential accounts of unconscious mental states portray them as direct replicas of their conscious counterparts, minus the consciousness. Rosenthal (1999) famously argues that unconscious states possess “the very same” properties as conscious ones. Coleman (2015), in his protopanpsychism, uses a similar argumentative strategy: the qualitative character of a mental state is already fully present at the unconscious level and only differs from the conscious level in its (un)availability to the subject. These claims follow directly from an assumption about consciousness shared by both authors. On their account, consciousness results from a single operation that makes content available to the subject, either by the operation of a higher-order thought or by an ‘awareness’ procedure. These operations serve the sole purpose of making the content subjectively accessible, so the content has to be fully there prior to its uptake into consciousness.
Recently, Marvan and Polák (2017) have come up with a few novel arguments that support what they call a ‘dual’ view, i.e. the claim that the phenomenal character of mental states is present independently of their conscious status.
I argue that arguments for unconscious phenomenality perfectly matching conscious phenomenality are unpersuasive. I start from the observation that most of the examples used are perceptual and tend to come from a single modality. While it might be the case that a perception of a vividly red patch possesses all of its phenomenal qualities prior to its appearance in consciousness, the likelihood decreases with the increasing complexity of phenomenal states. Examples of multimodal and category perception present an especially challenging area for the defenders of the dual view. Well-documented cases of multisensory integration, such as the McGurk effect, the parchment skin illusion or spatial ventriloquism, illustrate my point. In all of them, the resulting subjectively perceived state is not a sum of components from different modalities, but a newly emergent quality that does not correspond to a sum of qualities processed along various single-modality pathways. Similarly, category perception, especially of the kind that crosses various modalities (such as seeing something as edible or hearing something as dangerous), indicates complex processing that is unlikely to appear complete already at the unconscious level.
While the argument is not conclusive, it places the burden of proof on any defender of the ‘dual’ theory to show that instead of assuming a phenomenal identity in both conscious and unconscious cases, one needs to establish it for all the diverse cases. If it turns out that the ‘dual’ view cannot meet this challenge, alternative theories of unconscious states are needed.
S. Coleman (2015) Quotational higher-order thought theory, Phil. Studies 172, 10, 2705-2733.
T. Marvan and M. Polák (2017) Unitary and dual models of phenomenal consciousness, Consciousness and Cognition, 56, 1-12.
D. Rosenthal (1999) Sensory Quality and the Relocation Story, Phil. Topics 26, 1/2, 321-350.
Neuroscience: science without disguise. A critique of Manzotti's and Moderato's dualistic account of neuroscience
ABSTRACT. In their 2014 article, Riccardo Manzotti and Paolo Moderato state that neuroscience is "dualism in disguise". According to the authors, neuroscience uses a dualistic framework implicitly, as classical physicalist accounts are not capable of explaining all mental phenomena, consciousness in particular. In this paper, after briefly presenting the authors' main arguments, I present the classical definitions of basic neuroscience and cognitive neuroscience, and the relevant frameworks of explanation (compared to some of the other classical concepts of explanation). I list several methodological and epistemic problems that neuroscientific accounts of mental phenomena meet, and describe a recent mechanistic/functionalist framework whose application could overcome some of these obstacles. In effect, I aim to show that Manzotti's and Moderato's “accusations” that neuroscience is dualistic are invalid and, from the methodological perspective, most likely rooted in some form of misunderstanding of the concept of scientific explanation.
Manzotti, R., Moderato P., (2014) "Neuroscience: Dualism in Disguise", Contemporary Dualism: A Defense, Andrea Lavazza, Howard Robinson (Eds.), Routledge.
ABSTRACT. The social sciences are often subject to moral biases. This phenomenon manifests itself in at least two ways. First, moral considerations can distort the research itself, by affecting the search for and the assessment of evidence, the acceptance of a thesis, the judgement about what is worth studying, and the focus on a research program. As an example, Horowitz, Haynor and Kickham (2018) explain sociologists’ aversion to cultural explanations by an ideological bias linked to sociology’s supposed social justice mission. Second, moral assessment is also characteristic of the public perception (e.g., by politicians and the media) of the social sciences. For instance, former French Prime Minister Manuel Valls, after the terrorist attack in Paris, used the expression « culture of excuse » in order to denigrate and morally condemn sociologists who try to explain this kind of social phenomenon.
I propose first to distinguish, on the model of the distinction between normative theories in ethics, two kinds of moral biases which affect the social sciences: deontological biases and consequentialist biases. The first consists in assessing a thesis in the light of its conformity with a moral norm. For example, it is often claimed, in the scientific or public debate, that such-and-such a sociological theory « disrespects » social agents by removing all sense of responsibility. In this sense, it would be morally wrong to accept this theory. The second morally assesses scientific theses by the states of affairs they bring about. A sociological theory is blamed, for instance, because it is supposed to stigmatize social agents and reinforce their behavior. In other words, it would be morally dangerous to accept this social theory.
After explaining when these kinds of assessment are genuinely sophistical, I offer two kinds of reasons (internal and external) to explain why deontological and consequentialist biases specifically affect the social sciences. Then, following Goldman’s social epistemology, I argue that such assessments are epistemically deleterious for both science and society. I finally conclude by proposing some ways of neutralizing such moral biases and by defining a new « role responsibility » (Douglas, 2003) for social scientists. This responsibility is neither, strictly speaking, epistemic (as is, for example, the requirement to follow the evidence wherever it leads) nor moral, but « pragmatico-epistemic ». This kind of responsibility, which relates to the epistemic consequences of the social sciences and the requirement of neutralizing the moral way of problematizing social issues, should, among other things, shape the way in which scientific results are presented.
Cofnas, N. (2016), “Science is not always ‘self-correcting’: fact–value conflation and the study of intelligence”, Foundations of Science, 21(3): 477–492.
Douglas, H. (2003), “The Moral Responsibilities of Scientists (Tensions between Autonomy and Responsibility)”, American Philosophical Quarterly, 40(1): 59–68.
Goldman, A. (1999), Knowledge in a Social World, Oxford: Oxford University Press.
Horowitz, M., Haynor, A. and Kickham, K. (2018), “Sociology’s Sacred Victims and the Politics of Knowledge: Moral Foundations Theory and Disciplinary Controversies”, The American Sociologist, 49(4): 459–495.
Laudan, L. (1977), Progress and its Problems: Toward a Theory of Scientific Growth, University of California Press.
Gabor Kutrovatz (Eotvos University of Budapest, Department of Astronomy, Hungary)
What mature Lakatos learnt from young Lakatos
ABSTRACT. Prior to his career in England, Imre Lakatos spent his post-university years in Hungary. Because of his well-known involvement in the political turmoil of the Hungarian post-war era, his dubious political activity has been the focus of several studies and discussions. Much less is said about the philosophical content of his Hungarian writings. Apart from his often-cited commitment to Hegelian roots in his philosophy of mathematics, as well as the usually vague references to the Marxist framework of his intellectual background, only a few details are known of his early adventures in philosophy.
After his graduation from the University of Debrecen (1945), and before his studies in Moscow (1949) and the imprisonment that followed, Lakatos published a relatively large number of papers on science and scientific education (for a partial list and summaries, see Kutrovátz 2008). The main purpose of these works was to develop a Marxist interpretation of scientific progress and to criticize improper (idealist, bourgeois) forms of scientific thought. Perhaps most notable of these are the two papers representing his (lost) doctoral dissertation of 1947 (for an English translation, see Lakatos 2002).
During his final years in Hungary Lakatos turned his interest to mathematics, and this line of research continued through his Cambridge years. He returned to the philosophy of science in general only in the 1960s, developing his methodology of scientific research programmes (MSRP). The fundamental question of this paper is whether there is anything at all that Lakatos in England shares with Lakatos in Hungary a decade earlier, or whether he started everything anew. Based on his writings and on unpublished material from the LSE Lakatos Archives, I propose that his intellectual framework in the post-war period was shaped by the following principles:
1. Science is a never ending dialectical process of conceptualizing nature.
2. Science and society are profoundly interconnected.
3. Science is not an ideology or world view.
4. In science, theory and practice are inseparable.
5. Science must be taken from capitalists and given to the proletariat.
It seems that items 1-3 remain unchanged among the commitments characterizing his post-immigration period. Ad 1, Proofs and Refutations identifies this dialectical process in mathematics, while Popperian philosophy lends a new sense to the never-ending nature of science. Ad 2, the Marxist background is replaced with a sensitivity to the external/internal distinction and its contingency on rational reconstruction. Ad 3, the science/ideology dichotomy remains prevalent, with an additional interest in the demarcation problem. On the other hand, 4) weighs relatively less in his later writings, as he abandons explanations by ’modes of production’, and 5) naturally disappears with his rejection of official communist ideology.
References
Kutrovátz, G. 2008. "Lakatos's philosophical work in Hungary" Studies in East European Thought 60/1-2: 113-133.
Lakatos, I. 2002. Imre Lakatos’ Hungarian Dissertation. A Documentation arranged by Gábor Kutrovátz. In G. Kampis, L. Kvasz & M. Stöltzner (Eds.), Appraising Lakatos. Mathematics, Methodology and the Man (pp. 353-374), Dordrecht-Boston-London: Kluwer Academic Publishers, 2002.
CANCELLED: Don’t be a Demarc-hater: Correcting Popular Misconceptions Surrounding Popper’s Solution to the Demarcation Problem
ABSTRACT. Here are three common philosophic myths:
1. Falsifiability, Karl Popper's demarcation criterion, sets out the boundaries of the natural sciences from non-science or pseudo-science.
2. The criterion explicitly applies solely to singular theories that are universal in scope.
3. It is his sole criterion of demarcation.
These three myths are each expressed in, for instance, Philosophy of Science: The Central Issues: 'According to Popper, a theory is scientific if and only if it is falsifiable' (Curd 2013, 1307). As Curd goes on to claim,
‘Popper's simple idea does not work. ... [F]alsifiability is both too weak and too strong. It is too weak because it would allow as scientific any number of claims that are testable in principle but that are not, by any stretch of the imagination, scientific. It is too strong because it would rule out as unscientific many of the best theories in the history of science.’ (Curd 2013, 68)
A number of objections against (1)--(3) share similar features. In brief, Popper's demarcation criteria are (so his many critics claim) too broad or too narrow in scope, thereby failing to include some (or all) paradigmatic ‘scientific' theories or exclude some (or all) paradigmatic ‘pseudo-scientific' theories.
As many of these philosophers of science see it, these objections led directly to the downfall of the Popperian programme and to later shifts in emphasis within the discipline. However, I argue that philosophers of science and historians of philosophy of science have misrepresented, and continue to misrepresent, both Popper's problem of demarcation and his proposed demarcation criteria.
By examining both the original German version and English translations of Logik der Forschung and the extensive oeuvre of Popper throughout his philosophic career, I show that these objections are spurious. In reality,
1*. Popper's demarcation problem is to determine if there are necessary and sufficient conditions for drawing borders between what is empirical and what is non-empirical (encompassing the domains of much of traditional metaphysics, analytic statements, normative statements, mathematics and logic).
2*. The criterion of falsifiability explicitly only applies to large sets of statements.
3*. Popper set forward a second--almost entirely neglected--criterion of demarcation that classifies individual statements as either empirical or non-empirical.
Consequently, many philosophers of science have dismissed a philosophical programme based on a mischaracterisation; they have been shadowboxing against a philosophical ghost.
To be clear, I do not dispute that many objections are effective against (1)-(3): their effectiveness is evident both in the direction that the history of philosophy of science took away from the Popperian programme and in the apparent deductive validity of many of these arguments. Nevertheless, none of the objections is sound, for they depend on accepting these myths, and all three myths are demonstrably false. This result leads to a reevaluation of much of the Popperian programme, of historical work on early to mid-20th century analytic engagement with the programme, and of the early 20th-century debate on demarcation.
10:00
Sina Badiei (University of Toulouse - Jean Jaurès, France)
Karl Popper’s Three Interpretations of the Epistemological Peculiarities of the Social Sciences
ABSTRACT. In this paper, I will show that Karl Popper’s philosophical oeuvre contains three different interpretations of the epistemological peculiarities of the social sciences, and more specifically of economics. At first, and most notably in his “Poverty of Historicism”, he relied on the epistemological insights he had developed in his work on the epistemology of physics to propose an epistemological account of the social sciences very similar to that of physics. This account, which makes extensive use of the criterion of falsifiability, is the most famous one. It is, for example, the account that considerably influenced Milton Friedman’s highly influential epistemological essay “The Methodology of Positive Economics”, written in 1953. However, it is often overlooked that Popper later changed his position. Already in “The Open Society and its Enemies” we find the general contours of a second interpretation of the epistemology of the social sciences that is very different from the first. This second account develops a distinctly normative epistemology for the social sciences, but its presence in the book is not adequately noted because of the concomitant presence of Popper’s first epistemological position there. It will be shown that Popper maintained these two different positions because he could not, at the time, make up his mind about the exact nature of the relationship between the theoretical and the historical social sciences. It was only in two later texts, “The Logic of the Social Sciences” and “Models, Instruments, and Truth”, that he abandoned the distinction between theoretical and historical social sciences, in order to argue for the fundamental role played by history in all the social sciences. This led him to extend the logic of the situation, the form of logic that he had previously developed for the historical social sciences, to all the social sciences, thereby arriving at his third account of the epistemology of the social sciences. It will be shown, moreover, that by taking this later development into account, we can render more precise the second position that he had developed in “The Open Society and its Enemies”. I will argue that it is this second position, which consists in developing a normative epistemology for the social sciences, that constitutes Popper’s most original contribution to the epistemological debates on the social sciences.
• Boylan, Thomas A., and Paschal F. O’Gorman, eds. 2008. Popper and Economic Methodology: Contemporary Challenges. Abingdon: Routledge.
• Friedman, Milton. 1953. Essays in Positive Economics. Chicago: University of Chicago Press.
• Gorton, William A. 2006. Karl Popper and the Social Sciences. Albany: State University of New York Press.
• Notturno, Mark A. 2014. Hayek and Popper: On Rationality, Economism, and Democracy. Abingdon: Routledge.
• Popper, Karl. 2012. After The Open Society. Abingdon: Routledge Classics.
• Popper, Karl. 2002a. Conjectures and Refutations. Abingdon: Routledge.
• Popper, Karl. 2002b. The Logic of Scientific Discovery. Abingdon: Routledge Classics.
• Popper, Karl. 1976. “The Logic of the Social Sciences.” In The Positivist Dispute in German Sociology, translated by G. Adey and D. Frisby, 87-104. London: Heinemann.
• Popper, Karl. 2006. The Myth of the Framework. London & New York: Routledge. Digital Printing.
• Popper, Karl. 2013. The Open Society and its Enemies. Princeton: Princeton University Press.
• Popper, Karl. 1986. The Poverty of Historicism. London: ARK Edition.
• Popper, Karl. 1989. Quantum Theory and the Schism in Physics: From the Postscript to the Logic of Scientific Discovery. Abingdon: Routledge.
Jitka Paitlová (University of West Bohemia, Department of Philosophy, Czechia) and Petr Jedlička (University of West Bohemia, Department of Philosophy, Czechia)
Objectivity of science from the perspective of x-phi
ABSTRACT. Objectivity as a prime scientific virtue features prominently among topics in the philosophy of science. It has been one of the defining elements of any inquiry into nature since early modern times, and gained even more significance in the later period, when it acquired its current meaning and became part and parcel of the scientific ethos. A number of authors made seminal contributions to the theoretical understanding of objectivity or created their own versions of the concept, often from diverging perspectives. As a result, the term itself diversified into a family of related concepts, and it also became a matter of contention in a number of debates.
We examine the problem of objectivity from yet another perspective. Recently, a new approach has emerged that uses empirical and experimental methods as a way of philosophical inquiry (see Knobe 2004, Sosa 2007, Knobe & Nichols 2008). Experimental philosophy or “x-phi” makes extensive use of methods adopted from the empirical sciences, in contrast to the mostly “speculative” methods of traditional philosophical inquiry. In our research, we focus on the concept of objectivity and its formation and understanding in the natural sciences. For this purpose, we employ tools taken from experimental philosophy, sociology and the cognitive sciences, such as interviews, focus groups, questionnaires and laboratory experiments. Our research is interdisciplinary: we have brought together philosophers and sociologists of science with active scientists. Thus the philosophical insights are supplemented by expertise from diverse scientific backgrounds, and the scientists provide us with access to the scientific community.
The research is being carried out in several phases. We have already completed the qualitative empirical part of the study in which we conducted interviews and focus groups with 40+ scientists from various fields and subfields. This method has already brought valuable insights into how precisely objectivity is categorized and operationalized by the scientists themselves (as intersubjectivity, testability, approximation to the truth, precision, impartiality etc.) and also about the contemporary challenges and threats to objectivity (replicability, big data, new technologies and methods, trend-tracking, time pressures, publication overflow etc.). All this provided a better initial insight into the current scientific practices including various kinds of biases.
The next part (currently under way) consists of a quantitative questionnaire; it will provide a more detailed understanding of these topics and reveal the scope of the related issues. We use the conception of “decision vectors” (see Solomon 2001) as biasing factors (social, motivational, cognitive, ideological etc.) that have a direct impact on scientific objectivity. However, following Solomon, we approach these factors as epistemically neutral, i.e. they influence the outcome (direction) of decisions in science, and this influence may or may not be conducive to scientific success. We also use experiments analogous to moral dilemmas (see Bonnefon, Shariff, and Rahwan 2016) that are included in the questionnaire. At the Congress in Prague we would like to present the most important findings from the qualitative phase of the research, coupled with preliminary results from the quantitative phase.
09:30
Nataliia Reva (Taras Shevchenko National University of Kyiv, Ukraine)
Does Analogical reasoning imply Anthropomorphism?
ABSTRACT. The goal of my talk is to present part of a larger empirical study I am working on, concerning the correlation between logical reasoning and cognitive biases.
For this presentation, I decided to focus only on analogical thinking, to be sure that I will have time to score the results and test my hypothesis properly. I assume that people who are better at analogical reasoning may also be more prone to anthropomorphism. In my view, people who are good at finding logical analogies are more liable to see features of living creatures in inanimate things, because both of these tendencies require some imagination and creative thinking.
To check this theory I am working on a survey that will be launched in February.
To test the level of anthropomorphism I will use three different questionnaires that address different areas of perception. Two of these I created myself; they focus on visual and auditory perception and do not require deep thinking. The third, taken from the study by Waytz, Cacioppo, and Epley (2010), requires some reflection. I have chosen this scale because of its high reliability, confirmed by previous studies (Cronbach’s alpha: 0.86).
To verify the logical abilities of the subjects I will ask them to complete analogical arguments by drawing a conclusion from two premises. In addition, they will also take short verbal and geometrical analogy tests of the form A:B:C:?.
As the platform for my survey I will use LimeSurvey, because it is easy to use and makes it convenient to analyze the data afterwards. Besides, in my opinion, people find it more comfortable to answer questions privately, when nobody bothers them with unwelcome attention. This way, I expect to get more honest answers.
The scoring will be done in SPSS.
The sample will consist of over 100 students, aged 16 to 25, from my alma mater. I will divide them into two groups (50/50): those who have studied logic or critical thinking before and those who have not. I know that this decision narrows down the sample, yet I think it is better to rate different age groups separately. The main reason is that I need to be sure that the subjects have roughly the same level of education and do not have any mental issues that could be caused by age, such as memory loss.
Although the survey will be in Ukrainian, I will translate some examples into English for the presentation.
• Bartha P. By parallel reasoning: The construction and evaluation of analogical arguments (2010) New York: Oxford University Press.
• O’Donoghue D., Keane M. A Creative Analogy Machine: Results and Challenges (2012) International Conference on Computational Creativity. pp.17-24
• Waytz A., Cacioppo J., Epley N., Who Sees Human? The Stability and Importance of Individual Differences in Anthropomorphism (2010) Perspectives on Psychological Science, Vol. 5, No. 3, pp.219-232
ABSTRACT. Paradoxes are the unwanted children of philosophy. Whenever a paradox is discovered, philosophers immediately start working to identify its cause and make it disappear. For the Liar paradox (commonly formulated as: “This sentence is false”), that work started 2500 years ago and continues to this day. In our work we present an empirical investigation of one of the most popular solutions to this paradox.
In the fourteenth century Jean Buridan proposed that every sentence in natural language implicitly asserts its own truth (the virtual entailment principle). Adopting this principle means that the Liar sentence is not paradoxical but false, because its content contradicts what is virtually implied. From this, Buridan concludes that humans should perceive the Liar sentence in the same way as any other false sentence. This solution to the Liar paradox has been criticized for making ad hoc claims about natural language: there is no apparent reason to assume that every sentence asserts its own truth, other than to get rid of the paradox. However, thanks to modern advances in psychophysiology, it has become possible to verify empirically whether the human brain really processes the Liar sentence like a false sentence.
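In modern notation, Buridan's proposal can be sketched roughly as follows (an illustrative rendering on our part, not a formalism taken from Buridan or from the paper). Writing $Tr$ for the truth predicate and $L$ for the Liar sentence, we have
\[ L \leftrightarrow \neg Tr(\ulcorner L \urcorner), \]
while by the virtual entailment principle an assertion of $L$ also implicitly asserts $Tr(\ulcorner L \urcorner)$. The total asserted content, $\neg Tr(\ulcorner L \urcorner) \wedge Tr(\ulcorner L \urcorner)$, is contradictory, so $L$ comes out plainly false rather than paradoxical.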
We designed and conducted an electroencephalographic experiment to examine brain activity during the comprehension of the Liar sentence. We compared it to brain activity during comprehension of true sentences, false sentences and Truthteller sentences (“This sentence is true”). The experiment was conducted according to the modern standards of neuroimaging. In our paper we describe in detail the experimental procedure, the recording of brain activity and the statistical analysis of the results. Our results show that the human brain processes the Liar sentence identically to false sentences, and that the Truthteller sentence is processed identically to true sentences. This provides evidence for Buridan's hypothesis and the virtual entailment principle. These results have several implications for contemporary theories of truth and logic.
We demonstrate that it is possible to investigate the inherent “theory of truth” embedded in the human brain and that a theory of truth can have predictive power with regard to physical phenomena. However, we do not demonstrate that the Liar sentence is false; we merely demonstrate that humans perceive the Liar sentence as if it were false. This places our experiment within the relativistic view of truth, on which adjectives pertaining to truth-value are assessment-sensitive and an agent is required to perceive them in order to assess them. Finally, finding empirical evidence for the virtual entailment principle supports the idea that humans think with the logic of truth (a logic for which truth is a designated value of its adequate semantics). This shows that the conclusions of non-Fregean logics regarding the Liar paradox coincide with human understanding of language.
Truth Lies: Taking Yet Another Look at the Theory-Laden Problem
ABSTRACT. As a point of departure, it would be reasonable to accept that observation is “Theory Laden” and that therefore the idea of objectivity in scientific theory is bogus. The issue of the “Theory-Laden” problem is closely related to almost everything else in the philosophy of science, for it concerns the epistemological understanding of what is scientific knowledge, and it bears upon one of the most central rivalries of the ‘realism-antirealism debate.’
One way to describe this rivalry is by focusing on the concept of truth and its relevance to the understanding of rationality in science. Both the realist protagonist and the antirealist antagonist acknowledge the central place truth has in science and in understanding its rationality. However, many studies in the last few decades have questioned the neutrality of this rivalry towards the cultural, social, moral and political aspects of science. Subsequently, these aspects became increasingly entangled with the philosophy of science. The cultural approach, among others, relativizes science to these aspects while dispensing altogether with truth as a critical concept for understanding science and its rationality.
Science is just one means among many of gaining knowledge and is the product of a specific knowledge culture, the Western one. Although it has been preferred by governments, industry and others, knowledge is not unique to this culture since there are alternative forms of knowledge and other historical knowledge cultures. Accordingly, science is a 'knowledge culture' in the sense that methodology, research methods, and the facts which are obtained by them, are shaped by culture.
There is no doubt that this shift in the philosophy of science brought many valuable insights into the nature of science. However, there is no satisfactory philosophical explanation of the commitment to truth in science. Although studies in the logic of confirmation and in methodology have put an end to the naive notion of truth in science, modern scientific practice still relies on the notions of right and true within the intersubjective domain. The truism that objectivity is unattainable does not change the fact that, in practice, scientists and researchers adhere to these notions.
I suggest a way in which the cultural approach can be maintained without excluding the concept of truth. The idea is inspired by the atypical treatment of this concept by the French psychoanalyst Jacques Lacan, and it uses his ideas in a metaphorical rather than an analogical way.
According to Lacan, there is an intricate play within the psychoanalysis discourse between truth on the one hand and deception and lies on the other. Psychoanalysis aims to reveal truth, which is inscribed in the deception of the analysand’s speech. The analysand may well think that he or she is telling the truth, but in the context of psychoanalysis, deception and lies are inscribed in the text of truth. Moreover, the analyst knows perfectly well that lies can reveal the truth about desire more eloquently than what the analysand will think as honest or true statements. Thus, truth according to Lacan is in language and speaking rather than part of reality itself.
Accordingly, for the philosophy of science, and within the cultural approach, there is no truth in the scientific research. However, from the point of view of the scientist and within the scientific practice and discourse, truth has and should have a dominant regulative role in directing and executing scientific research. Thus, although the cultural approach forecloses the concept of truth as an explanatory concept in science, truth cannot be dismissed altogether as the raison d'être for the scientific practice from the point of view of the scientists, the scientific community and the scientific discourse.
ABSTRACT. Science sometimes proceeds by straightforward empirical tests in which a concrete hypothesis yields a specific quantitative prediction that can be checked against empirical results. This strategy is ill-suited to epistemic contexts in which so little is known about the phenomenon under investigation that specific hypotheses are not forthcoming. Contemporary research in cosmology on the nature of dark energy is an example of just such a context. I will argue that theorizing about dark energy is constrained by putting bounds derived from empirical results on parameterized families of unspecified models of dark energy. This strategy allows researchers to efficiently prune away large swaths of model space, without having to articulate particular proposals regarding the nature of dark energy. Characterizing and appreciating this strategy is valuable because it allows us to understand one of the variety of ways in which scientists make empirical progress in fields where researchers currently know very little about their targets, and thus precisely where there is much new ground to be gained. Moreover, familiarizing ourselves more intimately with this strategy has the potential for significant impact on the accuracy and sophistication of the public’s understanding of how science works.
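To illustrate what such a parameterized family can look like (my example; the abstract does not name a specific parameterization), a widely used choice in the dark energy literature is the two-parameter equation of state
\[ w(a) = w_0 + w_a\,(1 - a), \]
where $a$ is the cosmic scale factor. The cosmological constant corresponds to $(w_0, w_a) = (-1, 0)$, and observational data are used to place bounds on the $(w_0, w_a)$ plane, thereby excluding regions of model space without committing to any particular physical account of dark energy.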
The epistemology and methodology of science to which the science-interested public has access rarely escapes the long shadow of the digestible though implausible popular caricature of Popper’s falsificationist philosophy of science. Those physicists who do charitably reference philosophy of science at all are notorious perpetuators of this mythology about scientific methodology. Even worse, the caricature of Popperian falsifiability is trotted out in public debates as a demarcation criterion precisely when the question of what ought to count as properly scientific methods is a matter of active dispute---that is, precisely when the integrity of science is vulnerable. In cosmology, the question of whether or not “non-empirical confirmation” ought to be recognized as a legitimate aspect of scientific methodology has become a matter of public dispute, or rather a “battle for the heart and soul of physics” according to George Ellis and Joe Silk, who worry that if the battle is lost—if non-empirical confirmation is anointed as scientific—then science will be less well equipped to defend itself from the likes of climate change skeptics. I agree with Ellis and Silk that something very central and important to the project of scientific inquiry would be lost by endorsing non-empirical confirmation. I also want to stress that it is misguided to defend the special epistemic status of science by relying solely on the Popperian caricature. My analysis of contemporary research on dark energy therefore serves two main points, one parochial/methodological and one synoptic/normative. First, cosmologists are learning about the nature of dark energy in a clever way—without hypotheses—and we should update our conception of scientific methods accordingly. Second, we ought to stop defending the integrity of science to the public by appealing to Popper alone, because doing so fails to emphasize what genuinely does differentiate science from other pursuits and incriminates perfectly good cutting-edge empirical research.
Empirical data are often problematic in various ways. This paper provides a discussion of the epistemic issues modelers face when they generate simulated data in an attempt to solve such problems. We argue that, in order to count as epistemically justified and evidentially relevant, a simulation model does not necessarily have to mimic the target of some empirical investigation, and simulated data do not need to be generated by mimicking the data generating processes for empirical data. Simulated data may successfully mimic the target system even if the simulation model does not even aim to mimic it. In such cases simulated data typically improve the representational relation between empirical data and the target.
The fact that a computer simulation can only contain those variables and relationships that the modeler puts into it has been used in arguing for the epistemic superiority of experiments. We show that this very same feature provides some simulation studies with an epistemic edge over experiments.
The origin of empirical data lies in reality such that its production requires causal interaction with measuring or detection devices, or with human sensory capacities, while the origin of simulated data lies in a computer simulation model. A simulation is not in causal interaction with the system that the simulated data is taken to represent, nor with the empirical data if that is what the simulation aims to represent.
When a simulation model aims to represent a real target system, it may do this by representing the data generating process (DGP) responsible for the empirical data concerning that target. Here the aim is to establish a similarity between the simulation model and the target. Alternatively, modelers may forgo such an attempt and instead aim to produce a dataset that describes the target better than the empirical data do. In some such cases the simulated data mimic the target even though the model from which they derive does not.
Simulated data may also aim to mimic empirical data that the target generates, without aiming to mimic the target or its data generating process. Furthermore, a simulation may aim to mimic several possible DGPs, and the plural is important here. We believe that such heterogeneity is a good enough reason to elevate the study of simulated data into an independent research topic, one that can be discussed without concern for the comparison to experiments.
The aim of this paper is to provide a framework for understanding the use of computer simulation in dealing with problems related to empirical data. We discuss cases from several different disciplines, focusing on the reasons for using simulated data and the relevant mimicking relations. The aim is to provide a general framework that might allow us to understand how to enhance the evidential and epistemic relevance of different types and uses of simulated data.
We will also consider cases of hybrid data, which derive partly from simulation and partly from some other source. Simulated data are then combined with empirical data, the aim being to obtain a corrected, enhanced, or unbiased dataset as a whole. Such hybrid data are typically used for various evidential and epistemic purposes in place of empirical data.
Dan Gabriel Simbotin ("Gheorghe Zane" Institute of Economic and Social Research, Romanian Academy, Iasi Branch, Romania)
The Unity of Science: From Epistemic Inertia to Internal Need
ABSTRACT. Science sprang from the spirit of unity, starting from the general idea of knowledge. Although it developed considerably up to the beginning of modernity, it preserved the idea of unity. With modernity, the first broad classification achieved by Francis Bacon (1605), and the ensuing ontological clashes within knowledge, the idea of unity became an alternative, while the development of each science within its own matrix came to be seen as the priority. Today, through complex processes of division, hybridization and concatenation, the number of sciences, fields of study and subjects increases exponentially, while the problem of unity is back under discussion.
In this paper we want to analyze whether the question of the unity of science, as it is theorized today, is a real problem or just epistemic inertia. In doing this, we will first describe the process of transition from unity to diversity, from antiquity through the Middle Ages up to the beginning of modernity. We will also focus on the turning point of "The Advancement of Learning", arguing why the Baconian perspective was a unitary one, together with the dichotomous development of science in modernity. We will then analyze the internal needs of interdisciplinary dialogue and how these can be met by the main current theoretical directions of the unity of science: encyclopedias, reductionism and transdisciplinarity. Finally, we will synthesize a methodological vision of natural interdisciplinary communication that can meet the internal need of the sciences to develop into a coherent integrated system, and we will argue for its imperative character, even if, from a pragmatic point of view, it does not appear to be a priority.
Selective Bibliography:
Carnap, Rudolf (1938) “Logical Foundations of the Unity of Science”, in O. Neurath, R. Carnap, and C. Morris (eds.), International Encyclopedia of Unified Science: Volume I, Chicago: University of Chicago Press, pp. 42-62.
Henriques, Gregg R. (2005) “A New Vision for the Field: Introduction to the Second Special Issue on the Unified Theory”, Journal of Clinical Psychology, Vol. 61(1), 3–6.
Mittelstrass, Jürgen (2011) “On Transdisciplinarity”, Trames. A Journal of the Humanities and Social Sciences, 15(65/60), 4, pp. 329–338.
Nicolescu, Basarab (2002) Manifesto of Transdisciplinarity, New York: State University of New York.
Oppenheim, Paul and Putnam, Hilary (1958) “The unity of science as a working hypothesis”, in H. Feigl et al. (eds.), Minnesota Studies in the Philosophy of Science, vol. 2, Minneapolis: Minnesota University Press, pp. 3-36.
Medieval Debates over the Infinite as Motivation for Pluralism
ABSTRACT. A flourishing area of debate between medieval philosophers of the thirteenth and fourteenth centuries focused on paradoxes of the infinite and whether or not any notion of the actual infinite was permissible within natural science and mathematics. The fourteenth century in particular brought about a variety of views that attempted to provide both space for some form of the actual infinite and for rules that govern comparisons of size between infinities, such as the views of Henry of Harclay and Gregory of Rimini (Murdoch 1982). It has been suggested recently that these views provide historical evidence that there are natural intuitions behind comparing the size of infinite sets of natural numbers and against the inevitability of the Cantorian notion of cardinality (Mancosu 2009). In this paper I take this claim a step further and suggest that, upon charitable analysis of the medieval positions, one should not only accept that Cantorian notions were not historically inevitable, but one should take a pluralist attitude towards views of the infinite, with respect both to rules that govern comparisons of size for actual infinities and views that accept more restricted notions of the potential infinite.
To defend this view, I first set the medieval context by defending an interpretation of the relation between the Aristotelian view of the potential infinite and the medieval logical distinction between syncategorematic and categorematic uses of ‘infinite.’ I then look at a sample of views within two debates where the infinite plays a key role: (i) debates over the eternity of the world (and the type of infinity associated with an infinite past), and (ii) debates over the composition of continua (and the type of infinity associated with the parts of continua). In each debate, disagreements focus on which, if any, actual infinities are permissible in natural science, and how to avoid the corresponding paradoxes of the infinite. Background assumptions from theology, mathematics, metaphysics, and physics all play a role in the range of medieval arguments, and I extract from these a set of historically representative natural intuitions about the infinite. While these background assumptions might be incompatible, I argue that one should not rule them out based on their success or failure in the development of later mathematical views of the infinite. Rather, one can find in them reasonable epistemological and metaphysical criteria that can motivate multiple different philosophical views of the infinite, which naturally leads to a pluralist view of the infinite.
REFERENCES
Mancosu, Paolo (2009). “Measuring the Size of Infinite Collections of Natural Numbers: Was Cantor’s Theory of Infinite Number Inevitable?”. In: Review of Symbolic Logic 2.4, pp. 612–646.
Murdoch, John E. (1982). “Infinity and Continuity”. In: Cambridge History of Later Medieval Philosophy. Ed. by Norman Kretzmann, Anthony Kenny, and Jan Pinborg. Cambridge, pp. 564–591.
10:00
K. Brad Wray (Centre for Science Studies, Denmark)
Setting Limits to Chang's Pluralism
ABSTRACT. Hasok Chang has raised some serious questions about the extent to which there needs to be consensus in science. At its core, Chang’s project is normative. He argues that there are many benefits to pluralism. I argue that Chang’s pluralism overlooks the importance of consensus in science. I advance an account of consensus in science that will aid us in better understanding the constructive roles and limitations of pluralism in science.
Chang argues that progress in science has been undermined and developments have been delayed because scientists have frequently been too quick to abandon a theory for a new competitor theory (see Chang 2012). Chang illustrates this with a number of case studies, including what he regards as the too-hasty abandonment of the phlogiston theory in the late 18th Century. He maintains that some insights contained in the phlogiston theory needed to be rediscovered years later.
Chang argues that we need to foster pluralism in science. The type of pluralism that Chang defends is broader than theoretical pluralism. Chang is concerned with what he calls “systems of practice” (Chang 2012, 253). He presents two families of arguments in defence of his pluralism: arguments that appeal to the benefits of tolerance, and arguments that appeal to the benefits of interaction between competing theories or scientific practices. Chang explicitly relates his defense of pluralism to Thomas Kuhn’s endorsement of theoretical monism (see Chang 2012, 258).
I argue that Kuhn’s account of science allows for a certain degree of pluralism. Kuhn came to argue that the locus of consensus in a specialty community is the scientific lexicon that scientists employ in their research. A scientific speciality community is characterized by its use of a scientific vocabulary that those working in the specialty employ in their research. Kuhn was quite insistent that the scientists in a specialty must share a lexicon. The lexicon is what makes effective communication possible.
Such a consensus, though, is compatible with significant differences between the scientists working in a specialty. In fact, such differences play an integral part in Kuhn’s account of scientific change. First, Kuhn was insistent that exemplars are open to multiple interpretations. It is their flexibility, their ability to be applied in a variety of different ways to different problems, that makes them so useful to scientists in their normal research activities. Second, Kuhn argued that the values that scientists appeal to when they are choosing between competing theories are also open to multiple interpretations (Kuhn 1977). Again, this flexibility was crucial, according to Kuhn. It was the key to understanding the success of science.
I argue that lexical monism is compatible with Chang’s pluralism of systems of practice.
References
Chang, H. 2012. Is Water H2O? Dordrecht: Springer.
Kuhn, T. S. 1962/2012. Structure of Scientific Revolutions, 4th Edition. Chicago: University of Chicago Press.
Kuhn, T. S. 1977. The Essential Tension. Chicago: University of Chicago Press.
Kuhn, T. S. 2000. The Road since Structure, Conant and Haugeland, (eds.). Chicago: University of Chicago Press.
ABSTRACT. In his famous essay "Logic as a Calculus and Logic as a Language", van Heijenoort (1967) describes a widespread picture of logic called the "Universalistic Conception of Logic". According to him, this picture emerges from Frege’s and Russell’s writings as opposed to the Algebra of Logic, represented by Boole, Löwenheim, etc. A formal system is "universal" iff (1) quantifiers range over unrestricted variables; (2) the universe consists of all that exists and is fixed; and (3) there is no external standpoint, so nothing can be said outside the system. From these premises, van Heijenoort concludes that no metatheoretical question could be meaningfully posed by Frege and Russell, on account of their conception of logic.
This interpretation of logicism was developed by young scholars who attended Dreben’s lectures and seminars at Harvard during the 1960s, most notably Goldfarb (1979, 2001, 2005) and Ricketts (1985, 1986). In their opinion, logicism and metatheory were mutually exclusive. However, several commentators have recently pointed out the need to revisit Frege’s and Russell’s contributions in order to provide a more balanced interpretation. Tappenden (1997), Proops (2007) and Heck (2010) rejected this account of logicism by adducing new textual evidence against the Universalistic Conception of Logic.
In this talk, I will critically examine claims (1)-(3) in the light of Russell’s conception of logic. First, I will discuss Russell’s ontology to assess the plausibility of (1). Second, I will argue that (2) is not a sufficient condition for the impossibility of metatheory, since some logicians have posed metatheoretical questions within a fixed domain (for example, Carnap and Tarski). Finally, I will show that (3) is false, because Russell seems to have been concerned with certain metatheoretical issues in his Principia Mathematica. Furthermore, he was fully aware of Bernays’s investigations of independence, so there was no intrinsic incompatibility between these kinds of results and his conception of logic.
References:
Goldfarb, W. 1979. "Logic in the twenties: The nature of the quantifier". Journal of Symbolic Logic, 44: 351–368.
Goldfarb, W. 2001. "Frege’s conception of logic". Reprinted in Potter, M. and Ricketts, T. (Eds.) 2010, pp. 63–85.
Goldfarb, W. 2005. "On Gödel’s way: the influence of Rudolf Carnap". Bulletin of Symbolic Logic, 11(2): 185–193.
Heck, R. 2010. "Frege and semantics". In Potter, M. and Ricketts, T. (Eds.) 2010, pp. 342–378.
Potter, M. and Ricketts, T. (Eds.). 2010. The Cambridge Companion to Frege. Cambridge: Cambridge University Press.
Proops, I. 2007. "Russell and the Universalist Conception of Logic". Noûs, 41(1): 1–32.
Ricketts, T. 1985. "Frege, the Tractatus, and the Logocentric Predicament". Noûs, 19(1): 3–15.
Ricketts, T. 1986. "Objectivity and Objecthood: Frege’s Metaphysics of Judgment". In Haaparanta, L. and Hintikka, J. (Eds.) 1986, pp. 65–95.
Tappenden, J. 1997. "Metatheory and mathematical practice in Frege". Philosophical Topics, 25: 213–264.
van Heijenoort, J. 1967. "Logic as calculus and logic as language". Synthese, 17: 324–330.
Towards a New Philosophical Perspective on Hermann Weyl’s Turn to Intuitionism
ABSTRACT. Hermann Weyl’s engagement with intuitionistic ideas can be traced back to as early as 1910, when he started to consider constructive methods as a viable rival to classical mathematics (Feferman 1998, 2000). The publication of “Das Kontinuum” (Weyl 1918) marked the high point of his intensive commitment to constructivism, which then matured into an intuitionistic approach in his renowned paper “On the New Foundational Crisis in Mathematics” (Weyl 1921), up until the mid-1920s, when Weyl retreated from intuitionism and moved back towards Hilbert’s axiomatic program (Beisswanger 1965).
From a historical perspective, Weyl is habitually described as a “wanderer” (Scholz 2004), traveling among mathematical approaches as well as through philosophical fields, shifting along with changes in his scientific views. Most of the literature on Weyl’s ‘conversion’ to intuitionism and back has offered causal explanations for his change of heart, such as philosophical stances adopted by Weyl from which he drew motivation to pose new scientific questions or to redirect his research orientation (van Dalen 1995). Such historical accounts, more often than not, attribute Weyl’s frequent changes of mind to a kind of confusion deriving from his indecisiveness (Scholz 2000, Feferman 1988). However, these explanations have not yet been able to account for the reasons behind Weyl’s indecision, nor for his inability to convince other members of the scientific community beyond doubt. In order to comprehend the whole picture, we need to consider the way scientific transitions occur in general and, more specifically, how practitioners exchange ideas within the scientific trading zone (Galison 1997; Collins, Evans and Gorman 2007).
In the following paper I aim to introduce a different perspective, suggesting that Weyl’s feeling of ambivalence was the product of a rational process of self-criticism. Building on, and departing from, Menachem Fisch’s model of the intrapersonal process of becoming ambivalent towards a theory or a concept (Fisch 2010, 2011, 2017), I intend to bring out the incentive Weyl had for considering new transformative ideas (both philosophical and mathematical) in the first place, arguing that he had not “fallen victim to a ‘blindfolding’ feeling of evidence during his Fichtean and Brouwerian phase” (Scholz 2000, p. 16), but was a reasonable and active participant in an intrasubjective process of self-critical deliberation.
REFERENCES
Beisswanger, Peter. 1965. "The Phases in Hermann Weyl's Assessment of Mathematics," Mathematic-Physical Semester Reports vol. 12 pp. 132-156.
Benbaji Yitzhak, and Fisch Menachem. 2011. The View from Within: Normativity and the Limits of Self-Criticism. Notre Dame, IN: University of Notre Dame Press.
Collins, Harry M., Robert Evans, and Michael Gorman 2007. "Trading Zones and Interactional Expertise," Studies in History and Philosophy of Science, 38: 4, 657
Feferman, Solomon. 1988. “Weyl vindicated: Das Kontinuum 70 years later”, In Temi eprospettive della logica e della scienza contemporanee, vol. I, CLUEB, Bologna, 59-93; reprinted as Chapter 13 in Feferman 1998.
–––. 1998. In the Light of Logic. New York: Oxford University Press.
–––. 2000. “The Significance of Weyl’s Das Kontinuum” In: Hendricks V.F., Pedersen S.A., Jørgensen K.F. (eds) Proof Theory. Synthese Library (Studies in Epistemology, Logic, Methodology, and Philosophy of Science), vol 292. Springer, Dordrecht pp. 179-194.
Fisch, Menachem. 2010. “Towards a History and Philosophy of Scientific Agency” in: Monist 93 (4) pp. 518-544.
–––. 2017. Creatively Undecided: Toward a History and Philosophy of Scientific Agency. The University of Chicago Press.
Galison, Peter. 1997. Image and Logic: A Material Culture of Microphysics, Chicago and London: The University of Chicago Press.
Mancosu, Paolo. 1998. From Brouwer to Hilbert: The Debate on the Foundations of Mathematics in the 1920’s. Oxford University Press.
Scholz, Erhard. 2000. “Herman Weyl on the Concept of Continuum”, In: Hendricks V.F., Pedersen S.A., Jørgensen K.F. (eds) Proof Theory. Synthese Library (Studies in Epistemology, Logic, Methodology, and Philosophy of Science), vol 292. Springer, Dordrecht pp. 195-220.
–––. 2004. “Philosophy as a cultural resource and medium of reflection for Hermann Weyl” published in eprint, arXiv:math/0409596.
van Dalen, Dirk. 1995. “Hermann Weyl's Intuitionistic Mathematics”, The Bulletin of Symbolic Logic, vol. 1, no. 2, pp. 145–169.
Weyl, Hermann. 1918. The Continuum: A Critical Examination of the Foundation of Analysis, translated by Stephen Pollard and Thomas Bole, Thomas Jefferson University Press: 1987.
–––. 1921. “On the New Foundational Crisis in Mathematics” In Mancosu 1998, pp. 86-118.
–––. 1946. “Mathematics and Logic”. A brief survey serving as a preface to a review of "The Philosophy of Bertrand Russell", American Mathematical Monthly, vol. 53, pp. 2-13.
10:00
Jean-Yves Beziau (Universidade Federal do Rio de Janeiro, Brazil)
Tarski’s two notions of consequence
ABSTRACT. In the 1930s Tarski introduced two notions of consequence. We present and compare them. We also examine their relations with similar notions presented by others.
The first notion is connected with the consequence operator theory and was presented in (Tarski 1928, 1930a,b). The second one is based on the concept of model and was presented in (Tarski 1936a,b,c, 1937).
We argue that it is misleading to understand and qualify the first as a syntactic or proof-theoretical notion. A more appropriate qualification is “abstract consequence”. The word “abstract” was indeed later used by Suszko for his theory of abstract logics, a continuation of the consequence operator theory (see Brown and Suszko 1973).
Regarding the second notion, we point out that besides Bolzano, already noted by Scholz (1937), other authors had similar ideas, in particular Abu'l-Barakāt (see Hodges 2018) and Wittgenstein (1921, 5.11). And we compare this notion, using in particular (Corcoran and Sagüillo 2011), with the one later developed in model theory by Tarski himself (1954-55).
We discuss the relations between the two notions, emphasizing that the model-theoretical one is a particular case of the consequence-operator one, and discussing fundamental features of both that can be used to prove a general completeness theorem, following the line of Gentzen’s work (1932) on Hertz’s Satzsysteme (1929), a framework connected with the consequence operator.
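For orientation, the two notions can be sketched in modern notation (a schematic rendering on our part, not Tarski’s original formulations). The first is characterized by the axioms nowadays standardly associated with a consequence operator Cn on sets of sentences,
\[ X \subseteq Cn(X), \qquad Cn(Cn(X)) = Cn(X), \qquad X \subseteq Y \Rightarrow Cn(X) \subseteq Cn(Y), \]
while the second defines $X \models \varphi$ to hold iff every model of $X$ is a model of $\varphi$; the operator $Cn_{\models}(X) = \{\varphi : X \models \varphi\}$ then satisfies the axioms above, which is one way of seeing the model-theoretic notion as a particular case of the abstract one.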
References
J. Brown and R. Suszko, 1973, “Abstract logics”, Dissertationes Mathematicae, 102, 9-41.
J.Corcoran and J.M. Sagüillo, 2011, “The absence of multiple universes of discourse in the 1936 Tarski consequence-definition paper”, History and Philosophy of Logic, 32, pp.359-374.
G. Gentzen, 1932, “Über die Existenz unabhängiger Axiomensysteme zu unendlichen Satzsystemen”, Mathematische Annalen, 107, 329-350.
W.Hodges, 2018, “Two early Arabic applications of model-theoretic consequence”, Logica Universalis, 12, 37-54.
H.Scholz, 1937, “Die Wissenschaftslehre Bolzanos. Eine Jahrhundert– Betrachtung”, Abhandlungen der Fries'schen Schule, 6, 399-472.
A.Tarski, 1928, “Remarques sur les notions fondamentales de la méthodologie des mathématiques”, Annales de la Société Polonaise de Mathématique, 7, 270-272.
A. Tarski, 1930a, “Über einige fundamentale Begriffe der Metamathematik”, C. R. Soc. Sc. et Lett. de Varsovie XXIII, Classe III, 23, 22-29.
A.Tarski, 1930b, “Fundamentale Begriffe der Methodologie der deduktiven Wissenschaften. I”, Monatshefte für Mathematik und Physik, 37, 361-404.
A.Tarski, 1936a, “Über den Begriff der logischen Folgerung”, Actes du Congrès International de Philosophie Scientifique, vol.7, Hermann, Paris, pp.1–11.
A.Tarski, 1936c, O Logice Matematycznej i Metodzie Dedukcyjnej, Atlas, Lvov-Warsaw. 1936. Eng. ed., Introduction to Logic and to the Methodology of Deductive Sciences, OUP, Oxford, 1941.
A.Tarski, 1937, “Sur la méthode déductive”, Travaux du IXe Congrès International de Philosophie, vol.6, Hermann, Paris, pp.95-103.
A.Tarski, 2003, “On the concept of following logically”, Translated from the Polish and German by M.Stroińska and D.Hitchcock, History and Philosophy of Logic 23, 155-196.
A. Tarski, 1954-55, “Contributions to the theory of models. I, II, III”, Indagationes Mathematicae, 16, 572-588; 17, 56-64.
L.Wittgenstein, 1921, “Logisch-Philosophische Abhandlung”, Annalen der Naturphilosophie, 14, 185-262.
Haocheng Fu (Department of Philosophy, Chinese Culture University, Taiwan)
Iterated belief revision and DP postulates
ABSTRACT. It is well known that the AGM theory covers one-step revision and does not provide a suitable account of iterated belief revision. Darwiche and Pearl proposed postulates (the DP postulates, for short) to deal with sequences of new information rather than a single input. Surprisingly, the DP postulates conflict with the AGM postulates in some respects; for example, Lehmann pointed out that one of them, DP2, conflicts with the AGM postulates. Furthermore, Nayak et al. proved that the problem remains even when DP2 is weakened to the following form: if x is consistent and x is a logical consequence of y, then the result of revising a belief state K by x and then by y is identical with the result of revising K by y alone. In order to block the inconsistency between the DP and AGM postulates, Nayak et al. proposed an additional postulate, the Conjunction Postulate, to dissolve the conflict. Unfortunately, though Nayak et al. are correct that the DP postulates are too permissive, the Conjunction Postulate they propose is, on the other hand, too radical. In this paper, I will examine the strategy proposed by Jin and Thielscher and provide a different way to reconcile the DP postulates with the AGM postulates.
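In standard belief-revision notation, the weakened postulate described above can be rendered as follows (my rendering, given for orientation; it is not a quotation from Nayak et al.):
\[ \text{If } x \nvdash \bot \text{ and } y \vdash x, \text{ then } (K \ast x) \ast y = K \ast y, \]
to be compared with the original DP2, which requires $(K \ast x) \ast y = K \ast y$ whenever $y \vdash \neg x$.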
09:30
Nadiia Kozachenko (Kryvyi Rih State Pedagogical University, Ukraine)
Critical thinking and doxastic commitments
ABSTRACT. Following Krister Segerberg (“Belief revision and doxastic commitment”, Bulletin of the Section of Logic, 27(1–2), 43–45) we consider the doxastic commitments of an agent as a subset of the initial belief set. Segerberg considers a “complex” to be a pair (V, T) where V and T are theories in some given language, with respect to some given logic L, such that V ⊆ T. The only primitive operation on complexes is the operation of revision ∗. Informally, T represents a belief set and V a set of doxastic commitments. In Segerberg’s presentation doxastic commitments are treated as irrevocable under the revision operator. Thus, they remain stable and secure an epistemic fallback — the set to which one can always fall back in order to restore a consistent belief set if needed. By using fallbacks one can define a specific operation of “irrevocable revision” over pairs (V, T). A sentence p added to a belief set by irrevocable revision cannot be removed and acquires the status of knowledge. Since the elements of a fallback have the status of knowledge, they are subject to the “persistence property”: Kp → [∗q]Kp. In this sense the doxastic commitments always remain in the fallback. In particular, the set of logical tautologies and the fundamental axioms for T always belong to V.
Thus, every pair (V, T) is ordered under irrevocability and its components are not equivalent by default. Moreover, changes in the belief set T are governed by the rules of the set V, and thus V can be considered a set of pure epistemic commitments, which hold for a strictly organized theory and an idealized rational agent.
By considering the agent’s capacity for critical thinking, one can relate it to the doxastic commitments the agent has taken on. Taking into account that critical thinking normally (1) relies upon criteria, (2) is self-correcting, and (3) is sensitive to context (Lipman, M. “Critical thinking – What can it be?”. Educational Leadership, 1988, 46(1), 38-43), one can observe that the set V, as a set of pure epistemic commitments, is subject only to (1). To ensure compliance with (2) and (3), one should be able to model the dynamic character of the revision procedures. To this end one should provide a certain feedback between T and V. This can be achieved by explicitly introducing a doxastic component (as context sensitivity) into the commitment set and specifying certain interrelations between this component and the belief set.
In this way one can discriminate between “universal commitments” and “contextual commitments” by extending Segerberg’s construction with an additional set C, considering a complex to be a structure (V, C, T), where C is the set of contextual commitments. The complex can then be ordered by the revision operation. If some belief set C ⊆ T turns out to be successful in the sense that every execution of a revision operation on T preserves the set C, then C can be added to the commitment set V. Thus, as soon as V is subject to irrevocable revision it can be considered a set of doxastic commitments, although it may give rise to several independent fallbacks.
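One way to render the promotion condition just sketched, assuming revision is defined for every input sentence φ (my formalization, offered for illustration rather than as the author’s definition):
\[ \text{If } C \subseteq T \ast \varphi \text{ for every input } \varphi, \text{ then } V' = V \cup C. \]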
ABSTRACT. Very often, our reasonings about the world and about ourselves explicitly involve references to perspectives or points of view. We reason about how some facts become the contents of certain perspectives, about how those contents are related to the perspectives that give access to them, about how those contents are related to other contents, and so on. There are inferential patterns articulating such reasonings. Yet despite their enormous methodological importance, especially, but not only, in the social sciences, these reasonings have barely been the subject of deep philosophical analysis.
One possible strategy for understanding this sort of reasoning is to use the notion of possible worlds. When an epistemic agent does not have complete knowledge about some area, she considers a number of possible worlds as candidates for the way the world actually is. If p holds at all the worlds the agent considers candidates, then she can be said to know p. Reasoning about perspectives would deal with facts that may appear only in some of the worlds the agent considers candidates, with facts that perhaps do not appear in any of those worlds, and with the attitudes agents can have toward all those facts. However, in order to explain why an agent focuses on some of those facts but not others, maintaining the attitudes she maintains, we need something more than the notion of possible worlds. Perspectives have to be taken as indexes of evaluation distinct from possible worlds. Some important logical approaches suggest this, notably those of Antti Hautamäki and Steven Hales. The combination of the usual modal operators with the new operators opens an interesting range of possibilities.
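The possible-worlds clause for knowledge invoked above is standardly written as follows (a textbook rendering, see e.g. Fagin, Halpern, Moses and Vardi 2003, not the authors’ own formalism):
\[ M, w \models K_a\, p \iff M, v \models p \text{ for every world } v \text{ that agent } a \text{ considers a candidate at } w; \]
treating perspectives as additional indexes of evaluation would then amount, roughly, to evaluating formulas at pairs $(w, \pi)$ of a world and a perspective rather than at worlds alone.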
Our approach is compatible with these proposals. But it tries to specify the concrete inferential patterns that articulate our reasoning about perspectives. Moreover, perhaps a "logic" about these sorts of reasonings is not strictly possible. Perhaps only a pragmatic and methodological approach is possible. In any case, our objective is to identify and analyze a relevant set of inferential patterns, assuming the distinction established by Gilbert Harman between reasoning and argument. His aim was to study a sort of reasoning that produces not only changes in belief but also changes in attitudes and plans of action, a sort of reasoning that is at the same time theoretical and practical. In our contribution, we will analyze some very important inferential patterns of our reasoning about perspectives. More concretely, we will identify and analyze in detail a set of ten rules. And we will discuss some relevant connections between our approach and the classic work of Harman.
Fagin, R., J. Halpern, Y. Moses, and M. Vardi (2003) Reasoning about Knowledge, Cambridge, MIT Press.
Hales, S. (1997) “A Consistent Relativism”, Mind, 106, 421, 33-52.
Hautamäki, A. (1986), Points of View and their Logical Analysis, Acta Philosophica Fennica, 41.
Harman, G. (1986) Change in View, Cambridge, MIT Press.
The symposium will consist of two talks that provide introductions to two areas of logic where medieval Arabic-speaking logicians made advances. One of these, presented by Wilfrid Hodges, is the use of diagrams for solving logical problems; it was only recently realised that this was achieved both accurately and insightfully in twelfth-century Baghdad. The other, presented by Saloua Chatti, is hypothetical logic, a distinctive branch of formal logic of great interest to several leading Arabic logicians, with some features of propositional logic and some of temporal logic.
ABSTRACT. In this talk, I will present the hypothetical logics of authors belonging to the Arabic tradition, namely al-Fārābī, Avicenna, Averroes and some later logicians, especially in the Western area of the medieval Arabic world (mainly North Africa). The main questions I raise are the following: what is hypothetical logic about? How is it viewed by the Arabic logicians, whether in early medieval times (8th to 12th centuries) or in later ones (14th to 15th centuries)? What are the main theories and results provided by these authors?
To answer these questions, I will study the early Arabic authors such as al-Fārābī and Avicenna but also some later authors in order to see how this particular logical system has developed in the writings of the various Arabic authors.
I will show that in Averroes’ framework hypothetical logic, which studies propositions containing ‘if...then’ (the conditional) and ‘either...or’ (the disjunction), is seen as quite secondary, since he provides only the most basic hypothetical syllogisms, although he does use it in his proofs by reductio ad absurdum. In al-Fārābī’s framework it is given somewhat more importance and developed in various ways. But it is in Avicenna’s framework that it is studied at the greatest length and given the most importance, since Avicenna introduces quantification into this field and presents several systems of hypothetical logic with distinct features. In later times, hypothetical logic is given much importance, particularly in the Western Arabic areas (North Africa), in the writings of Ibn ‛Arafa and Senoussi, among others, who seem influenced by Avicenna and especially by his follower al-Khūnajī.
I will thus present (briefly) the basic features and rules of these systems and compare them in order to show the main differences in the analysis of these connectives and the evolution of this field in the Arabic tradition. I will also analyse these rules in the light of the modern classical formalisms of propositional and predicate logic.
Abū al-Barakāt and his 12th century logic diagrams
ABSTRACT. Abū al-Barakāt, a Jewish scholar in twelfth century Baghdad, described a radical new way of handling Aristotle’s categorical syllogisms, using pictorial diagrams instead of Aristotle’s proof theory. In fact his diagrams form a decision procedure for syllogisms. In the West essentially the same discovery was made by Gergonne in the first half of the nineteenth century—though Barakāt’s diagrams are also loosely related to the logical diagrams suggested by Leibniz, Euler and Venn. Like Leibniz, though five hundred years earlier than him, Barakāt used horizontal lines instead of circles. But unlike Leibniz, Barakāt’s use of diagrams was model-theoretic rather than proof-theoretic; this confirms Barakāt’s reputation as one of the most original minds of his time. His diagrams were widely misunderstood and misreported, though fortunately at least one accurate manuscript survives.
Wilfrid Hodges, ‘Two early Arabic applications of model-theoretic consequence’, Logica Universalis 12 (1-2) (2018) 37-54.
Some Sixty or More Primordial Matters: Chemical Ontology and the Periodicity of the Chemical Elements
ABSTRACT. Accounts of the periodic system often draw attention to how two of its main discoverers held contrasting views on the nature of chemical elements. Where Julius Lothar Meyer saw it as likely that the elements were comprised of the same primordial matter (Meyer, 1870, 358; 1888, 133), Dmitrii Ivanovich Mendeleev opposed this view. Instead, Mendeleev argued that each element was a distinct, individual, autonomous entity, and he discouraged representations of periodicity that suggested otherwise (1905, 22-24).
Following Andrea Woody’s rich article on the law of periodicity as a theoretical practice (2014), this paper explores how Meyer’s and Mendeleev’s ontological views on primordial matter shaped their ideas on how to represent periodicity. I start by showing that Meyer’s views on the nature of the elements were not an endorsement of the truth of the hypothesis of primordial matter. Rather, for Meyer, taking the view on board was needed for conducting further investigations into the relationship between atomic weight and other properties of the elements. With respect to Mendeleev, I show how his metaphysical views on the nature of elements influenced his evaluation of other investigators’ representations of periodicity. I argue that Mendeleev’s rejection of graphs (Bensaude-Vincent, 2001) and of equations for representing periodicity is in part explained by his views on the nature of the elements. Of the many attempts to render periodicity in more mathematical language, I focus especially on the equations created by the Russian political philosopher and lawyer Boris N. Chicherin, and I show that Mendeleev’s ontological views influenced his rejection of Chicherin’s equations.
The examples of Meyer and Mendeleev show that their ontological commitments directed both their own representations of periodicity and their evaluations of other investigators’ representations. Even though we are warned not to confuse the means of representation with what is being represented (French, 2010), the case of Meyer and Mendeleev suggests that ontological views on the nature of the elements influenced the representation of periodicity.
French, S. (2010). Keeping quiet on the ontology of models. Synthese, 172(2), 231–249. https://doi.org/10.1007/s11229-009-9504-1
Bensaude-Vincent, B. (2001). Graphic Representations of the Periodic System of Chemical Elements. In U. Klein (Ed.), Tools and Modes of Representation in the Laboratory Sciences (pp. 117–132). Dordrecht: Kluwer Academic Publishers.
Mendeléeff, D. I. (1905). Principles of Chemistry. (7th ed.). London: Longmans, Green, And Co.
Meyer, L. (1870). Die Natur der chemischen Elemente als Function ihrer Atomgewichte. In F. Wöhler, J. Liebig, & H. Kopp (Eds.), Annalen der Chemie und Pharmacie, VII. Supplementband. (pp. 354–363). Leipzig und Heidelberg.
Meyer, L. (1888). Modern theories of chemistry. London: Longmans, Green, and Co.
Woody, A. (2014). Chemistry’s Periodic Law: Rethinking Representation and Explanation After the Turn to Practice. In L. Soler, S. Zwart, M. Lynch, & V. Israel-Jost (Eds.), Science After the Practice Turn in the Philosophy, History, and Social Studies of Science (pp. 123–150). New York: Routledge.
The building blocks of matter: The chemical element in 18th and 19th -century views of composition
ABSTRACT. Currently, the IUPAC (1997) holds a double definition of the chemical element. These definitions loosely correspond to Lavoisier’s and Mendeleev’s respective definitions of the element: whereas Lavoisier (1743-1794) defined the element as a simple body, thus provisionally identifying all indecomposable substances as the chemical elements, Mendeleev (1834-1907) distinguished between elements and simple bodies. He reserved the term ‘element’ for the invisible material ingredient of matter, detectable only through its atomic weight, and not isolable in itself.
Today, philosophers of chemistry generally agree that two meanings of the term ‘element’ coexist, and that this leads to confusion. In order to study the nature of the chemical element, philosophers often refer to Lavoisier’s and Mendeleev’s views as illustrations of the two meanings. Thus, their definitions are analysed individually as well as compared to each other, independently of their historical context. This reinforces the idea that Mendeleev’s definition marks a rupture in the history of the chemical element: it is presented as the return to a pre-existing metaphysical view (Scerri 2007, p. 114-116; Ghibaudi et al. 2013, p. 1627) or the establishment of a new concept of element (Bensaude-Vincent 1986, p.12). However, little is known about the evolution of the concept of chemical element during the early 19th century: where did the change in definition between Lavoisier and Mendeleev come from?
The aim of this paper is to historicise the notion of chemical element and study its development in the context of 18th- and 19th-century chemistry. Based on the works of Chang (2011, 2012), Klein (1994, 2001, 2003), and Siegfried (2002), I will argue that the change in definition does not in itself constitute a rupture in the history of the chemical element; rather, it is part of a broader evolution of chemical practice which connects the two definitions through a continuous transfer of ideas.
Indeed, a view very similar to Mendeleev’s was already present in early 19th-century chemical atomism. The ‘theory of chemical portions’, identified by Klein (2001 p. 15-17, 2003 ch. 1), transformed the stoichiometric proportions in which elements combined into an intrinsic quality of the elements: it “identified invisible portions of chemical elements […] as carriers of the theoretical combining weights” (Klein 2001, p. 15). This theory in turn overlaps with Daltonian atomism, which constituted the height of ‘compositionism’ (Chang 2011). Compositionism was based on the assumption that chemical composition consisted of a rearrangement of stable building blocks of matter. This view was dominant in the 18th century and played a crucial role in Lavoisier’s chemical revolution (Chang 2011, 2012 p. 37-41, p.135; Siegfried 2002; Klein 1994).
Thus, through a historical analysis this paper will identify the continuity between the views of Lavoisier and Mendeleev. In doing so, it will provide an example of how historical thinking can shed a new light on chemical ontology. Perhaps, a better understanding of the historical constitution of the chemical element will show the contingency of the current double definition, and thus help resolve the question of the nature of chemical element today.
References
Bensaude-Vincent, Bernadette (1986), "Mendeleev’s Periodic System of Chemical Elements", The British Journal for the History of Science, 19(1): 3-17.
Chang, Hasok (2011), “Compositionism as a Dominant Way of Knowing in Modern Chemistry”, History of Science, 49: 247-268.
Chang, Hasok (2012), Is Water H2O? Evidence, Realism and Pluralism, Dordrecht: Springer.
Ghibaudi, Elena, Regis, Alberto, and Roletto, Ezio (2013), “What Do Chemists Mean When They Talk About Elements?”, Journal of Chemical Education, 90: 1626-1631.
IUPAC (1997). Compendium of Chemical Terminology, 2nd ed. (the Gold Book). Compiled by A. D. McNaught and A. Wilkinson. Oxford: Blackwell Scientific Publications. XML on-line corrected version: http://goldbook.iupac.org (2006-).
Klein, Ursula (1994), “Origin of the Concept of Chemical Compound”, Science in Context, 7(2): 163-204.
Klein, Ursula (2001), “Berzelian Formulas as Paper Tools in Early Nineteenth-Century Chemistry”, Foundations of Chemistry 3: 7–32.
Klein, Ursula (2003). Experiments, models, paper tools: cultures of organic chemistry in the nineteenth century. Stanford, California: Stanford University Press.
Scerri, Eric (2007), The Periodic Table: Its Story and Its Significance, Oxford: Oxford University Press.
Siegfried, Robert (2002) From Elements to Atoms: a History of Chemical Composition, Philadelphia: American Philosophical Society.
ABSTRACT. Recently, the term "fake news" has become ubiquitous in political and public discourse. Despite its omnipresence, however, it is anything but clear what fake news is. An adequate and comprehensive definition of fake news is called for. We provide a systematic account of fake news that makes the phenomenon tangible and rehabilitates the use of the term. Against the background of this account, we set fake news apart from related phenomena, such as journalistic errors, selective and grossly negligent reporting, satire, propaganda, and conspiracy theories.
ABSTRACT. On our view, fake news are news reports that are deficient along two dimensions: they are false or misleading (thus lacking truth), and they are circulated by people with an intention to deceive or with a bullshit attitude (thus lacking truthfulness). Our definition is not only extensionally adequate, but can also contribute to improving the public debate. A special merit is that the definition lays open how fake news cause epistemic problems for societies, since truth and truthfulness are the foundations of knowledge, trust, and deliberation. Finally, we argue that our definition is superior to others that have recently been proposed by Matthew Dentith, Axel Gelfert, Nikil Mukerji, and Regina Rini.
ABSTRACT. It is commonplace in the educational literature on mathematical practice to argue for a general conclusion from isolated quotations from famous mathematicians. We will critique this mode of inference.
The issue can be illustrated by, for example, the way philosophers have written about properties of proofs such as elegance and explanatory power. Much of this literature assumes that there is a consensus among mathematicians about which proofs are elegant or explanatory. Recently, Matthew Inglis and Andrew Aberdein subjected this assumption to an empirical test. They used a proof of the Sylvester-Gallai theorem taken from Aigner & Ziegler (2000) and asked mathematicians to judge the accuracy of twenty adjectives that might describe the proof. The mathematicians in this sample differed significantly among themselves on the qualities of this particular proof, for reasons that are not revealed by the data.
In another study with a similar design (Inglis, Mejia-Ramos, Weber & Alcock 2013), it was found that the mathematicians who responded did not agree when asked whether a given proof was valid, and those who judged it to be invalid gave three different reasons why it was not valid. In the same vein, a study by Weber and Mejia-Ramos (2019) shows that mathematicians disagree about exactly which inferences in a given proof in real analysis are rigorous.
These results from cognitive psychology concern the private reasoning of individual mathematicians in isolation. Mathematical practices are shared, however, and can be viewed as social patterns of behaviour that arise from interaction among mathematicians. That is not to say that interaction always leads to a single shared view of best mathematical practices; it often leads to starkly opposed camps. Therefore, to investigate the differences of opinion among mathematicians on the aspects of mathematical practice that interest philosophers and educators, we will also consider public disagreements on the relevant issues among mathematicians, whether they are expressing personal opinions or those of a defined like-minded group.
We will examine the career of one much cited and frequently anthologised paper, W. P. Thurston’s ‘On Proof and Progress in Mathematics’ (1994), which has been cited hundreds of times in educational and philosophical argument. Examination of this case will illuminate the use to which testimony from mathematicians has been put and the conditions that have been, or should have been, placed on making inferences from it.
The interesting question is not whether mathematicians disagree—they are human, so of course they do. The question is whether there is a researchable object, even as an ideal type, answering to the expression ‘mathematical practice’, to which testimony from mathematicians might give us access. We will argue that debate about the nature and purpose of mathematics (such as the Jaffe-Quinn debate that Thurston contributed to) is itself part of the practice. This conception allows us to retain mathematical testimony as a resource, while indicating how to use it critically. The paper will end with reflections on the usefulness of quotations from research mathematicians for mathematical education.
ABSTRACT. In studying mathematical practice, one area of great importance is identifying the cognitive processes involved in the different mathematical tasks that human agents engage in. A particularly interesting issue in this regard is the complexity of those processes. The prevalent paradigm of modelling the complexity of mathematical tasks relies on computational complexity theory, in which complexity is measured through the resources (time, space) taken by a Turing machine to carry out the task.
This reliance on computational complexity theory, however, can be a problematic fit with mathematical practice. The complexity measures used are asymptotic, meaning that they describe the complexity of functions as the arguments approach infinity (or a particular value). The mathematical tasks that human agents are involved with, however, always concern only finite, mostly relatively small inputs. In this talk, I will argue that, starting from simple mathematical tasks, human performance is not always accurately characterized by the asymptotic complexity measures. Based on empirical data on mental arithmetic, I will show that we need to rethink the complexity measures in terms of the different stages involved in mental arithmetic tasks.
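To make the contrast concrete, here is a minimal sketch (not the speaker's model or data) of how a stage-based measure can separate problems that an asymptotic measure treats as identical: the helper below counts carry operations in schoolbook addition, so two sums with the same number of digits, and hence the same asymptotic cost, can differ in the number of cognitively demanding steps.

def carry_count(a: int, b: int) -> int:
    # Count carry operations performed in column-wise (schoolbook) addition.
    carries, carry = 0, 0
    while a or b or carry:
        column_sum = a % 10 + b % 10 + carry
        carry = 1 if column_sum >= 10 else 0
        carries += carry
        a, b = a // 10, b // 10
    return carries

print(carry_count(1234, 5432))  # 0 carries: four digits, no carrying needed
print(carry_count(6789, 8765))  # 4 carries: also four digits, but every column carries

Both calls process inputs of the same length, yet the second plausibly demands more of a human calculator; counting carries is only one crude stand-in for the stage-based measures the talk argues for.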
In addition, it is problematic that in computational complexity theory, the complexity of mathematical problems is characterized by optimal algorithms for solving them, i.e., Turing machines that take the least amount of time or space to reach the solution. In mathematical practice, there are many aspects which can make human algorithms for solving problems computationally suboptimal, yet still cognitively advantageous. In this talk, I will focus on two such aspects: constructing diagrams and the spatial manipulation of symbols. In terms of computational complexity, drawing a diagram can needlessly add to the complexity of the task. However, it can be an integral heuristic tool for human agents to grasp the solution to a mathematical problem. Similarly, the spatial manipulation of symbols can make an important difference for human agents while making the solution computationally suboptimal.
Why is this the case? Why do we use computationally suboptimal problem-solving strategies? I will argue that this depends on the way we are enculturated in mathematics. From our number systems to visual presentations and physical tools, the way we practice mathematics is determined in a crucial way by the mathematical culture we are located in. This, in turn, determines (at least partly) which problem-solving algorithms are optimal for us. While computational complexity theory can make an important contribution to studying such humanly optimal algorithms, it is important to establish that there are many potential differences between computationally optimal and humanly optimal algorithms.
ABSTRACT. Computers are impacting mathematical research practice in diverse and profound ways. The use of computers has clearly transformed practices connected to typesetting and communication, but computers have also – although less visibly – influenced practices connected to proofs and formulation of theorems; since the 1970s, computers have been important in running extensive searches as parts of proofs of mathematical theorems, and in the past 20 years two new and very different research programs concerning the use of computers in mathematics have emerged, one centered on the use of computers as aids in formal proof processes and another investigating exploratory and experimental uses of computers. Despite these efforts, the construction of mathematical proofs is still mainly a human activity; mathematical creativity has not been mechanized, neither in proof construction nor in hypothesis formulation. This leads to the following question: What parts of human mathematical practice cannot easily be automated, and why? Or on a more positive and pragmatic note: How can human and machine practice be integrated fruitfully in the field of mathematics?
In this paper we will address these two questions by comparing automated and interactive theorem-provers with the practice of human research mathematicians. We will do so by drawing on the results of recent qualitative studies (conducted by ourselves as well as by other researchers) investigating how research mathematicians choose mathematical problems and how they attack and work with the problems they have chosen. In particular, we will look at the roles played by embodied cognition (broadly conceived), representational tools, and mathematicians’ understanding of social norms and values in mathematical practice. Furthermore, we will describe the basic ideas behind current computer tools for formal proof construction (i.e. automated and interactive theorem-provers) and discuss how, and to what extent, the construction of such tools can be informed by insights into the practice of research mathematicians, such that the strengths and weaknesses of the automated system are best matched by the strengths and weaknesses of the human mathematician.
Theo Kuipers (University of Groningen, Netherlands)
CANCELLED: Inductively approaching a probabilistic truth and a deterministic truth, the latter in comparison with approaching it in a qualitative sense.
ABSTRACT. Theories of truth approximation almost always deal with approaching deterministic truths, either actual or nomic, and have a Popperian background. E.g. Nomic truth approximation revisited (Kuipers, 2019), henceforth NTAR, deals primarily with qualitatively approaching deterministic nomic truths, based on the hypothetico-deductive method.
In contrast, approaching probabilistic truths can naturally be based on inductive logic or inductive probability theory (Kuipers, 1978). Assume e.g. random sampling with replacement in a universe conceptualized in a finite partition. The primary problem of verisimilitude is the logical problem of finding an optimal definition. In the present context this amounts to an optimal definition of the distance between any probability distribution and the true distribution. There are some plausible standard measures. However, the epistemic problem of verisimilitude is at least as interesting: what is a plausible distribution to start with, and how to update it in the light of empirical evidence and non-empirical characteristics such that convergence to the true distribution takes place.
Carnapian systems converge to the corresponding true probabilities, the probabilistic truth, in an inductive probabilistic way, starting from equal probabilities or, in generalized form, starting from some other well-argued prior distribution. Hintikka systems add to this the inductive probabilistic convergence to the true constituent, i.e. the deterministic truth about which conceptual possibilities actually exist (or are nomically possible), starting from some well-argued prior distribution over the constituents. Hence, if applied in the random sampling context, both types of systems can be reconstructed as inductively approaching a probabilistic truth and, in the Hintikka-case, in addition inductively approaching a deterministic truth.
The plausible connecting question is whether Hintikka systems can in addition be conceived as extensions or concretizations of qualitatively approaching the deterministic truth. Let U be the set of conceptual possibilities, T the true subset of instantiated conceptual possibilities, and CT the corresponding true constituent. Let T ⊆ W ⊆ V ⊆ U, with corresponding constituents CT, CW, and CV. Everybody will agree that CW is closer to the truth CT than CV. According to NTAR, the (qualitative) ‘success theorem’ says that CW will be at least as successful as CV if ‘at least as successful’ is defined in a certain qualitative way. By analogy, in the sketched probabilistic context it holds, under some plausible parameter conditions, that CW is ‘probabilistically at least as successful’ as CV in the sense that p(en|CW) ≥ p(en|CV), where en is any possible sequence of ‘outcomes’ of n random drawings, i.e. any en belonging to the Cartesian product T^n. The remaining question is whether ‘probabilistically at least as successful’ can be seen as a concretization of ‘qualitatively at least as successful’.
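For readers who prefer a displayed form, the two notions just described can be written as follows. The Euclidean distance in the first line is only one candidate for the "plausible standard measures" alluded to above, offered here as an illustration rather than as the measure the author settles on; the second line restates the probabilistic success condition.

\[
  d(p, t) \;=\; \Big( \sum_{u \in U} \big( p(u) - t(u) \big)^{2} \Big)^{1/2}
\]
\[
  p(e_n \mid C_W) \;\ge\; p(e_n \mid C_V) \qquad \text{for every } e_n \in T^{n}
\]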
ABSTRACT. There are distinctively social aspects to learning. Not only do we learn from others -- as when we learn from our parents or our teachers -- we also often can acquire new knowledge only if we collaborate with others, for instance, as members of the same laboratory or research group.
Once the communal aspects of knowledge acquisition are recognized, it becomes natural to ask how we might best organize a group of agents bent on a joint research goal, supposing we want to optimize the group's performance. This is a broad question, which could concern topics ranging from after-work socializing to decisions about how to allocate grant money or other resources within a group. The present paper, instead, focuses on some of the most fundamental epistemic mechanisms that groups might decide to put in place, or might be encouraged to adopt, for how their members ought to act and especially to interact. In particular, it focuses on mechanisms for how group members should respond to, or 'update on', new evidence concerning probabilistic hypotheses while also being informed about the changing belief states of others in the group, whom they know to be acquiring evidence pertaining to a common source.
The notion of optimizing group performance will also be understood in a strictly epistemic sense, that is, as relating to the question of how to get at the truth of whatever issue the group is working on. While truth is generally regarded as the overarching goal of science, it is equally acknowledged that science serves many practical purposes, too. For that reason, we are often forced to make a speed-accuracy trade-off: we do want to get at the truth, but we also want to get there fast, which may require that we settle on quickly becoming highly confident in the truth (even if we cannot completely rule out alternative hypotheses) or quickly getting close to the truth.
Thus, the question to be investigated is how members of a research group should update on the receipt of new evidence in a social setting, where they also have access to relevant beliefs of (some or all of) their colleagues, supposing the group wants to strike the best balance between speed (getting at the truth fast) and accuracy (minimizing error rates). The main methodological tool to be used is that of computational agent-based modelling. Specifically, we build on the well-known Hegselmann-Krause (HK) model for studying opinion dynamics in groups of interacting agents focused on a common research question, where in our case this question involves probabilistic truths.
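For readers unfamiliar with the HK framework, the sketch below shows a minimal bounded-confidence update of the kind this model family is built around. The truth-tracking evidence term, the noise level, and all parameter values are illustrative assumptions, not the authors' actual model.

import numpy as np

def hk_step(opinions, truth, eps=0.2, alpha=0.7, noise=0.05, rng=None):
    # One bounded-confidence update: each agent averages the opinions of peers
    # within eps of its own opinion, then mixes that social average with a
    # noisy signal of the truth (the mixing weight alpha is an assumption).
    rng = rng or np.random.default_rng()
    updated = np.empty_like(opinions)
    for i, x in enumerate(opinions):
        peers = opinions[np.abs(opinions - x) <= eps]
        evidence = truth + rng.normal(0.0, noise)
        updated[i] = alpha * peers.mean() + (1 - alpha) * evidence
    return np.clip(updated, 0.0, 1.0)

rng = np.random.default_rng(0)
opinions = rng.uniform(0.0, 1.0, size=20)   # initial credences of 20 agents
truth = 0.8                                  # the (probabilistic) truth being tracked
for _ in range(50):
    opinions = hk_step(opinions, truth, rng=rng)
print(opinions.round(2))                     # opinions cluster near the truth

Varying eps (how far agents listen) and alpha (how much weight the social average gets relative to evidence) is one simple way to explore the speed-accuracy trade-off discussed above.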
Graham Oddie (University of Colorado Boulder, United States)
Credal accuracy in an indeterministic universe
ABSTRACT. The truthlikeness program was originally focused on the explication of the notion of propositional accuracy. Its aim was to characterize an ordering of propositions with respect to closeness to a target proposition: the truth of some matter. The comparison class of propositions is determined by some question under investigation; each proposition in the comparison class is an answer to the question, either partial or complete, and is assumed to be true or false; candidates for the target are the complete answers. An ordering would yield a plausible ranking of the cognitive value of credal states. The simplest kind of credal state, belief in or acceptance of a proposition, can be represented by the object of that belief. The cognitive value of accepting a proposition would be greater the closer that proposition is to the target. The epistemic utility program is after an ordering of probabilistic credal states with respect to a target state. In an earlier article I showed that these two programs, at least as currently formulated, make incompatible demands. A core desideratum in the truthlikeness program is a principle called proximity, while a core desideratum in the epistemic utility program is propriety. Proximity and propriety cannot be combined. For the most part the targets in the epistemic utility program are taken to be opinionated states, corresponding to complete propositional answers. But the truth that is the target of an inquiry may itself be probabilistic; for example, it may consist of objective propensities. Richard Pettigrew, placing constraints on Bregman divergences, has zeroed in on the Euclidean measure of distance between probability distributions as the solution in the general case. That measure yields the standard Brier measure in the deterministic or opinionated case. This measure, along with a host of others, ignores the metric structure of the logical space, and this is what lies at the root of the violation of proximity. In this paper I identify what I argue is the most promising measure of distance in the general probabilistic case, the Wasserstein (or earth-moving) measure, which in turn yields both the correct notion of credal accuracy in the opinionated case and propositional accuracy as special cases.
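A toy computation, not drawn from the paper, illustrates the point about metric structure: on an ordered three-point outcome space, the Euclidean (Brier-style) distance to an opinionated truth is blind to how far the misplaced probability mass sits from the truth, whereas the Wasserstein (earth-moving) distance is not. The outcome space, the point-mass distributions, and the helper functions below are assumptions made purely for illustration.

import numpy as np

def euclidean(p, q):
    # Euclidean distance between two distributions over the same ordered outcomes.
    return float(np.sqrt(np.sum((p - q) ** 2)))

def wasserstein_1d(p, q, values):
    # 1-D earth mover's distance: integrate |CDF_p - CDF_q| along the outcome axis.
    cdf_p, cdf_q = np.cumsum(p), np.cumsum(q)
    gaps = np.diff(values)
    return float(np.sum(np.abs(cdf_p - cdf_q)[:-1] * gaps))

values = np.array([0.0, 1.0, 2.0])    # an ordered outcome space
truth  = np.array([1.0, 0.0, 0.0])    # the true (opinionated) distribution
near   = np.array([0.0, 1.0, 0.0])    # all mass one step away from the truth
far    = np.array([0.0, 0.0, 1.0])    # all mass two steps away from the truth

print(euclidean(near, truth), euclidean(far, truth))                            # 1.41..., 1.41...
print(wasserstein_1d(near, truth, values), wasserstein_1d(far, truth, values))  # 1.0, 2.0

The first line treats the two credal states as equally inaccurate; the second registers that one is closer to the truth, which is the sensitivity to the structure of the logical space at issue in the abstract.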
References:
Kuipers, Theo A. F. [2009]: Empirical progress and truth approximation by the 'hypothetico-probabilistic method'. Erkenntnis 70 (3):313 - 330.
Oddie, Graham, [2017]: “What accuracy could not be”. The British Journal for the Philosophy of Science, axx032, https://doi.org/10.1093/bjps/axx032 (published online, 01 September 2017).
Pettigrew, Richard, [2015]: Accuracy and the Laws of Credence. Oxford: Oxford University Press.
ABSTRACT. A mathematical proof may be complex and hard to follow. Which aspect of the proof causes the hardness is subjective. It may use unfamiliar notions or esoteric logic, or it may just assume too much background knowledge. It may also be quite long, and even if it is parsed into many simple steps, or many simple cases to consider separately, one may get lost.
Proof complexity is a mathematical area which links the intuitive notion of complexity of proofs with mathematical notions from computational complexity theory. A proof is complex if it requires a lot of time to verify using some predefined formal criteria. It is one of the richest areas connecting mathematical logic with computational complexity theory. This connection has many facets and I will try to explain some of them. In particular, I shall consider the fundamental question of whether the complexity of computations and the complexity of proofs are essentially different phenomena or whether one can be in some sense reduced to the other.
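As background for readers outside the area (and not necessarily the facet the speaker will discuss), one standard way of making the link between proof complexity and computational complexity precise is the Cook-Reckhow framework, sketched below.

\[
  f : \{0,1\}^{*} \to \mathrm{TAUT} \quad \text{polynomial-time computable and surjective (a propositional proof system);}
\]
\[
  s_{f}(\tau) \;=\; \min\{\, |w| \;:\; f(w) = \tau \,\} \quad \text{(size of the shortest $f$-proof of the tautology $\tau$);}
\]
\[
  \exists f \;\forall \tau \in \mathrm{TAUT}: \; s_{f}(\tau) \le |\tau|^{O(1)}
  \;\Longleftrightarrow\; \mathrm{NP} = \mathrm{coNP}.
\]

On this framework, asking whether short proofs always exist is literally a question about the relationship between complexity classes, which is one sense in which the two phenomena may or may not reduce to each other.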
Methodological individualism and holism in the social sciences
ABSTRACT. The debate between individualism and holism has been a central issue in the philosophy of social science for decades. The paper focuses on methodological individualism and holism in the social sciences. On the one hand, methodological individualism (hereafter MI), especially the method of game theory, is dominant in social inquiry; on the other hand, many schools such as network theory, structural sociology, sociological realism, and neofunctionalism in sociology insist on methodological holism (hereafter MH).
The main debates between MI and MH include the dispensability debate (whether holist explanations are indispensable in the social sciences) and the microfoundations debate (whether holist explanations may stand on their own or should always be supplemented by underlying individual-level microfoundations).
Borrowing from research in the philosophy of mind, Keith Sawyer proposes nonreductive individualism (NRI), which accepts that only individuals exist but rejects methodological individualism. Using arguments from the philosophy of biology, the author revisits the dispensability and microfoundations arguments by analyzing social causation, the translatability of social concepts, and intertheoretic reduction.
Firstly, social causation supervenes on individual-level causation, as in James S. Coleman’s boat model, because the latter may falsify the former but not vice versa. Secondly, according to Philip Kitcher’s multiple-realizability argument in the philosophy of biology, the many-many relationship between social and individual concepts may support the untranslatability of social concepts. Lastly, since approximations are common in the social sciences as well as in the biological sciences, if there is a non-singular limit, as Robert Batterman describes, some social theories cannot be derived from individual-level theories.
The author suggests ontological individualism but methodological pluralism, which tries to integrate MI and MH. Ontologically, social phenomena express causal power through the individual level, so microfoundations are always helpful for understanding and studying social phenomena. But epistemologically, translation and derivation are often difficult, because of the multiple-realization and approximation problems, so we must accept epistemic nonreductionism about social theories and social causation.
Can we apply the science/technology distinction to the Social Sciences? A brief analysis of the question
ABSTRACT. The interrelationship between society and technology has been a recurring topic of study not only for those committed to the field of science and technology studies, but also for social scientists (from sociology to management studies). From the research in those fields arises an issue that has not yet been properly addressed, at least not in an explicit or in-depth way, either in the disciplines cited above or in the philosophy of science and technology. That issue is the possibility of talking about “social technologies”, in the sense that there are some fields in the realm of the social sciences whose knowledge, although it refers to social phenomena, can be better understood if it is considered, epistemologically, as technical knowledge.
In this talk, I address the problem of applying the (philosophical) distinction between science and technology to the disciplines that study (or deal with) social phenomena. Though this topic has not yet been studied in depth, a few scholars have tried to address it, and I shall discuss their arguments.
First, I will present the science/technology distinction based on Mario Bunge’s approach (a framework that he calls hylorealism), and I shall address the general discussion of this topic in the field of philosophy of science and technology. Second, I will present the arguments of those researchers who consider it possible to talk about technological disciplines in the fields that deal with the social world. I shall then discuss the “sociotechnology” (Mario Bunge) and “social technology” (Olaf Helmer) approaches, besides the works of other scholars who, although they do not address this issue specifically, have considered the possibility of a technological discipline of the social.
References:
Bunge, Mario. 1983. Treatise on Basic Philosophy. Volume 6: Epistemology and Methodology II: Understanding the World. Dordrecht: Reidel.
———. 1985. Treatise on Basic Philosophy. Volume 7: Epistemology and Methodology III: Philosophy of Science and Technology. Part II. Life Science, Social Science and Technology. Dordrecht: Reidel.
———. 1997. Social Science under Debate: A Philosophical Perspective. Toronto: University of Toronto Press.
Helmer, Olaf. 1966. Social Technology. New York: Basic Books Inc.
Helmer, Olaf, and Nicholas Rescher. 1959. “On the Epistemology of the Inexact Sciences.” Management Science 6 (1): 25–52.
Pineda-Henao, Elkin Fabriany, and Carlos Tello-Castrillón. 2018. “¿Ciencia , Técnica y Arte?: Análisis Crítico Sobre Algunas Posturas Del Problema Del Estatus Epistemológico de La Administración.” Revista LOGOS CIENCIA & TECNOLOGÍA 10 (4): 112–30.
Quintanilla, Miguel Ángel. 2005. Tecnología: Un Enfoque Filosófico. México D.F.: Fondo de Cultura Económica.
Seni, Dan Alexander. 1990. “The Sociotechnology of Sociotechnical Systems: Elements of a Theory of Plans.” In Studies on Mario Bunge’s Treatise, 429–52. Amsterdam-Atlanta: Rodopi.
Andrey Orekhov (Peoples' Friendship University of Russia, Russia)
CMW-revolution in Social Sciences as a Type of “Scientific Revolution”
ABSTRACT. T. Kuhn defined “scientific revolution” as the transition from one paradigm to another. I. Lakatos revised this conception and discussed the struggle of several “research programs” within every scientific discipline, instead of the full domination of only one program. But both of these researchers proved their theories mainly on the basis of natural science. How, then, should we analyze the situation in the social sciences with respect to “scientific revolutions”? The Russian scholar V. Stepin distinguished three paradigms in the social sciences: 1) the classical paradigm; 2) the non-classical paradigm; 3) the post-non-classical paradigm. On this view we have two scientific revolutions: a “non-classical” revolution and a “post-non-classical” revolution. We do not agree with this opinion. From our point of view, there have been two paradigms in the social sciences, the “naturalistic” and the “oecumenical”, and they are divided by the CMW-revolution. CMW should be decoded as “Comte-Marx-Weber”. This revolution took place in the second half of the nineteenth century and the beginning of the twentieth. What was the essence of this revolution?
Firstly, Auguste Comte formulated the positivist paradigm; this paradigm became a basis for the development of all the social sciences in the nineteenth and twentieth centuries and the beginning of the twenty-first. The humanitarian sciences (cultural studies, philology, literary criticism), however, rejected this paradigm, and this rejection became the ground for the birth of the humanitarian sciences.
Secondly, with this revolution social knowledge separated from humanitarian knowledge. At the beginning Neo-Kantianism gave a powerful impulse to this process, and Max Weber finally completed it.
Thirdly, and finally, the social sciences acquired an ideological dimension; ideology is interspersed in them as an integral part, a fact ultimately established by Karl Marx.
The “naturalistic” period in the development of the social sciences, before the CMW-revolution, can be characterized by a thoroughgoing imitation of the natural sciences (especially physics) in methodology and theory, the indissolubility of social and humanitarian knowledge, and the absence of an ideological dimension in most of the theoretical problems of social knowledge.
The “oecumenical” period in the development of the social sciences, after the CMW-revolution, lasts up to the present day. This period is characterized by the following features: the separation of social and humanitarian knowledge, the creation of a “Universe” (“Oekumena”) of social knowledge of its own, different from the “Universe” of natural-scientific knowledge, and the “ideologization” of every piece of social knowledge.
Feyerabend and the Reception and Development of Logical Empiricism
ABSTRACT. As is well known, in his early intellectual trajectory Feyerabend mounted a sustained assault against logical empiricism. More precisely, in the second half of the 1950s Feyerabend found fault with Carnap’s criterion of empirical meaningfulness, whereas in the first half of the 1960s he repeatedly targeted Nagel’s account of reduction and Hempel’s account of explanation. The aim of this paper is to examine the far less well-known responses of Hempel and Nagel to Feyerabend’s criticism. This will allow us to get a better view both of Feyerabend’s reception of logical empiricism and of the impact that Feyerabend’s stimuli had on its later development.
In a series of papers published between 1962 and 1966, Feyerabend deployed a two-pronged attack on the ‘orthodox’ accounts of reduction and explanation, focusing on the descriptive adequacy and on the normative desirability of two basic principles or conditions that Feyerabend identified as underlying Nagel’s and Hempel’s proposals: (a) the deducibility, or at least the consistency, between the reducing and the reduced theories and between the explaining law or theory and the description of the phenomenon to be explained; and (b) the even more fundamental meaning invariance between the main descriptive terms of the theoretical frameworks involved in reduction or explanation. Feyerabend claimed that Hempel’s and Nagel’s accounts failed as their assumptions were both inaccurate with respect to the history of science, as they do not reflect actual scientific practice, and objectionable with a view to the progress of scientific knowledge, as they would promote a conservative approach favouring traditionally well-established and entrenched theories as against novel ones.
Feyerabend’s vocal criticism shook North American philosophy of science, then largely dominated by logical empiricism, and prompted Hempel’s and Nagel’s replies, which appeared in print in the second half of the 1960s and in the early 1970s. Hempel and Nagel readily conceded some of Feyerabend’s points, such as the unsoundness and undesirability of the methodological rules exposed by Feyerabend; only to retort that these did not warrant Feyerabend’s most radical conclusions. Hempel clarified that Feyerabend’s methodological analysis was ‘completely mistaken’ and that Feyerabend offered ‘no support’ for his allegations; whereas Nagel was downright dismissive about what he considered Feyerabend’s ‘absurd remarks’ and ‘deliberate distortions’. Indeed, it seems that, although Feyerabend was steeped in the relevant literature, he substantially misunderstood the logical empiricist research programme, which, as emphasized by recent scholarship, revolved around a conception of philosophy of science as conceptual engineering and the ideal of explication. On the other hand, Hempel also recognised that Feyerabend’s insistence on the theory-ladenness affecting the meaning of any descriptive term, despite having been long acknowledged by logical empiricism, could have more far-reaching semantic and methodological consequences than previously envisaged and that these deserved to be thoroughly discussed. In this respect, Hempel’s later and more sophisticated attempts to provide a rational reconstruction of the structure of scientific theories can be seen at least partly as the result of Feyerabend’s challenging insights.
Feyerabend's Well-Ordered Science: How an Anarchist Distributes Funds
ABSTRACT. To anyone vaguely aware of Feyerabend, the title of this paper seems oxymoronic. For Feyerabend, it is often thought, science is an anarchic practice with no discernible structure. Against this trend, I argue that Feyerabend’s pluralism, once suitably modified, provides a plausible account of how to organize science which is superior to contemporary accounts.
Ever since the foundation of the National Science Foundation in 1950, there has been little philosophical analysis of how resources should be distributed amongst diverse and, often, competing research programs. In Science, Truth, and Democracy, Kitcher introduces the notion of a ‘well-ordered science’ where he provides his understanding of how science should be organized. In a nutshell, he posits that democratic deliberations should determine which theories are pursued and how they are prioritized. Since then, others have introduced more fine-grained models that, unwittingly, make use of Kitcher’s conception of a well-ordered science (Strevens 2003; Weisberg and Muldoon 2008; Zollman 2010). However, these models conflate the goals of research and the means of attaining those goals. This conflation is the result of assuming that the goals, plus our current scientific knowledge, determines the means of attaining them. For example, if a cure for cancer is a goal of research, we should increase funds for lines of research that seem ‘promising’ for finding such a cure where the ‘promise’ comes from our existing knowledge of cancer and its possible treatments (e.g., various subfields of oncology, radiation therapy, etc.). Against this, I argue that Feyerabend was correct in asserting that we should pursue theories that contradict currently accepted knowledge and appear to have no initial practical value. Therefore, the attainment of the goal (a cure for cancer) also requires funding research that conflicts with current background knowledge (e.g., music therapy) and research that appears to have nothing to do with cancer research by our current lights. In my talk, I will reconstruct the methodological argument Feyerabend provides for this view and show how it supported by the social scientific literature on theory pursuit which shows how solutions to problems came from unexpected sources (Roberts 1989; Foster & Ford 2003; McBirnie 2008).
After this, I go on to show how Feyerabend’s pluralism can provide an alternative method of organizing research. Feyerabend’s pluralism is essentially the combination of the principle of proliferation, which asserts that we should proliferate theories that contradict existing theories, and the principle of tenacity, which asserts that we can pursue theories indefinitely despite their theoretical and empirical difficulties (Shaw 2017). However, Feyerabend provides no means for balancing these principles, and therefore his own well-ordered science is incomplete. I argue that this balance can come from C.S. Peirce’s ‘economics of discovery’, which provides limits to the principles of proliferation and tenacity (cf. McKaughan 2008). I conclude by gesturing at recent work on the economics of theory pursuit that provides empirical confirmation of this view (Stephan 2012).
Vitaly Pronskikh (Fermi National Accelerator Laboratory, United States) Kaja Damnjanović (Laboratory for Experimental Psychology, Faculty of Philosophy, University of Belgrade, Serbia) Polina Petruhina (Faculty of Philosophy, Lomonosov Moscow State University, Russia) Arpita Roy (Max Planck Institute for the Study of Religious and Ethnic Diversity, Göttingen, Germany) Vlasta Sikimić (Institute for Philosophy, Faculty of Philosophy, University of Belgrade, Serbia)
Are in-depth interviews a must for ethnography of HEP labs?
ABSTRACT. We discuss preliminary results of an empirical, ethnographic study of an international high-energy physics (HEP) laboratory, the Joint Institute for Nuclear Research (JINR) in Dubna, Russia, conducted in summer 2018. The overall aim of the project was a comparative study of the answers explicitly given by the interviewees in response to survey questions and the answers to the same questions implied by the in-depth interviews. We deployed two contrasting research methods: structured surveys, using the set of OPTIMIST group questions [1], and in-depth interviews, drawing on a broad questionnaire. In most cases the in-depth interviews were conducted subsequent upon and complementary to the survey questions, while in a few cases interviews with informants were organized independently. In the survey stage, interviewees were asked questions about their freedoms and opportunities at work, for example how they used their own judgment in problem solving. The interviewees were asked to rank their answers on a Likert scale (from 1 to 7).
In the second stage, the interviews, scientists were asked general questions about their role and professional identity in the project, the structure of their working day, the hierarchies in the group, and the division of labor. Depending on their answers, follow-up questions were asked about the particularities of their duties and their attitudes in relation to others in the project. Each interview was conducted by a philosophy student in order to avoid biases stemming from the interviewers’ professional identity. Scientific staff and visitors to JINR from Russia, Belarus, Poland, Romania, Bulgaria, Cuba, and Chile were surveyed and interviewed, with 15 interviews recorded and analyzed in all.
Our analysis shows that the two strategies, when used separately, lead to contradictory results, since informants usually do not reflect deeply upon survey questions. In addition, they face considerable difficulty in responding to “philosophical” questions in general. An example can help here. On the survey question about the amount of freedom they can exercise, most participants chose the highest grade. However, the grade implied by the in-depth interviews was much lower: some of the scientists admitted that there were budgetary constraints and limited freedom in the choice and use of equipment, and that approaches to scientific work were in general conventional and repetitive.
We propose to discuss these findings in a comparative perspective, and suggest that in an ethnographic study of a laboratory, mixed methods combining quantitative and qualitative approaches are more effective: “philosophical” questions have to be posed and elicited by experts in the course of face-to-face interviews with scientists, rather than by obtaining their views in written surveys.
Romain Sauzet (Archives Poincaré - Nancy (Université de Lorraine)., France)
Cognitive and epistemic features: a tool to identify technological knowledge
ABSTRACT. Is what engineers and applied scientists are looking for in their inquiries a specific kind of knowledge? Is there such a thing as genuine technological knowledge, distinct from epistemic knowledge? At least three kinds of answer have been given to this question:
- no distinction between them: from the general analysis of technoscience (all knowledge is determined by technological instruments) to a radical pragmatism (all knowledge can ultimately be reduced to action);
- emancipation of technological knowledge: scientific knowledge, based on the contemplative idea of true belief with reasons, is distinct from technological knowledge, based on prescriptive rules (Bunge 1964);
- a distinction of degree: the two forms of knowledge do differ, but only in the varying degrees to which they exhibit common features (e.g. Meijers & Kroes 2013).
In this presentation, I propose to continue the “distinction of degree” approach by mobilising concepts from the debates around epistemic values. I defend the following thesis: the distinction must not be drawn between two kinds of knowledge, but rather between epistemic features (linked with contemplative knowledge) and cognitive features (values that are not strictly epistemic but are linked to epistemic elements, such as fruitfulness, breadth, etc.). This perspective is grounded in the fruitful distinction, launched by Laudan (2004) and developed by Douglas (2013), between cognitive and epistemic values, which sheds light on the epistemological differences that lead to the choice of theories, concepts, laws, etc.
Together with this theoretical clarification, an empirical survey in the field of nanotechnology allows me to describe more closely the importance and diversity of cognitive features such as the control of a phenomenon, the characterization of a surface, the functionalisation of a material, etc. Such objectives cannot be considered epistemic in a narrow sense (one can control something without knowing it), but neither can they be considered purely applied objectives (they cannot be reduced to practical features). Analysing them as cognitive features is a way to avoid both a radical distinction between two forms of knowledge and the blurring of the distinction.
References
- Bunge, M. 1964. Technology as applied science. Technology and Culture 7: 329–349.
- Douglas, Heather, 2013. “The Value of Cognitive Values”, Philosophy of Science, 80: 796–806.
- Laudan, Larry, 2004. “The Epistemic, the Cognitive, and the Social.” In Science, Values, and Objectivity, ed. Peter Machamer and Gereon Wolters, 14–23. Pittsburgh: University of Pittsburgh Press.
- Meijers, A. W. M., & Kroes, P. A., 2013. “Extending the scope of the theory of knowledge”, in de Vries, M. J., Hansson, S. O., & Meijers, A. W. M. (eds.), Norms in Technology, Dordrecht: Springer, p. 15-34.
ABSTRACT. Introduction. The problem of scientific communication acquires special significance in the context of the competitive struggle between scientific and unscientific knowledge. The complexity of scientific language contributes to the emergence of various communication problems in interdisciplinary and transdisciplinary settings. Scientific communication is not limited to the function of information transfer; it also provides a solution to the problem of legitimizing new knowledge in science and of recognizing the validity of a theory or of research results. The analysis of the intrascientific legitimation of knowledge requires a new arrangement of emphases in traditional ideas about the immanent logic of knowledge development and its substantiation, shifting them toward the mechanisms by which epistemic and social structures converge.
Methods. Communicative approaches to the study of scientific interaction are based on the ideas of critical theory (Adorno), poststructuralism (Foucault, Bourdieu), and social epistemology (Latour and Bloor), and study the processes by which scientific communication is exteriorized. The authors also use a scientometric approach.
Discussion. Members of the scientific community perceive themselves and their colleagues as the sole holders of responsibility for the development of their system of knowledge and its transmission from teachers to students and followers. T. Kuhn considered communication in such groups to be complete and professional judgment to be relatively unanimous. However, since the attention of scientific communities is concentrated on different subject areas, communication between separate scientific groups is difficult.
Approaches to the study of scientific communication can be divided into two types: the normative approach and the descriptive approach. The normative approach does not describe the studied phenomenon, as the descriptive one does, but provides recommendations on how the system should be arranged and how it should function. It amounts to the development of a standard logical language capable of relieving researchers of the necessity of solving the many problems that arise from the imperfections of ordinary language. The lapidary language of logic, however, is only suitable for interaction within disciplinary areas; it fails at the level of interdisciplinary interaction.
This problem is addressed by the second approach to the language of science and scientific communication: the descriptive approach. It involves a descriptive-evaluative method of research aimed at the empirical study and description of the behavior of individuals and groups in decision-making processes, and it began to be actively developed in the second half of the 20th century. It was then that scientometrics appeared as a discipline, applying cybernetic principles to the study of science. Within the framework of the descriptive approach, all processes of scientific information exchange rely on the communication scheme proposed by Shannon, which consists of the components: information source, message, transmitter, signal, channel, noise, receiver, etc. This model of communication, together with statistical research methods, allows scientometrics to combine the analysis of the growth in the number of publications with the analysis of their content, of the channels of scientific communication (scientific journals), and of the system of bibliographic references.
Historically, the scientific community was originally formed as a closed culture, inaccessible to the masses. In attempting to study the internal structure of the scientific community and its motives, researchers faced the so-called "LaPiere paradox": the discrepancy between a scientist's attitudes and his real actions. In modern times the connection between science and society has increased significantly, and we are witnessing a process of mutual integration of the two structures. If we consider the influence of society on science, then science is understood not as an objective form of knowledge; the emphasis shifts to the bias of scientific knowledge and to the involvement of science in social and political relations. The approach to scientific communication which combines the critical and scientometric approaches is based on semiotics. Representatives of this direction have built models of the communication process which are easily adapted to the analysis of scientific communication. If we take the classical communication model and consider an act of scientific communication as consisting of an addressee, addresser, context, contact, message, noise, and code, we can identify and explain difficulties encountered in the process of scientific communication that could not be explained in terms of the logical-normative communication model. These are difficulties caused primarily by the increasing role of interdisciplinary research and the increased flow of information in general.
Conclusion. Thus, problems arising in the process of scientific interaction, both within and outside the scientific community, are addressed with the help of various communicative approaches to the study of science communication.
ABSTRACT. Computer Simulation has been treated as a type of thought experiment (Di Paolo et al. (2000), experiment (Humphreys 1990, 2004; Barberousse et al. 2009; Morrison 2009, 2015; Morgan 2003; Parker 2009), and argument (Bisbert 2012, Beisbert and Norton). Consequently, the epistemic status of CS is weighed according to this ontological classification. For instance, if CS is an argument, then the strength of that argument gives the measure of the epistemic weight of a CS. Such classifications presuppose a neat demarcation between theory and experiment in the practice. The consequences of this assumption are reflected in the treatment of evidential relations by at CS. Contrary to the neat ontological classifications of CS, Peter Galison (2011) has noted the untidy relations between theory and experiment in CS. He employed the idea of ‘trading zones’ to explain the epistemic activities with CSs. Following Galison, this paper attempts to do two things. First, it shows the epistemic consequences of the various ontological characterization of CS. For this purpose, I invoke the traditional distinctions among data, phenomena, and evidence. Employing this distinctions, I show that no ontology of CS alone can adequately account for employment of CS as evidentially relevant scientific practice. In the second part, I discuss Galisons’ notion of trading zone and demonstrate the construction of evidence in a trading zone. This will eventually show that CS practice betrays philosopher’s urge for neat categories like theory and experiment. It will also show the inadequacy of neat ontological categorization in explaining and epistemically vetting the various features of CS like epistemic opacity. I will discuss a CS, published by Agrawal and Babu (2018), about the movement of microorganism in a biomotility mixture as a test case.
The paper is structured as follows. In the first section, I review the various ontological characterizations of CS and present the distinction among data, phenomena, and evidence. Section two presents the case study and identifies the data, phenomena, and evidence in it. Section three shows the limitations of the ontological characterizations by applying them to the test case and to the evidential relation drawn in the case study. Section four discusses the idea of the trading zone and explains how evidential relations in CSs can be characterized with this concept. Section five discusses the epistemic strategies within a 'trading zone' by reference to the case study; this is then used to demonstrate the inadequacy of the neat ontological characterizations of CS for accounting for the epistemic practice. I conclude by noting some possible objections as well as the limitations of the approach developed in the paper.
References
Agrawal, Adyant and Sujin Babu (2018). Self-organisation in a biomotility mixture of model swimmers. Physical Review E, 97.
Barberousse, A., Franceschelli, S., & Imbert, C. (2009). Computer simulations as experiments. Synthese, 169, 557–574
Beisbart, C. (2012). How can computer simulations produce new knowledge? European Journal for Philosophy of Science, 2, 395–434.
Beisbart, C., & Norton, J. D. (2012). Why Monte Carlo simulations are inferences and not experiments. International Studies in the Philosophy of Science, 26, 403–422.
Di Paolo, E., Noble, J., & Bullock, S. (2000). Simulation models as opaque thought experiments. In M. A. Bedau, J. S. McCaskill, N. H. Packard, & S. Rasmussen (Eds.), Artificial life VII: The seventh international conference on the simulation and synthesis of living systems (pp. 497–506). Cambridge, MA: MIT Press.
Galison, P. (2011). “Computer Simulations and the Trading Zone.” In From Science to Computational Science, edited by Gabriele Gramelsberger, 118-157. Zürich: Diaphanes.
Guala, F. (2002). Models, simulations, and experiments. In L. Magnani & N. Nersessian (Eds.), Model-based reasoning: science, technology, values (pp. 59–74). New York: Kluwer
Humphreys, P. (1990). Computer simulations. In A. Fine, M. Forbes, & L. Wessels (Eds.), PSA: Proceedings of the biennial meeting of the philosophy of science association 1990 (pp. 497–506). Dordrecht: Reidel.
Humphreys, P. (2004). Extending ourselves. Computational science, empiricism, and scientific method. Oxford: Oxford University Press.
Morgan, M. (2003). Experiments without material intervention: model experiments, virtual experiments and virtually experiments. In H. Radder (Ed.) The philosophy of scientific experimentation (pp. 216–235). Pittsburgh: University of Pittsburgh Press.
Morrison, M. (2009). Models, measurement and computer simulation: the changing face of experimentation. Philosophical Studies, 143(1), 33–57.
Parker, W.S. (2009). Does matter really matter? Computer simulations, experiments, and materiality. Synthese, 169(3), 483–496.
11:30
Elisángela Ramírez-Cámara (Instituto de Investigaciones Filosóficas, Universidad Nacional Autónoma de México, Mexico)
Is biased information ever useful (in the philosophy of science)?
ABSTRACT. In this paper, we introduce a two-stage approach for picking out biased yet useful information. At the first stage, the information is categorized according to how biased it is. The categorization is done in terms of confirmation bias, a generic label that groups a number of vices that include, but are not limited to, omission, distortion, and lack of independence between the theory informing the interpretation and the theory seeking confirmation. This allows the user to build a negative filter, one that, for instance, will rule out everything except those pieces of information that exhibit at most one specific bias from the list above.
At the second stage, the information goes through a positive filter. This filter is also built from a set of criteria, except that this time the items are epistemic virtues, and the goal is to use the filter to find the information worth keeping (despite its being somewhat biased). The approach is motivated by a couple of claims recently put forward by specific strands of virtue epistemology (VE):
That “knowledge needn’t manifest virtue but instead needs only to arise from the sort of motivated inquiry that a virtuous person would engage in.”
“[T]hat VE should focus less on achieving virtue and more on avoiding vice.” (Turri, Alfano, and Greco 2018, §9)
We argue that the strategy is quite flexible and thus powerful. Since the sets of criteria on which each filter is built are customizable, and the two filters are largely independent of each other, the general account can be fine-tuned to give rise to a very wide range of positions regarding what, if anything, counts as biased but useful information. Furthermore, we think that this generality will allow for a large number of applications.
After spelling out the details of the general strategy, we focus on its application to the problem of relating the history and the philosophy of science. In particular, we use it to point out that the difference between top-down and bottom-up approaches to the integration of history and philosophy of science is not that the former are bias-ridden while the latter are not. Rather, the appeal of bottom-up approaches comes down to the fact that they implicitly include strategies for either minimizing bias or making sure that whatever biased information gets through is there because it is also virtuous in some way. This immediately raises the question of whether a strategy like ours can level the field by providing tools that proponents of top-down approaches can use to minimize the risk of letting dangerously biased information through.
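As a rough illustration only (the bias labels, virtue labels, record format and thresholds below are my own placeholders, not the authors' formalism), the two-stage filtering strategy could be sketched as follows:

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    content: str
    biases: set = field(default_factory=set)    # e.g. {'omission', 'distortion'}
    virtues: set = field(default_factory=set)   # e.g. {'thoroughness', 'honesty'}

def negative_filter(items, max_biases=1):
    """Stage 1: rule out items exhibiting more than the tolerated number of biases."""
    return [i for i in items if len(i.biases) <= max_biases]

def positive_filter(items, required_virtues):
    """Stage 2: keep only items that also exhibit at least one of the listed virtues."""
    return [i for i in items if i.virtues & required_virtues]

corpus = [
    Item("case report A", biases={'omission'}, virtues={'thoroughness'}),
    Item("case report B", biases={'omission', 'distortion'}),
    Item("case report C", virtues={'honesty'}),
]
kept = positive_filter(negative_filter(corpus), {'thoroughness', 'honesty'})
print([i.content for i in kept])   # ['case report A', 'case report C']
```

The point of the sketch is simply that the two filters are independent parameters: varying the tolerated number and kind of biases, or the set of required virtues, yields the wide range of positions about "biased but useful" information that the abstract describes.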
Turri, John, Mark Alfano, and John Greco. 2018. “Virtue Epistemology.” In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta, Summer 2018. Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/sum2018/entries/epistemology-virtue/.
Mark Fischer (Ruprecht-Karls-University Heidelberg, Germany)
Pluralism and relativism from the perspective of significance in epistemic practice
ABSTRACT. My paper examines a recent concept of scientific pluralism introduced by Hasok Chang (2012). I focus on Chang’s (2015) response to the relativist critique based on Sociology of Scientific Knowledge (SSK) by Martin Kusch (2015). Furthermore, I discuss the separation of pluralism from relativism in general. My argument is that both positions have major deficits on a social-practical level. Therefore, I suggest improvements to Chang’s pluralism, which might also be of interest to relativists.
Kusch's main argument (2015) against Chang (2012) is that the Chemical Revolution happened because there was no acceptable reason for the scientific community to keep considering phlogiston theory. He elaborates that there never was a coherent phlogiston theory in the first place. In contrast, there were good social reasons to accept oxygen theory. Chang does not agree. According to him, we should not seek "literal truth" (Chang 2012, p. 219) in the form of correspondence theory, but other possible ways "to maximize our learning from reality" (Chang 2012, p. 220). What distinguishes Chang from relativism is his way of defending pluralism as an important ideal of scientific research, necessary for understanding additional aspects of reality. His "active scientific realism" rejects the idea of theory unification; Chang argues instead for the coexistence of competing but practically successful theories. Relativism based on SSK, in contrast, accepts "good social reasons" as an acceptable argument for unification (Kusch 2015, pp. 76, 78).
From my point of view, Chang is correct when he argues against unification based on social normativity (see also David Velleman (2015) for a parallel discussion of moral relativism). A relativist theory that does not accept pluralism as a general aspect of scientific research would not differ, on a practical level, from scientific realism of the kind found in critical rationalism. The debate between realism and anti-realism would be empty if realism and relativism shared the same concept of theory unification. In contrast, Chang's version of pluralism clearly has practical impact. Unfortunately, its position on the social standards of research is unsatisfactory. Furthermore, I am not convinced by Chang's (2018) more recent claim that metaphysical pluralism is a concept coherent with realism.
Therefore, I suggest a more confident form of relativism, inspired by Nelson Goodman (1995 [1978]), which includes pluralism. I am convinced that there is good reason to argue that nature, as well as the possibilities of interpreting it, is complex but constrained. However, from my perspective social-normative considerations play a major part in epistemology. As a result, pluralism must emphasize, for example, socially induced aims of research as a reason for theory pluralism. I agree that the constraints of what we call nature influence the possible variability of theories. However, this does not mean that pluralism cannot constitute a convincing answer to the complexity of different social communities and their epistemic practices.
Bibliography
Chang, Hasok
(2012): Is Water H2O? Evidence, Realism and Pluralism. Dordrecht, Heidelberg: Springer (Boston Studies in the Philosophy of Science, Vol. 293).
(2015): The Chemical Revolution revisited. In: Studies in History and Philosophy of Science Part A 49 (Supplement C), pp. 91–98. DOI: 10.1016/j.shpsa.2014.11.002.
(2018): Is Pluralism Compatible With Scientific Realism? In: Juha Saatsi (Ed.): The Routledge Handbook of Scientific Realism. Abingdon: Routledge, pp. 176–186.
Kusch, Martin (2015): Scientific pluralism and the Chemical Revolution. In: Studies in History and Philosophy of Science Part A 49 (Supplement C), pp. 69–79. DOI: 10.1016/j.shpsa.2014.10.001.
Goodman, Nelson ([1978] 1995): Ways of Worldmaking. 7th printing. Indianapolis: Hackett.
Velleman, J. David (2015): Foundations for Moral Relativism. Second Expanded Edition. Cambridge: Open Book Publishers. Available at http://gbv.eblib.com/patron/FullRecord.aspx?p=4340047.
The pluralist chemistry and the constructionist philosophy of science
ABSTRACT. Recent philosophy of chemistry promotes the view that chemistry is inherently pluralist (e.g. van Brakel, Chang, Lombardi, Ruthenberg, Schummer). The pluralism is due, firstly, to its character as a science that produces a multitude of new entities and thence new constellations in the world, and secondly, to the factual plurality of the aims of chemical inquiry and thence of research methods. I place this pluralism within my acquired philosophical background - the constructive realist account of science. Here I focus primarily on Rein Vihalemm's methodological pluralism and practical realism, and on Ronald Giere's perspectival pluralism and model-based account of science. According to Vihalemm, chemistry has a dual methodological nature: it is at once an exact science and a natural science, the latter aspect related to the "stuffness" of its research object - the substances. His practical realist insistence converges with Giere's perspectival pluralism, which leans mainly on the plurality of scientific apparatus and their interactions with the research object, rendering varying representations of it. In chemistry, one of the essential research apparatuses is the substances themselves, which are at the same time the object of research and hence part of the world being modelled. Substances have (potentially) a plurality of (types of) interactions among themselves, with the other laboratory apparatus and methods, and with the rest of the world. These engender various aims of modelling and necessitate different research methods. This plurality serves not only pragmatic but also epistemic aims; hence we should embrace a pluralist stance toward chemistry.
References
Chang, H. (2012). Is Water H2O? Evidence, Realism and Pluralism. Boston Studies in the Philosophy of Science. Dordrecht: Springer
Giere, R. (2004). How models are used to represent reality. Philosophy of Science 71(5), 742-752
Giere, R. (2006a). Scientific Perspectivism. Chicago and London: University of Chicago Press
Giere, R. (2006b). Perspectival pluralism. Minnesota Studies in the Philosophy of Science, Vol. XIX. Eds. Stephen H. Kellert, Helen E. Longino, C. Kenneth Waters, 26-41, Minneapolis: University of Minnesota Press
Lombardi, O. & Labarca, M. (2005). “The ontological autonomy of the chemical world”, Foundations of Chemistry, 7: 125-148.
Ruthenberg, K. (2016). Matter and Stuff – Two sides of the same medal? In: A. Le Moli and A. Cicatello (Eds.), Understanding Matter Vol. 2: Contemporary Lines, Palermo: New Digital Press, 153-168.
Ruthenberg, K. & Martinez González, J. C. (2017). Electronegativity and its multiple faces: persistence and measurement, Foundations of Chemistry 19: 61-75
Schummer, J. (2015) The Methodological Pluralism of Chemistry and Its Philosophical Implications, in: E. Scerri & L. McIntyre (eds.), Philosophy of Chemistry – Growth of a New Discipline, 57-72. Dordrecht etc.: Springer
Vihalemm, R. (2011). A monistic or a pluralistic view of science: Why bother? In: P. Stekeler-Weithofer, H. Kaden, N. Psarros (Eds.), An den Grenzen der Wissenschaft. Die “Annalen der Naturphilosophie” und das natur- und kulturphilosophische Programm ihrer Herausgeber Wilhelm Ostwald und Rudolf Goldscheid (79-93). Stuttgart/Leipzig: Sächsische Akademie der Wissenschaften zu Leipzig.
Vihalemm, R. (2016). Chemistry and the problem of pluralism in science: an analysis concerning philosophical and scientific disagreements. Foundations of Chemistry 18, 91-102
ABSTRACT. Gaisi Takeuti occasionally talked about the concept of set, and from his remarks we can see that he had a highly original view of it. However, Takeuti gave only scattered remarks and never developed his view of the concept systematically in its entirety. In this talk, we try to put together Takeuti's remarks on the concept of set, most of which are currently available only in Japanese ([1], [2], [3], [4], etc.), and to reconstruct his view of the concept in as systematic a manner as possible.
Takeuti's view of the concept of set can be summarized in three fundamental points. The first concerns the relationship between logic and the concept of set, of which Takeuti seems to have had a somewhat uncommon view. Whereas most set theorists treat (first-order) logic and the concept of set separately, Takeuti thought that a proof-theoretic analysis of higher-order predicate calculus, following the Hilbert-Gentzen formalist method, can illuminate the nature of the concept of set.
Secondly, Takeuti suggested that the concept of set to be analyzed in this way is a sort of reification of the concept of proposition, and claimed that such a concept of set differs from the one given in an axiomatic set theory such as ZF and is closer to the concept of set introduced by the comprehension principle in higher-order predicate calculus. Although Takeuti held that ZF is very important from a technical viewpoint and did a great deal of work related to it, he apparently thought that ZF is of limited foundational or philosophical interest, since it does not provide the desired intuition of the concept of set.
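The comprehension principle alluded to here has the familiar schematic form (a standard textbook formulation, not a quotation from Takeuti):

\[ \exists X\, \forall x\, \big( X(x) \leftrightarrow \varphi(x) \big), \]

where \(\varphi\) is any formula of the higher-order language in which \(X\) does not occur free; it is this principle, rather than the axioms of ZF, that Takeuti took to be closer to the intended concept of set.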
Thirdly, in some of his last works (e.g., [6]), Takeuti revisited the issue of the set-theoretic paradoxes and expressed the view that they show that the universe of sets is a growing universe, which looks similar to Michael Dummett's notion of the indefinitely extensible.
References
[1] G. Takeuti. From the standpoint of formalism I (in Japanese). Kagaku-kisoron Kenkyu (Studies of Philosophy of Science), 2(2):25–29, 1956.
[2] G. Takeuti. From the standpoint of formalism II (in Japanese). Kagaku-kisoron Kenkyu (Studies of Philosophy of Science), 2(2):15–19, 1956.
[3] G. Takeuti. From the standpoint of formalism III (in Japanese). Kagaku-kisoron Kenkyu (Studies of Philosophy of Science), 2(2):24–27, 1956.
[4] G. Takeuti. On set theory (in Japanese). Kagaku-kisoron Kenkyu (Studies of Philosophy of Science), 5(3):1–7, 1963.
[5] G. Takeuti. Proof Theory, second ed. Studies in logic and the foundations of mathematics. North-Holland, 1987.
[6] G. Takeuti. A retrospection and overview of mathematics (in Japanese). Kagaku-kisoron Kenkyu (Studies of Philosophy of Science), 28(2):55–58, 2001.
Combining Temporal and Epistemic Logic: A matter of points of view
ABSTRACT. In Temporal-Epistemic Logic (TEL) we need to evaluate sentences with respect to both times and epistemic states (or possible worlds). For example, a sentence such as "Mary believes/knows that Paul will come back tomorrow" must be evaluated with respect to a certain state (possible world) w and a certain time point t (now/today). According to the (modal) logical analysis of propositional attitudes, if the sentence above is true, then for any state v compatible with what Mary believes/knows at w, there is a time s ("tomorrow", later than t) such that, in v at s, Paul comes back.
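Written out schematically (my notation, not the author's), with \(R^{t}_{a}\) the doxastic/epistemic accessibility relation of agent \(a\) at time \(t\), \(F\) the future-tense operator, and \(p\) standing for "Paul comes back", the clause just described reads:

\[ \mathcal{M}, w, t \models B_{\mathrm{Mary}}\, F\, p \iff \text{for every } v \text{ with } w\, R^{t}_{\mathrm{Mary}}\, v \text{ there is an } s > t \text{ such that } \mathcal{M}, v, s \models p, \]

with the further constraint that \(s\) falls within tomorrow left implicit.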
This kind of analysis takes into account both epistemic states and times. So, we need to combine a Modal Epistemic Logic and a Temporal Logic into a more comprehensive logical system, which will precisely enable us to evaluate sentences that contain a temporal and an epistemic dimension, such as the examples above.
When we want to deal with a knowledge representation problem, it may be useful to consider the dynamic aspects of reasoning processes, that is, how knowledge changes over time as well as the different kinds of knowledge an agent can have about past and future. In TEL the first part of that challenge can be achieved in a relatively easy way by adding a temporal dimension to an epistemic system, as it happens in the sentence: Tomorrow Mary will believe that Paul came back (now/today/yesterday).
However, the second part of our goal is much more difficult to achieve, since it requires letting temporal operators occur in the scope of epistemic ones, as we can see in the following example: Mary believes that Paul will come back tomorrow.
The most notable aspect of TEL systems is that we have to combine two different points of view in them, so that, on one hand, temporal points (instants) are determined from the point of view of an observer placed outside the world, and, on the other hand, the epistemic alternatives of each agent (in each instant) are relative to that agent.
The main goal of this talk is to show that the most important problems we face when we try to build a TEL system can be overcome with the help of so-called Hybrid Logics, since hybrid formal languages include an interesting set of syntactic mechanisms (such as nominals, the satisfaction operator and binders) which enable us to refer to specific time instants. With these we will be able to express formally how knowledge (or beliefs) change over time and what kind of knowledge (or beliefs) an agent can hold about the past and the future.
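For illustration (my own schematic rendering, using standard hybrid-logic devices rather than any notation from the talk), with nominals \(i\) naming today and \(j\) naming tomorrow, the satisfaction operator \(@\) lets us pin the problematic sentence to specific instants:

\[ @_{i}\, B_{\mathrm{Mary}}\, @_{j}\, \mathrm{ComeBack(Paul)}, \]

read: at the instant named \(i\), Mary believes that at the instant named \(j\) Paul comes back. The binder \(\downarrow\) plays a similar role when the relevant instant is not antecedently named but has to be fixed as the one of evaluation.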
References
Blackburn, P. (1994): “Tense, Temporal Reference and Tense Logic”, Journal of Semantics, vol. 11, pp. 83-101.
Engelfriet, J. (1996): “Minimal Temporal Epistemic Logic”, Notre Dame Journal of Formal Logic, vol. 37, nº. 2, Spring 1996, pp. 233-259.
Lenzen, W. (2004): “Epistemic Logic”, in Niiniluoto, I., Sintonen, M. and Wolenski, J. (eds.), Handbook of Epistemology, Kluwer Academic Publishers, pp. 963-983.
11:30
Ayse Sena Bozdag (Munich Center for Mathematical Philosophy (Ludwig Maximilian University of Munich), Germany)
Modeling Belief Base Dynamics Using HYPE Semantics
ABSTRACT. Mainstream approaches to dynamic doxastic logics modeled within the DEL paradigm use possible worlds semantics and treat the belief modality as a diamond-like modal operator over epistemic alternatives. However, the application of such semantics in doxastic or epistemic contexts results in highly idealized epistemic agents. To address some of these concerns about dynamic doxastic frameworks, I present a new framework with a focus on information states as the basis of beliefs, both at the static level of belief formation and at the level of belief dynamics. I use an extended version of the HYPE model [1], with a preference ordering on the subsets of the situation space and a binary belief relation between situations, to support a static belief modality and dynamic belief-change operators. On the static side, pieces of information are denoted by propositions supported at situations, and the model explicitly represents possibly inconsistent and incomplete collections of information using sets of situations. The static belief operator is a non-monotonic and paraconsistent modality, which yields a consistent belief set at any situation for any collection of information. On the dynamic side, I present dynamic operators for belief revision and belief contraction, together with their duals. The dynamic operators lead to new models, obtained by set-theoretic expansion of the agent's total information with the trigger information and by a reordering of preferences for successful revision and contraction. Since the dynamic operators may lead to a number of new models, each operator comes with a dual. By modeling changes of belief as consequences of changes in the agent's information, the model meets expectations on belief change such as the withdrawal, upon revision or contraction, of beliefs that were merely inferred from an old belief. The dynamic belief base model shares some motivations and aspects with evidence models [2], epistemic models for sceptical agents [3], and theories of belief bases [4].
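A toy sketch (my own much-simplified encoding, not Leitgeb's HYPE semantics and not the model of this talk; the literal-based situations and the clash-counting preference are illustrative assumptions) of how a belief operator defined over preferred situations can remain consistent and non-monotonic over possibly inconsistent information:

```python
from typing import FrozenSet, Set

Situation = FrozenSet[str]   # a situation is a set of literals, e.g. {'p', '~q'}

def negation(literal: str) -> str:
    return literal[1:] if literal.startswith('~') else '~' + literal

def supports(sit: Situation, literal: str) -> bool:
    """A situation supports a literal iff the literal belongs to it."""
    return literal in sit

def preferred(info: Set[Situation]) -> Set[Situation]:
    """Toy preference ordering: prefer situations with the fewest internal clashes."""
    def clashes(s: Situation) -> int:
        return sum(1 for l in s if not l.startswith('~') and negation(l) in s)
    if not info:
        return set()
    best = min(clashes(s) for s in info)
    return {s for s in info if clashes(s) == best}

def believes(info: Set[Situation], literal: str) -> bool:
    """Belief: supported in every preferred situation, negation supported in none,
    so the resulting belief set stays consistent even over inconsistent information."""
    best = preferred(info)
    return (bool(best)
            and all(supports(s, literal) for s in best)
            and not any(supports(s, negation(literal)) for s in best))

def revise(info: Set[Situation], literal: str) -> Set[Situation]:
    """Belief-base style revision: expand each situation with the incoming literal;
    the reordering of preferences is left to `preferred`."""
    return {frozenset(s | {literal}) for s in info}

# Usage: an agent with partial, partly conflicting information about p and q.
state: Set[Situation] = {frozenset({'p', 'q'}), frozenset({'p', '~q'})}
print(believes(state, 'p'))    # True: p is supported in every preferred situation
print(believes(state, 'q'))    # False: q fails in one of them
state = revise(state, '~p')    # incoming information: ~p
print(believes(state, 'p'))    # False: the belief in p is withdrawn (non-monotonicity)
```

In the toy run, revising with '~p' makes every situation support both 'p' and '~p', so the belief in 'p' is withdrawn rather than trivializing the belief set, which is the kind of behaviour the abstract requires of a paraconsistent, non-monotonic belief modality.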
References
[1] Leitgeb, Hannes (2017): HYPE: A System of Hyperintensional Logic.
[2] van Benthem, Johan, Fernandez-Duque, David and Pacuit, Eric (2014): “Evidence and plausibility in neighborhood structures”. Annals of Pure and Applied Logic 165 (1), 106-133.
[3] Bílková, Marta, Majer, Ondrej and Peliš, Michal (2015): “Epistemic logics for sceptical agents”. Journal of Logic and Computation 26 (6), 1815-1841.
[4] Rott, Hans (2001): Change, Choice and Inference: A Study of Belief Revision and Nonmonotonic Reasoning. (No. 42) Oxford University Press.
12:00
Libor Behounek (Institute for Research and Applications of Fuzzy Modeling, University of Ostrava, NSC IT4Innovations, Czechia)
A formalism for resource-sensitive epistemic logic
ABSTRACT. Standard epistemic logic notoriously suffers from the logical omniscience paradox, associated with overidealized deductive powers of agents. A possible solution to the paradox, based on resource-aware reasoning modeled in semilinear contraction-free substructural logics (better known as fuzzy logics), has been informally sketched by Behounek (2013). A recent formalization of fuzzy intensional semantics (Behounek & Majer, 2018) makes it possible to elaborate the proposal in detail.
The proposed solution starts with distinguishing the actual, potential, and feasible (or feasibly derivable) knowledge of an agent. Logical omniscience is only troublesome for the feasible knowledge, as the potential knowledge does include all logical truths and the actual knowledge is not closed under logical deduction. Moreover, feasible knowledge is apparently a gradual notion, as longer derivations require more resources (such as time, memory, etc.), and so are less feasible than shorter ones. The gradation of feasible knowledge can conveniently be represented by means of formal fuzzy logic (Cintula, Hajek, & Noguera, 2011, 2015), whose truth values are most often interpreted in terms of gradual truth (Hajek, 1998). The fact that most fuzzy logics belong to contraction-free substructural logics makes them particularly suitable for modeling resource-awareness, and feasibility in general, since the fusion of resources can be represented by non-idempotent conjunction (Behounek, 2009). The suitability of a particular fuzzy logic for resource-aware reasoning is determined by the intended way of combining the resources - e.g., Goedel logic for maxitive resources such as erasable memory, product or Lukasiewicz logic for bounded or unbounded additive resources such as computation time, etc.
In the proposed formalism, the feasible knowledge of an agent is represented by a unary modality K in the framework of a suitable propositional fuzzy logic. The truth degree of the feasible knowledge KA of a given proposition A then represents the feasibility of deriving A by the agent from the actual knowledge. The modal axioms of standard propositional epistemic logic that express the agent’s inference abilities are modified to reflect the costs of derivation: e.g., the axiom (K) of logical rationality contains an additional propositional constant c(MP) expressing the cost of applying the rule of modus ponens by the agent. The sub-idempotence of conjunction then decreases the lower bound for the truth degree of KA with each step in the derivation of A by the agent. This resolves the paradox of logical omniscience, since propositions that require long derivations are only guaranteed to have a very small (or even zero) truth degree of feasible knowability.
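In rough outline (my reconstruction of the idea, not the author's exact axiomatization), writing \(\&\) for the non-idempotent strong conjunction and \(c_{\mathrm{MP}}\) for the cost constant, the costed analogue of the axiom (K) has the shape

\[ K(A \to B)\ \&\ K A\ \&\ c_{\mathrm{MP}}\ \to\ K B , \]

so each application of modus ponens adds one more factor of \(c_{\mathrm{MP}}\): after an \(n\)-step derivation the guaranteed lower bound on the truth degree of the derived \(K\)-formula is of the form \(\|KA_{0}\| * \|c_{\mathrm{MP}}\| * \cdots * \|c_{\mathrm{MP}}\|\) (\(n\) factors, \(*\) being the sub-idempotent truth function of \(\&\)), which is non-increasing in \(n\) and, depending on the chosen fuzzy logic, may become arbitrarily small or zero for long derivations.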
The apparatus of fuzzy intensional semantics (Behounek & Majer, 2018) facilitates a smooth formalization of the described resource-sensitive epistemic logic, by means of a faithful translation into Henkin-style higher-order fuzzy logic. The translation provides a syntactic method for proving tautologicity in the epistemic logic by deriving the translated formula in first-order fuzzy logic (cf. Hajek, 1998, ch. 8). The resulting formalism admits considering multiple epistemic agents, nesting the epistemic modalities, and freely combining factual and epistemic subformulae. The agents' actual and potential knowledge can be represented in the framework as well, by using thresholds on the feasibility degrees. Additionally, the use of fuzzy logic automatically accommodates gradual propositions in the formalism.
The talk will present the details of the proposed apparatus, including a selection of its metatheoretical properties; discuss the plausibility of its assumptions and resource-sensitive modifications of epistemic principles; and examine its relation to propositional dynamic logic and other related approaches to epistemic logic.
References:
Behounek L. (2009): Fuzzy logics interpreted as logics of resources. In M. Pelis (ed.): The Logica Yearbook 2008, pp. 9-21. College Publications.
Behounek L. (2013): Feasibility as a gradual notion. In A. Voronkov et al. (eds.): LPAR-17-short. EasyChair Proceedings in Computing 13: 15-19.
Behounek L., Majer O. (2018): Fuzzy intensional semantics. Journal of Applied Non-Classical Logic 28: 348-388.
Cintula P., Hajek P., Noguera C. (2011, 2015), eds.: Handbook of Mathematical Fuzzy Logic, vol. I-III. College Publications.
Hajek P. (1998): Metamathematics of Fuzzy Logic. Kluwer.
ABSTRACT. Any attempt to give a formal representation of truth is challenged by sentences like the Liar ("What I am saying is not true") and the Truth-teller ("What I am saying is true"). Both sentences are felt to be problematic, but in different ways. While the Liar looks paradoxical, meaning that either assuming it to be true or assuming it to be false seems to contradict our intuitions about truth, the Truth-teller looks arbitrary, in the sense that assuming it to be true and assuming it to be false seem to be equally defensible theses, according to our intuitions about truth.
Most formal representations of truth validate sets of sentences which exclude both Liars and Truth-tellers and, more generally, aim to validate only sentences which are intuitively definite, in the sense of being neither paradoxical nor arbitrary. Clearly, even though most formal representations agree in assessing the Liar as paradoxical and the Truth-teller as arbitrary, the technical notions intended to capture the intuitive concepts of paradoxical, arbitrary and definite differ from one proposal to another. However, many formal representations can be recast so as to show a common pattern that we are going to illustrate.
Some formal accounts identify a family of truth assignments which are "foreseeable", in the sense of respecting certain provisos: for instance, being a fixed point of some monotone evaluation scheme, arising as the set of stabilities of some process of revision, or being a model of an axiomatic theory. What is common to all these approaches is that, usually, the Liar is already excluded from the domains of the foreseeable truth assignments, while the Truth-teller does belong to the domain of some foreseeable truth assignment and is only filtered out by applying the following (implicit) meta-norm of "definiteness": whenever a formal account of truth chooses a designated truth assignment, this is the only admissible way of assigning classical truth values to the sentences in its domain.
In my talk I will contrast, both from a theoretical and from a philosophical point of view, three different ways of formalising the above intuitive notion of “definiteness” of a given set of sentences, namely, (1) determinacy (= there exists only one foreseeable truth assignment whose domain is the given set); (2) intrinsicity (= there exists a foreseeable truth assignment for the given set which is compatible with any other foreseeable truth assignment); and (3) definiteness properly named (= there exists only one truth assignment for the given set which is compatible with any other foreseeable truth assignment), and I will investigate their different philosophical motivations and their theoretical pros and cons.
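In schematic shorthand (mine, not the author's): let \(\mathcal{F}\) be the family of foreseeable truth assignments, \(S\) the given set of sentences, and write \(f \approx g\) when \(f\) and \(g\) agree on every sentence on which both are defined. Then

\[
\begin{aligned}
\text{determinacy of } S:&\quad \text{there is exactly one } f \in \mathcal{F} \text{ with } \mathrm{dom}(f) = S;\\
\text{intrinsicity of } S:&\quad \text{there is some } f \in \mathcal{F} \text{ with } \mathrm{dom}(f) = S \text{ and } f \approx g \text{ for every } g \in \mathcal{F};\\
\text{definiteness of } S:&\quad \text{there is exactly one truth assignment } f \text{ with } \mathrm{dom}(f) = S \text{ and } f \approx g \text{ for every } g \in \mathcal{F}.
\end{aligned}
\]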
References
Gupta A., Belnap N. (1993), The revision theory of truth, MIT Press, Cambridge MA.
Halbach V. (2011), Axiomatic theories of truth, CUP, Cambridge UK.
Kripke S. (1975), An outline of a theory of truth, Journal of Philosophy 72: 690-716.
Visser A. (1989), Semantics and the Liar paradox. In: Handbook of Philosophical Logic, vol. 4.
11:30
Wen-Fang Wang (Institute of Philosophy of Mind and Cognition at National Yang Ming University, Taiwan)
A Three-Valued Pluralist Solution to the Sorites Paradox
ABSTRACT. Many philosophers believe that the three-valued approach to the sorites paradox is the wrong approach to solving the paradox. There are two main objections to the three-valued approach in the literature. One focuses on the fact that the approach is truth-functional and thereby cannot do justice to the phenomenon of penumbral connection. The other says that the three-valued approach falls foul of 'the problem of higher-order vagueness' by imposing sharp cut-off points on a sorites series where there should be none. Disagreeing with this popular and negative opinion, the author proposes and endorses a solution, which he calls 'three-valued pluralism', to the age-old sorites paradox. In essence, it is a three-valued semantics for a first-order language with identity, with the additional suggestion that a vague language has more than one correct interpretation. Unlike the traditional three-valued approach to a vague language, so the author argues, three-valued pluralism can accommodate both the phenomenon of higher-order vagueness and the phenomenon of penumbral connection when equipped with 'suitable conditionals'. Specifically, the author shows how to define a definite operator within this pluralist framework to reply to Williamson's objections to the three-valued approach in (2004) and (2007), and shows how to define a conditional operator within this framework to accommodate the phenomenon of penumbral connection. The author also shows that three-valued pluralism is a natural consequence of a restricted form of the Tolerance Principle ((RTP): If it is correct for a subject S to classify x as a member of F+ (or F−) on an occasion o, and y and x are 'very similar' in F-relevant respects for S on o, then it is also correct for S to classify y as a member of F+ (or F−) on o, so long as, after it is so classified, the overall difference in F-relevant respects between any member of F+ and any member of F− remains salient for S on o.) and a few related ideas, and argues that (RTP) is well motivated by considerations about how we learn, teach, and use vague predicates. Finally, the author compares his proposal with Raffman's recent proposal in (2014).
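For orientation only (the author's own choice of conditional and of the definite operator is developed in the paper and in Wang 2016, 2018), one familiar candidate for such a 'suitable conditional' is the Łukasiewicz three-valued conditional, which, unlike the strong Kleene conditional, evaluates to true (1) when antecedent and consequent are both indefinite (1/2):

\[
\begin{array}{c|ccc}
\rightarrow & 1 & \tfrac{1}{2} & 0\\
\hline
1 & 1 & \tfrac{1}{2} & 0\\
\tfrac{1}{2} & 1 & 1 & \tfrac{1}{2}\\
0 & 1 & 1 & 1
\end{array}
\]

(rows give the value of the antecedent, columns the value of the consequent).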
References:
Fine, K. (1975). Vagueness, truth and logic. Synthese, 30, 265–300.
Goguen, J. A. (1969). The logic of inexact concepts. Synthese, 19, 325–73.
Halldén, S. (1949). The logic of nonsense. Uppsala: Uppsala Universitets Arsskrift.
Raffman, D. (2014). Unruly words—a study of vague language. Oxford: Oxford University Press.
Wang, W. F. (2016). Three-valued plurivaluationism: A reply to Williamson's criticisms on the three-valued approach to the sorites paradox. The Philosophical Forum, XLVII(3–4), 341–360.
Wang, W. F. (2018). Three-valued semantic pluralism: A defense of a three-valued solution to the sorites paradox. Synthese, 195(10), 4441–4476.
Williamson, T. (2004). Past the linguistic turn? In B. Leiter (Ed.), The future for philosophy. Oxford: Clarendon Press.
Williamson, T. (2007). The philosophy of philosophy. Oxford: Blackwell Publishing.
ABSTRACT. The Ross paradox and the paradox of free choice permission pose problems that have troubled deontic logic for decades, and though they have many times been pronounced solved, they keep returning 'alive and kicking'. Both paradoxes center on the ambiguity characteristic of the use of the word "or" in normative/deontic discourse or, from another viewpoint, on the ambiguity of the concept of disjunction in the related philosophical frameworks.
In my paper, I will first analyze the background of the paradoxes and try to remove some confusions that can be found in the relevant literature. I will argue that if we want to get a grasp on the relevant issues, we have to make clear what we aspire to achieve when we strive to build a system of deontic logic. I will then argue that the paradoxes can be solved (or dissolved) if we approach the problems from the perspective of a model language game proposed by David Lewis (Lewis 1979a, Lewis 1979b).
Lewis' language game involves three players: the Master, the Slave, and the Kibitzer. The Master's moves consist in issuing commands and permissions to the Slave, whose moves consist in doing what the Master requires. The Kibitzer's moves are his descriptions of the normative situation. Situations (or possible worlds) conforming to the Master's commands and permissions together make up the Sphere of Permissibility. At the start of the game, the Sphere of Permissibility does not differ from the Sphere of Accessibility, i.e. the space of all possible situations (worlds) that come into consideration as alternatives to the actual world of the language game.
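A toy rendering of the game's scorekeeping (my own simplified encoding, not Lewis's formulation; the two action-atoms and the crude permission rule are illustrative assumptions):

```python
from itertools import product

# Worlds are the sets of action-atoms that get done in them; the sphere of
# permissibility starts out as the whole sphere of accessibility.
ATOMS = ('fetch_water', 'sing')
ACCESSIBILITY = {frozenset(a for a, done in zip(ATOMS, vals) if done)
                 for vals in product((True, False), repeat=len(ATOMS))}

def command(sphere, atom):
    """Master: 'Do atom!' -- worlds where it is not done drop out of the sphere."""
    return {w for w in sphere if atom in w}

def forbid(sphere, atom):
    """Master: 'Do not atom!' -- worlds where it is done drop out."""
    return {w for w in sphere if atom not in w}

def permit(sphere, atom):
    """Master: 'You may atom.' -- crudely re-admit accessible worlds where it is done."""
    return sphere | {w for w in ACCESSIBILITY if atom in w}

# Kibitzer's descriptions of the normative situation:
def obligatory(sphere, atom):
    return all(atom in w for w in sphere)

def permitted(sphere, atom):
    return any(atom in w for w in sphere)

sphere = set(ACCESSIBILITY)               # at the start, nothing is settled
sphere = command(sphere, 'fetch_water')   # Master: "Fetch water!"
print(obligatory(sphere, 'fetch_water'))  # True  -- Kibitzer: "The Slave must fetch water"
print(permitted(sphere, 'sing'))          # True  -- singing has not been forbidden
sphere = forbid(sphere, 'sing')           # Master: "Do not sing!"
print(permitted(sphere, 'sing'))          # False -- Kibitzer: "The Slave must not sing"
```

Note that the crude `permit` rule above may also re-admit worlds that violate earlier commands; deciding which worlds a permission is meant to bring back in is essentially the difficulty Lewis discusses as the problem about permission, and it is deliberately left unresolved in this sketch.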
To manifest that the languages of the players are different, I will suppose that the Master only uses sentences in the imperative mood and permissive sentences that (typically) employ the phrase "you may...". The Kibitzer, on the other hand, has in his repertoire only sentences describing the normative situation, i.e. statements about what the Slave is obliged (must), forbidden (must not) or allowed (may) to do.
I will argue that though both the language of the Master and the language of the Kibitzer are governed by certain logical rules, we should not suppose that these rules are identical or parallel. The main moral of the paper is that deontic logic should not be seen as a homogeneous discipline but as a complex of complementary logical systems that focus on different linguistic discourses and have different aspirations.
SELECTED REFERENCES
Gabbay, D. et al. (Eds.) (2013): Handbook of deontic logic and normative systems. London: College Publications.
Hansen, J. (2006): “The Paradoxes of Deontic Logic: Alive and Kicking”, Theoria 72, 221-232.
Lewis, D. (1979): “Scorekeeping in a Language Game”, Journal of Philosophical Logic 8, 339-359.
Makinson, D. (1999): “On a Fundamental Problem of Deontic Logic”, in: McNamara and Prakken (1999), 29-53.
This symposium examines the evidential relations between history and philosophy from various angles. Can the history of science show evidential support and falsifications for the philosophical theories about science? Or is it always a case of stalemate in which each reconstruction of history is only one possible reconstruction amongst several others? One suggestion has naturally been that the whole approach aimed at testing and comparing alternative philosophical models by recourse to historical data is misguided at worst, or in need of serious reformulation at best.
The tradition that looms large over this discussion is the attempt to turn philosophy of science into an empirically testable discipline. History and philosophy of science is then understood as a science of science in a close analogy to the natural sciences. One view is that philosophers provide theories to test and historians produce data by which these theories are tested. The most vocal and well-known representative of this approach is the VPI (Virginia Polytechnic Institute) project. The two most notable publications of this endeavour are “Scientific Change: Philosophical Models and Historical Research” and Scrutinizing Science: Empirical Studies of Scientific Change. A conference organised in 1986 preceded the latter publication. The key idea is testability; that historical case studies perform the role of empirical validation or falsification of the philosophical models of science. In this way, case studies were meant to provide ‘a reality check for philosophy of science.’
It is the role and status of case studies, and the rationale for using case studies, that is brought back to the table and forms the focus of this symposium. More generally, the authors probe the appropriate evidential relationship between history and philosophy. The symposium makes evident a new sticking point in the debate regarding the empirical accountability of philosophical theories: should very recent science rather than the history of science function as a source of empirical information? Or should we rather focus on finding more sophisticated evidential modes for the history of science?
Raphael Scholl (Department of History and Philosophy of Science, University of Cambridge, UK)
Scenes from a Marriage: On the confrontation model of history and philosophy of science
ABSTRACT. According to the "confrontation model", integrated history and philosophy of science operates like an empirical science. It tests philosophical accounts of science against historical case studies much like other sciences test theory against data. However, the confrontation model's critics object that historical facts can neither support generalizations nor genuinely test philosophical theories. Here I argue that most of the model's defects trace to its usual framing in terms of two problematic accounts of empirical inference: the hypothetico-deductive method and enumerative induction. This framing can be taken to suggest an unprofitable one-off confrontation between particular historical facts and general philosophical theories. I outline more recent accounts of empirical inquiry, which describe an iterative back-and-forth movement between concrete (rather than particular) empirical exemplars and their abstract (rather than general) descriptions. Reframed along similar lines, the confrontation model continues to offer both conceptual insight and practical guidance for a naturalized philosophy of science.
ABSTRACT. In this paper, we tackle the contribution that history of science can make to the problem of rule-choice, i.e., the choice from among competing methodological rules. Taking our cue from Larry Laudan’s writings, we extensively discuss what we call historicist naturalism, i.e., the view that history of science plays a pivotal role in the justification of rules, since it is one source of the evidence required to settle methodological controversies. As we illustrate, there are cases of rule-choice that depend on conceptual considerations alone, and in which history of science does not factor. Moreover, there are cases in which methodological change is prompted – and explained – by empirical information that is not historical in nature: as suggested by what we call scientific naturalism, the justification of methodological choices comes from our knowledge of the structure of the world, as expressed by our currently accepted scientific theories. As we argue, due to its backward-looking character, historicist naturalism does not satisfactorily deal with the case of newly introduced rules, for which no evidence concerning their past performance is available. In sum, we conclude, the contribution that history of science can make to rule-choice is more modest than Laudan suggests.
The study of Climate Change as a philosophical subject was until recently at a very early stage (Winsberg 2018). The first entry related to 'Climate Science' in the Stanford Encyclopedia of Philosophy appeared as late as 2018 (Parker, 2018). This is all the more striking if we recall several of the main issues related to Climate Change and the scientific practice associated with it: epistemic trust, models, risk, uncertainty, probability, values, data, instruments, and complexity, among many others.
Also, the bridge between research on Climate Change and the policy and social spheres creates problems that are not settled, such as epistemic trust or, in some communities, the relation between science and non-science.
At the same time, the development of the philosophical study of Climate Change can convey new educational insights for teaching a diversity of philosophical topics. This is particularly relevant to the philosopher of science engaged in 'socially relevant philosophy', but also to all other philosophers of science.
This Symposium aims to bring together philosophers of science willing to shed light upon the above issues and related ones.
References:
Winsberg, Eric (2018) Philosophy and Climate Science. Cambridge: Cambridge University Press
Katzav, Joel & Parker, Wendy S. (2018). Issues in the Theoretical Foundations of Climate Science. Studies in History and Philosophy of Modern Physics 63: 141-149.
Parker, Wendy, "Climate Science", The Stanford Encyclopedia of Philosophy (Summer 2018 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/sum2018/entries/climate-science/>.
History and Epistemology of Climate Model Intercomparison Projects
ABSTRACT. Having started almost 30 years ago with the Atmospheric Model Intercomparison Project, climate model intercomparison projects are now at the core of climate research. In particular, the Coupled Model Intercomparison Project (CMIP) has become “one of the foundational elements of climate science” (Eyring 2016). Since its creation in the mid-1990s, it has evolved over five phases, involving all major climate modeling groups in the world. In addition to their role in climate research, these phases have held a central place in international assessments of climate change - in IPCC reports in particular - thereby providing guidance to decision makers.
With more and more people concerned by the results of its experiments, CMIP has been put at the center of contradictory interests (Taylor 2012). In particular, historically, CMIP has had to combine a role for climate research - to help scientists understand climate - and for society - to assess the state of human knowledge about climate and climate change. Yet, the ability of CMIP to play these two roles led to some debates: both the heuristic value of intercomparison projects and the interpretation of assessments from multi-model ensembles for policy guidance have been questioned by climate scientists and philosophers of science (e.g. Knutti 2010, Lenhard and Winsberg 2010, Parker 2011, Winsberg 2018). Taking into account these debates, we will try to show how intercomparison projects can still be useful both for climate research and for policy guidance.
It is often considered that intercomparison projects help climate modeling research because (1) the comparison of simulation outputs helps to diagnose the causes of differences between models, and (2) the agreement between models makes it possible to confirm robust results. However, both views have been challenged. (1) has been challenged by Lenhard and Winsberg (2010) because, according to them, complex numerical climate models face a strong form of confirmation holism. This holism significantly limits the analytical understanding of climate models and thus makes it difficult to interpret the differences between simulations in a climate model intercomparison. (2) has been challenged by Parker (2011) and Winsberg (2018), who have shown that, in climate multi-model ensembles, the claim that robustness increases confidence is arguable.
In response, we will claim that intercomparison projects nevertheless help climate modeling research, mainly because (3) they act as an infrastructure that helps organize and coordinate climate modeling activity, thereby making climate modeling research more efficient.
Second, we will clarify the role of intercomparison projects for decision makers by proposing a new interpretation of climate multi-model ensembles when they are used for policy guidance. To that end, we will first give criteria for determining how a single simulation can be most useful to decision makers. This will lead us to define a numerical model as a personal tool, expressing the subjective but rational voice of the climate modeler(s). With this definition, we will then interpret intercomparison protocols as scientific polls, which serve mainly a synthesis role.
References:
Eyring, V. et al., 2016. “Overview of the Coupled Model Intercomparison Project phase 6 (CMIP6) experimental design and organization”, Geoscientific Model Development 9(5), 1937-1958.
Knutti, R., 2010. “Challenges in combining projections from multiple climate models”, Journal of Climate 23 (10), 2739-2758.
Lenhard, J., Winsberg, E., 2010. “Holism, entrenchment, and the future of climate model pluralism”, Studies in History and Philosophy of Science Part B 41 (3), 253-262.
Parker, W. S., 2011. “When climate models agree: the significance of robust model predictions”, Philosophy of Science 78 (4), 579-600.
Taylor, K. E. et al., 2012. “An overview of CMIP5 and the experiment design”, Bulletin of the American Meteorological Society 93 (4), 485-498.
Winsberg, E., 2018. “What does robustness teach us in climate science: a re-appraisal”, Synthese.
Fragmented authoritarian environmentalism, nationalism, ecological civilisation and climate change in China
ABSTRACT. The complexity and urgency embedded within climate change call for more deliberation in different social, political and philosophical contexts. This paper reflects on how climate change has been constructed under, and has also shaped, what I frame as fragmented authoritarian environmentalism in China, in order to argue that more of the changing social reality needs to be included in the discussion of the philosophy of climate change, and to broaden the discussion of green modernity on the basis of political and communication strategy.
In this paper, I demonstrate that what has appeared in China through its governance of climate change is a clearer form of fragmented authoritarianism in environmental matters rather than a non-participatory totalitarianism. The nationalism supported and framed by the central state with regard to climate change has intensified the ideology of authoritarianism, while the notion of ecological civilisation has reinforced the national interest and at the same time advanced beyond it, promoting cosmopolitan responsibility for everyone, and therefore offering more space for different actors and interactions in the whole regime.
Nationalism, or more accurately aggrieved nationalism, is the main mental root of Chinese scepticism about climate change. Though subdued and not on the same scale as in some Western countries, Chinese scepticism about climate change exists; it mostly regards climate change as a plot by the developed world, and it has been strongly influenced by the autonomous central authority with its “official” stances and attitudes. On the one hand, this government-led conspiracy theory can be seen as identity politics which stimulates and consolidates social cohesion and trust in the central political authority; on the other hand, it has managed to steer public opinion through epistemic distrust of the scientific knowledge of climate change, a distrust often seen in troubled international negotiations and conflicts of interest. Moreover, this scepticism has largely disappeared from the scene along with the shift in China's stance in the international arena, and the national objective of building an ecological civilisation, which encompasses many human-environment issues, has been raised. This nationalism-based political and discursive strategy is shifting people's perception of the distributive justice and restitution of climate change from national dignity to national soft power and cosmopolitan responsibility. Critically, climate change, along with many other environmental issues, has lowered the barrier for actors beyond the government to influence policy making and implementation, and thus facilitates a more fragmented authoritarianism in the environment-society relationship. This fragmented authoritarianism framework supplies the space and power for both stabilization and change in the Chinese context, but it also encounters its intrinsic contradictions. In order to examine the prerequisites for building cosmopolitan solidarity across individual and social boundaries on a global scale to tackle the climate challenge, this paper will also discuss vulnerability, inequality and maladaptation by looking at cases of climate change policy implementation against the background of China's political philosophy.
Gisele Secco (Universidade Federal de Santa Maria, Brazil)
The interaction between diagrams and computers in the first proof of the Four-Color Theorem
ABSTRACT. The use of diagrams and the use of computers are two significant themes within the philosophy of mathematical practice. Although case studies concerning the former are abundant – from the notorious case of Euclidean geometry to the uses of diagrams within arithmetic, analysis, topology, knot theory, and even Frege's Begriffsschrift – the latter has received less attention in the field.
I show in my talk how the two themes can be investigated simultaneously via an analysis of the famous case of the Four-Color Theorem (4CT). Whenever the use of computers in mathematical practice is considered, the computer-assisted proof of the 4CT is mentioned. Philosophical discussion of the proof has centered mostly on Tymoczko’s argument for the introduction of experimentation in mathematics via 4CT – notably made in (Tymoczko 1979). (See Johannsen & Misfeldt 2016 for a recent version of this position.)
In previous work, I reviewed central leitmotifs in the rejoinders presented against Tymoczko's claims, arguing from a Wittgensteinian perspective that the 4CT is relevant to contemporary discussions on the use of computers in mathematics (especially in Author 2017). Aiming at a discussion of the criteria of identity for computer-assisted proofs through an examination of the various proofs of the 4CT, in my talk I will show the main lines of articulation between the more than 3000 diagrams and the computational machinery mobilized in the construction and verification of Appel and Haken's first version of the proof.
After presenting the way diagrams and computers participate in the proof, and dealing with the passage from topology to combinatorics operated in it, my primary strategy consists in projecting the methodological contribution recently suggested by De Toffoli – namely, the three criteria she proposes as tools for evaluating the effectiveness of mathematical notations (expressiveness, calculability, and transparency; cf. De Toffoli 2017) – onto the case of Appel and Haken's proof of the 4CT. In so doing, I will specify the ways in which the diagrams of this case study can be considered a perspicuous mathematical notation, and propose some questions regarding how this notation is related to the computational devices indispensable to the proof.
14:30
John Mumma (California State University San Bernardino, United States)
The Computational Effectiveness of Geometric Diagrams
ABSTRACT. Mathematical proofs are often presented with notation designed to express information about concepts the proofs are about. How is such mathematical notation to be understood, philosophically, in relation to the proofs they are used in?
A useful distinction in approaching the question is that between informational and computational equivalence, formulated in (Larkin and Simon, 1987). Informational equivalence holds between two representations if and only if "all of the information in the one is also inferable from the other, and vice versa." (ibid., p. 67) Computational equivalence holds between two representations if and only if "they are informationally equivalent and, in addition, any inference that can be drawn easily and quickly from the information given explicitly in the one can also be drawn easily and quickly from the information given explicitly in the other, and vice versa.'' (Ibid., p. 67). Applying this to the case of mathematical proofs, we can take informational equivalence to be determined by the content of proofs as revealed by a logical analysis, and computational equivalence to be determined by the notations used to express the content of proofs.
In what is perhaps the standard view, we need only consider mathematical proofs modulo informational equivalence in developing a philosophical account of them. The capacity of a notation to present a proof more clearly and effectively is a pragmatic matter, something perhaps for psychologists to consider, but not philosophers. An alternate view, in line with philosophical work attempting to illuminate mathematical practice, regards distinctions of computational nonequivalence as philosophically significant. On such a view, the structure of mathematical methods and theories is inextricably linked with the way the mind takes in and holds mathematical information.
I aim to make some small steps in elaborating the second view by applying some observations made by Wittgenstein in Remarks on the Foundations of Mathematics to the diagrammatic proofs of elementary geometry. A central challenge in elaborating the view is being precise about what, exactly, the effectiveness of effective mathematical notations amounts to. The notion of informational equivalence has the well-developed philosophical resources of logic to support it. No such philosophical resources exist for the notion of computational equivalence. The only option available for investigating it at present would seem to be a bottom-up, case study approach. Accordingly, one looks at cases that dramatically illustrate computationally nonequivalent notations and aims to articulate the features that account for the nonequivalence. Wittgenstein can be understood to be doing exactly this in section III of RFM. The cases that figure in his discussion are proofs of arithmetic identities like 27 + 16 = 43. He in particular contrasts calculations of such identities using Arabic numerals with derivations of them using a purely logical notation, and identifies features that recommend the former over the latter from the perspective of computational effectiveness. After presenting Wittgenstein's observations, I contrast diagrammatic and purely sentential proofs in elementary geometry and argue that analogous observations distinguish the former as computationally effective.
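A rough computational analogue of the contrast (my own toy illustration, not Wittgenstein's or the paper's example): the positional and the successor ("stroke") notations below are informationally equivalent representations of the same identity, but the work needed to extract 27 + 16 = 43 from them differs sharply.

```python
def add_arabic(m: int, n: int) -> int:
    """Positional (Arabic) notation: effort grows only with the number of digits."""
    return m + n

def to_successor(n: int) -> str:
    """The 'logical' notation: n as S(S(...S(0)...))."""
    return "0" if n == 0 else f"S({to_successor(n - 1)})"

def add_successor(m: str, n: str) -> str:
    """Addition by rewriting add(S(x), y) -> S(add(x, y)): effort grows with the value of m."""
    steps = 0
    while m != "0":
        m, n = m[2:-1], f"S({n})"   # peel one S off m, wrap one S around n
        steps += 1
    print(f"{steps} rewriting steps")
    return n

print(add_arabic(27, 16))                                 # 43, in one column-wise pass
result = add_successor(to_successor(27), to_successor(16))
print(result.count("S"))                                  # 43 again, but after 27 rewriting steps
```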
J. Larkin and H. Simon. Why a diagram is (sometimes) worth 10,000 words. Cognitive Science, 11:65–89, 1987.
ABSTRACT. In the postscript to “A Subjectivist’s Guide to Objective Chance”, Lewis describes a kind of “counterfeit chance”, which, while not genuine chance, is still “suitable for use by determinists” and “will do to serve the conversational needs of determinist gamblers” (Lewis 1986, 121). Recently, philosophers (Glynn 2010; Eagle 2011; Emery 2015; List and Pivato 2015) have urged that such chances are genuine, preferring the less pejorative “deterministic chance” to describe the objective probabilities invoked in a variety of applications, from humdrum coin flips to classical statistical mechanics. Briggs urges that Lewisians should in fact love deterministic chance, and she dismisses as a “non-starter” the complaint that deterministic chance is mere counterfeit chance, resembling real chances while in fact something else (Briggs 2015, 279).
I think Lewisians, and everyone else for that matter, should have a place in their hearts for counterfeit chance. Lewis called it counterfeit because he thought it was not genuine, metaphysical chance. Of course, whether we choose to name these probabilities “chances” is just a matter of convention, but whether we regard them as genuine or merely as a particular form of non-ontic, objective probability is surely a substantive metaphysical issue.
The cases I consider concern chance setups where the underlying physical model is understood to be fully deterministic. Roulette wheels, coin flips, dice rolls, and other games of chance are paradigmatic examples. In such cases, where does the probabilistic element come from? What does it represent? There is, after all, nothing genuinely chancy in nature governing the behavior of these things—we assume they operate deterministically. I claim that the probabilistic element is entirely imposed by us; it represents nothing in the system directly. One chooses, as it were, “random-looking” initial conditions of these systems or “reasonable” probability distributions over them and uses these to predict outcomes.
Allowing this degree of “subjectivity” seemingly threatens the claimed objectivity of ascriptions of probability in these contexts. How does one choose the right probability distribution over initial conditions? I show how such probabilities can be justified in two ways to make them objective probabilities. The first case depends on a particular probability distribution approximating the correct actual (non-probabilistic) distribution of initial conditions (or frequencies). The second case, the method of arbitrary functions, depends on nearly any choice of probability distribution giving the correct results. Both cases demonstrate how probabilities can be applied to describe and explain non-chancy phenomena.
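As an illustration of the second route, the method of arbitrary functions, here is a minimal simulation sketch (my own, under assumed parameter values, not drawn from the paper): a deterministic "wheel" whose outcome is fixed entirely by its initial speed nevertheless yields outcome frequencies of about one half under two quite different "reasonable" input distributions.

```python
# Minimal sketch of the method-of-arbitrary-functions idea (illustrative only;
# the wheel model and all parameter values are my own assumptions).
import numpy as np

rng = np.random.default_rng(0)

def outcome(speed, n_sectors=38, spin_time=10.0):
    # Fully deterministic dynamics: the final angle fixes the sector.
    angle = (speed * spin_time) % 1.0
    return int(angle * n_sectors) % 2  # 0 = "red", 1 = "black" (alternating sectors)

# Two rather different smooth distributions over initial speeds...
speeds_uniform = rng.uniform(5.0, 15.0, size=100_000)
speeds_normal = rng.normal(10.0, 2.0, size=100_000)

# ...give (approximately) the same outcome probability, close to 1/2.
print("P(red) under a uniform input distribution:",
      np.mean([outcome(s) == 0 for s in speeds_uniform]))
print("P(red) under a normal input distribution: ",
      np.mean([outcome(s) == 0 for s in speeds_normal]))
```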
References
Briggs, Rachael. “Why Lewisians Should Love Deterministic Chance.” In A Companion to David Lewis, edited by Barry Loewer, and Jonathan Schaffer, New York: John Wiley & Sons, 2015, 278–294.
Eagle, Antony. “Deterministic Chance.” Noûs 45: (2011) 269–299.
Emery, Nina. “Chance, Possibility, and Explanation.” British Journal for the Philosophy of Science 66, 1: (2015) 95–120.
Glynn, Luke. “Deterministic Chance.” British Journal for the Philosophy of Science 61: (2010) 51–80.
ABSTRACT. The purpose of this talk is to defend a pluralism of causality. One might expect that different types of causal relation are appropriate for the natural sciences and the humanities. However, it can be shown that even within physics different causal relations are necessary. A model is presented which distinguishes five types of causal relation.
The model distinguishes causal factors, which are neither necessary nor sufficient; three kinds of sufficient cause; and a necessary cause. All causal relations are irreflexive, asymmetric or non-symmetric, and the cause is earlier than the effect.
Causal Factors (CF): The Amazon has hundreds of tributaries. A change in the temperature of one of them is a causal factor for the temperature of the Amazon where it flows into the Atlantic. CF is not transitive, continuous or discrete.
Sufficient Cause 1 (CS1): This causal relation is non-symmetric and not transitive. Such situations occur in cases described by the general gas law, i.e. in thermodynamic cases at the phenomenological, descriptive macro-level.
Sufficient Cause 2 (CS2): According to classical mechanics and special relativity, the causal relation is asymmetric, transitive and continuous. This second important type of sufficient cause (CS2) arises in dependencies among events described by dynamical laws.
Sufficient Cause 3 (CS3): In a quantum jump the sufficient cause is not transitive. An energy input by a photon (A) can raise an electron to an excited state (B), and event B can lead to the emission of a photon when the electron jumps back (C); but A cannot produce C. Moreover, the causal relation is asymmetric.
Necessary Cause (CN): The events of the past light cone (special relativity) are necessary causes of those of the future light cone. The earliest members of an ancestry tree are necessary causes of the later members. CN is asymmetric and transitive. At the metalevel of scientific causal explanation, laws of nature (dynamical or statistical) are necessary “causes” of the explanandum.
Theories of causality that admit only one type of causal relation (for example CS2), such as those of Dowe (2000) and Salmon (1984, 1994), cannot be accepted, as the examples for CF, CS1 and CS3 above show.
The basic logic RMQ (Weingartner 2009) underlying the causal relations is a six-valued decidable propositional logic (three values for truth, three for falsity). It has two concepts of validity, material and strict, where all theorems of classical two-valued propositional logic are at least materially valid. The strictly valid theorems of RMQ meet criteria of relevance. RMQ includes a modal system with 14 modalities. The proposed model RMQC, a logic of five causal relations, is an extension of RMQ obtained by adding the five two-place operations CF, CN, CS1, CS2 and CS3 between propositional variables representing states of affairs or events.
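As a small illustration of the formal properties ascribed to these relations, the following sketch (my own, not part of the RMQC system) checks transitivity and asymmetry on finite extensions of the quantum-jump example for CS3 and an ancestry example for CN.

```python
# Illustrative sketch only: checking relational properties on finite examples.
# The encodings below are my own; RMQC itself is a 6-valued propositional logic.

def is_transitive(relation):
    return all((a, c) in relation
               for (a, b1) in relation for (b2, c) in relation if b1 == b2)

def is_asymmetric(relation):
    return all((b, a) not in relation for (a, b) in relation)

# CS3: photon absorption (A) causes the excited state (B), which causes
# photon emission (C), but A does not cause C.
cs3 = {("A", "B"), ("B", "C")}
print("CS3 transitive?", is_transitive(cs3), "| asymmetric?", is_asymmetric(cs3))

# CN: ancestry, where earlier members are necessary causes of later ones.
cn = {("grandparent", "parent"), ("parent", "child"), ("grandparent", "child")}
print("CN  transitive?", is_transitive(cn), "| asymmetric?", is_asymmetric(cn))
```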
References:
1. Dowe, P. (2000) Physical Causation. Cambridge: Cambridge U.P.
2. Salmon, W. (1984) Scientific Explanation and the Causal Structure of the World. Princeton: Princeton U.P.
3. Salmon, W. (1994) “Causality without Counterfactuals”, Philosophy of Science 61. 297-312.
4. Weingartner, P. (2009) “Matrix-Based Logic for Application in Physics”, Review of Symbolic Logic 2. 132-163.
5. Weingartner, P. (2006) “The Need of Pluralism of Causality”, Logic and Logical Philosophy 25,4. 461-498.
Formal and Informal Logic in the Lvov-Warsaw School
ABSTRACT. The Lvov-Warsaw School (LWS) was a group of scholars that included philosophers, logicians and psychologists. The School was founded by Kazimierz Twardowski, a student of Franz Brentano and a great teacher of philosophy. All members of the School were direct or indirect students of Twardowski. The two main centers of the School were Lvov and Warsaw, and its most fruitful period of development was the first four decades of the 20th century.
Representatives of the School analyzed similar problems but did not accept any common set of theses. The most important binder of the School was methodological. Members of the LWS pursued the postulates of precision of speech and justification of convictions, and they found the means of achieving clarity and justification in logical tools. However, the conception of logic in the LWS was broad. Twardowski and his students included in its scope not only the theory of inference and formal issues but also informal theories of cognitive operations, the problems of logical semiotics, and the methodology of the sciences and humanities.
The LWS had a strong formal branch whose results are widely known. Łukasiewicz's three-valued logic, his metalogic and his inquiries into the history of ancient logic, Leśniewski's systems (Protothetic, Mereology, Ontology), and Alfred Tarski's formal semantics and his definition of “truth” for formal languages are some standard examples of the formal achievements of the LWS.
However, partly under the influence of formal research and partly independently of it, problems nowadays included in so-called “informal logic” were also analyzed in the LWS. These problems include, among others: the theory of reasoning, including fallible and practical reasoning (Twardowski, Łukasiewicz, Kotarbiński, Czeżowski, Ajdukiewicz); the theory of argumentation and discussion (Twardowski, Witwicki, Czeżowski); and the methodology of the sciences, including the problem of induction and the discovery/justification distinction (Twardowski, Łukasiewicz, Kotarbiński).
In my paper, I will sketch both the formal and the informal achievements of the LWS. I will also show how formal and informal problems interpenetrated in the works of members of the School. Finally, I will demonstrate the contributions of these results to the general history of logic and philosophy.
Brożek, Anna, Friedrich Stadler & Jan Woleński (eds.) (2017). The Significance of the Lvov-Warsaw School in European Culture, Vienna Circle Yearbook No. 21. Wien: Springer.
Ajdukiewicz, Kazimierz (1958). The problem of the rationality of fallible methods of inference. [In:] Marian Przełęcki & Wójcicki Ryszard. Twenty-Five Years of Logical Methodology in Poland. Dordrecht & Warszawa 1977: D. Reidel Publishing Company & PWN – Polish Scientific Publishers, pp. 13-29.
Ajdukiewicz, Kazimierz (1965). Pragmatic Logic. Dordrecht-Boston & Warsaw 1974: D. Reidel Publishing Company & PWN.
Coniglione, Francesco, Roberto Poli & Jan Woleński (eds.) (2003), Polish Scientific Philosophy: The Lvov-Warsaw School. Amsterdam: Rodopi
Czeżowski, Tadeusz (1953). On discussion and discussing. [In:] [Czeżowski 2000], pp. 60-67.
Czeżowski, Tadeusz (1954). On logical culture. [In:] [Czeżowski 2000], pp. 68-75.
Czeżowski, Tadeusz (1963). The classification of reasoning and its consequence in the theory of science. [In:] [Czeżowski 2000], pp. 119-133.
Czeżowski, Tadeusz (2000). Knowledge, Science, and Values. A Program for Scientific Philosophy. Amsterdam: Rodopi.
Jadacki, Jacek (2009). Polish Analytical Philosophy, Warszawa: Wydawnictwo Naukowe Semper.
Kotarbiński, Tadeusz (1947). A survey of logical and semantic problems. [In:] [Kotarbiński 1961/1966], pp. 403-409.
Kotarbiński, Tadeusz (1961). Gnosiology. The Scientific Approach to the Theory of Knowledge. Wrocław & Oxford 1966: Ossolineum & Pergamon Press.
Łukasiewicz, Jan (1912). Creative elements in science. [In:] [Łukasiewicz 1970], pp. 1-15.
Łukasiewicz, Jan (1936). Logistic and philosophy. [In:] [Łukasiewicz 1970], pp. 218-235.
Łukasiewicz, Jan (1937). In defence of logistic. [In:] [Łukasiewicz 1970], pp. 236-249.
Łukasiewicz, Jan (1970). Selected works. Amsterdam & Warszawa: North-Holland Publishing Company & PWN.
Tarski, Alfred (1933). The concept of truth in formalized languages. [In:] Logic, Semantics, Metamathematics. Papers from 1923-1939. Indianapolis (IN) 1983: Hackett, pp. 152-278.
Twardowski, Kazimierz (1921). Symbolomania and pragmatophobia. [In:] [Twardowski 1999], pp. 261-270.
Twardowski, Kazimierz (1999). On Actions, Products and other Topics in Philosophy. Amsterdam: Rodopi.
Twardowski, Kazimierz (2014). On Prejudices, Judgments, and other Topics in Philosophy. Amsterdam: Rodopi.
Witwicki, Władysław (1938). Co to jest dyskusja i jak ją trzeba prowadzić [What is discussion and how to conduct it]. Lwów, Lwowska Biblioteczka Pedagogiczna.
Woleński, Jan (1989). Logic and Philosophy in the Lvov-Warsaw School. Dordrecht: D. Reidel Publishing Company.
ABSTRACT. Faced with the antirealist threat posed by the pessimistic meta-induction, one of the main arguments against traditional realism, new versions of realism cease to conceive of the truth of theories as a whole and instead address specific aspects of them. If it were possible to show that, through theoretical change, certain parts of past theories survive in our current theories, the realist could avoid the drastic consequences of the antirealist argument. This is exactly the direction taken by selective realisms, among which are entity realism (Hacking 1983; Cartwright 1983), structural realism (Worrall 1989; Ladyman 1998; French 1999) and dispositional realism (Chakravartty 1998; 2007; 2013; 2017). The latter was previously labeled semirealism, and it is the topic of the present paper.
According to Chakravartty, dispositional realism, unlike entity realism and structural realism, is more promising regarding the direction that contemporary scientific realism should take from now on. Indeed, semirealism is not only immune to the criticisms raised against both entity realism and structural realism; it also offers, among others, two additional virtues: (a) it allows the unification of entity realism and structural realism, two positions originally presented as incompatible views; and (b) it offers an integrated picture of three metaphysical concepts: causation, laws of nature and natural kinds. We will focus on these two explanatory virtues.
Chakravartty claims that the unifying and explanatory character of dispositional realism is the main argument in favor of the existence of dispositions. Dispositions are those causally relevant properties that make things behave the way they do under certain circumstances. Dispositions are essentially modal properties.
The first virtue is based on a distinction made between detection properties, on the one hand, and auxiliary properties, on the other. We examine and discuss the foundations of this distinction. We conclude that the alleged unification of entity realism and structural realism does not work: on the one hand, dispositional realism fails to overcome the objections formulated to entity realism and, on the other hand, it leaves out the most compelling feature of structural realism.
The second virtue attributed to dispositional realism, as we have just remarked, is the unification of three metaphysical concepts: causal powers, laws of nature and natural kinds. Chakravartty holds that this metaphysical unification contributes to the plausibility and economy of his dispositional version of realism. We critically analyse the epistemological function of each concept and conclude that the first two (causal powers and laws of nature) perform the satisfactory work that a compelling version of selective realism requires. But regarding natural kinds, we offer epistemological reasons to show that, in the framework of dispositional realism, they are dispensable. In other words, everything we explain through natural kinds can also be explained by applying only dispositions and causal laws. Thus the postulation of natural kinds, far from contributing to the conceptual economy of dispositional realism, produces ontological inflation instead.
We end with some final remarks about the possibility of defending a more deflationary conception of realism which can nevertheless satisfy the main demands of a scientific realist.
References
Cartwright, N. (1983). How the Laws of Physics Lie. Oxford: Clarendon.
Chakravartty, A. (1998). Semirealism. Studies in History and Philosophy of Science 29: 391-408.
Chakravartty, A. (2007). A Metaphysics for Scientific Realism: Knowing the Unobservable, Cambridge: Cambridge University Press.
Chakravartty, A. (2013). Dispositions for Scientific Realism. In R. Groff & J. Greco (eds.), Powers and Capacities in Philosophy: The New Aristotelianism, New York: Routledge: 113-127.
Chakravartty, A. (2017). Scientific Ontology: Integrating Naturalized Metaphysics and Voluntarist Epistemology, Oxford University Press.
French, S. (1999). Models and Mathematics in Physics: The role of group theory, in J. Butterfield and C. Pagonis (eds), From Physics to Philosophy, Cambridge University Press: 187-207.
Hacking, I. (1983). Representing and Intervening. Cambridge: Cambridge University Press.
Ladyman, J. (1998). What Is Structural Realism? Studies in History and Philosophy of Science 29: 409– 424.
Worrall, J. (1989). Structural Realism: The Best of Both Worlds? Dialectica 43: 99– 124.
Vassilis Livanios (Department of Classics and Philosophy, University of Cyprus, Cyprus)
Can categorical properties confer dispositions?
ABSTRACT. Gabriele Contessa (2015) argues that what he calls the Nomic Theory of Disposition Conferral (NTDC) is incoherent. According to NTDC, in each world in which they exist, properties confer specific dispositions on their bearers; yet which disposition a property confers on its bearers depends on what the (contingent) laws of nature happen to be. On the basis of this result, Contessa claims that only powers (that is, genuine dispositional properties irreducible to categorical bases) can confer dispositions on their bearers. In this paper, I examine a potential challenge to any realist view about categorical features which can be based on Contessa's conclusion. Let us first assume that NTDC is the only account of disposition conferral that fits the case of categorical features and, furthermore, is exclusively associated with them. Given this first assumption, Contessa's conclusion is tantamount to saying that categorical features cannot confer dispositions, because the only account of disposition conferral that is proper to them is incoherent. Let us further assume that all natural properties should confer dispositions in the minimal sense that some disposition ascriptions are true of their bearers. Given these assumptions, and provided that the conclusion of Contessa's argument is true, we have an argument against the existence of categorical features.
My aim in this paper is to undermine the above argument. To this end, and given that the aforementioned assumptions are considered relatively uncontroversial by a number of metaphysicians, I focus on Contessa's original argument and question two main claims of his argumentation: first, the claim that intuition can undeniably support an analogy between the role laws play (according to NTDC) in determining the dispositions that properties confer on their bearers and cases of mimicking in the literature on dispositions; and second, the claim that the intuition itself is supported by the fact that, in the context of NTDC, laws are extrinsic to objects. I further undermine the aforementioned analogy by presenting a possible interpretation of the role of laws on which, though extrinsic to objects, laws do not 'bring about' a scenario of disposition-mimicking. Given all these remarks, I cast doubt on the two most significant premises of Contessa's argument and, consequently, show that the conclusion he arrives at (that is, that only powers can confer dispositions) is controversial.
References (selected)
Armstrong, D.M. (1983). What is a Law of Nature? Cambridge: Cambridge University Press.
Baysan, U. (2017) Lawful Mimickers. Analysis 77(3), 488-494.
Bird, A. (2007). Nature's Metaphysics: Laws and Properties. Oxford: Clarendon Press.
Contessa, G. (2015). Only Powers Can Confer Dispositions. The Philosophical Quarterly 259, 160-176.
Cross, T. (2012). Goodbye, Humean Supervenience. In: Oxford Studies in Metaphysics, Vol. 7 by Karen Bennett and Dean W. Zimmerman (Eds.). Oxford: Oxford University Press, 129-153.
Langton, R. and Lewis, D. (1998). Defining 'Intrinsic'. Philosophy and Phenomenological Research 58(2), 333-345.
Livanios, V. (2017). Science in Metaphysics. Cham: Palgrave Macmillan-Springer Nature.
Mumford, S. (1998). Dispositions. New York: Oxford University Press.
Ott, W. (2009). Causation and Laws of Nature in Early Modern Philosophy. Oxford: Oxford University Press.
Values in Science: Ethical vs. Political Approaches
ABSTRACT. I take as my starting point the now widely accepted conclusion that doing science requires making value judgments. Philosophical work in value theory offers two very different ways of determining values: distinctly ethical and distinctly political approaches. These approaches often yield conflicting conclusions, since any meaningful commitment to democracy must involve deferring to the wishes of the public in at least some cases in which the public makes a normative error — i.e., reaches a conclusion that is ethically sub-optimal. Thus, it is important to consider whether the value judgments required by science should be grounded in ethical or political reasoning.
In the first part of the paper, I show that although this issue is rarely discussed explicitly, philosophers more commonly take an ethics-oriented approach to values-in-science questions. Douglas, for example, famously argues that scientists have the same moral responsibilities as the rest of us (ch. 4), and Elliott says that “the values influencing our research…should adequately represent fundamental ethical principles” (106). I show that the same can be seen even more clearly in discussions of specific value judgments — e.g. particular inductive risk decisions, discussions of the economic discount rate, and debates about the construction of QALYs. (The biggest exception to this is Kitcher. His work is, though, somewhat detached from much contemporary literature on values in science.)
STS scholars, on the other hand, much more commonly take a political approach. They, however, typically restrict themselves to thinking of scientists as political agents in a descriptive sense. They therefore rarely tie their discussions to work in normative political philosophy and political theory, which could be used to offer concrete recommendations to scientists about how they should navigate scientific value judgments, conceived politically.
In the second part of the paper, I try to more carefully clarify the difference between distinctly ethical and distinctly political approaches. Approaches rooted in ethics typically focus on substantive questions about which value judgements are correct or incorrect, or about which are well- versus poorly-supported by reasons. Approaches rooted in political philosophy typically work from a different set of concepts, setting aside substantive analysis of the values in question in favor of concerns with procedure, legitimacy, representation, accommodation for opposing viewpoints, accountability, and so forth.
I (in contrast to most philosophers) favor a more political approach to most values-in-science questions, and (moving beyond most STS scholars) believe that the tools of normative political philosophy and theory can tell scientists how they ought to navigate such cases. And so in the third part of the paper I consider what role the distinctly ethical arguments typical of philosophers can have, given a political approach to values in science. I conclude that they can serve at least two valuable roles. First, they can be seen as (mere) persuasive exercises: arguments that we might hope will move our fellow citizens to adopt a particular point of view on issues of value. Second, and more interestingly, liberal political philosophies since Rawls have relied on a distinction between reasonable and unreasonable pluralism: values that lie within the range of reasonable disagreement are owed a certain kind of respect and are eligible for adoption through democratic processes. Values outside that range are not. The substantive ethical arguments of philosophers, I argue, can help us see what lies within versus outside the range of reasonable disagreement. I conclude by offering suggestions about how philosophers’ (ethical) arguments can be modified to better fulfill these roles.
Douglas, H. (2009) Science, Policy, and the Value-Free Ideal. University of Pittsburgh Press.
Elliott, K. (2017) A Tapestry of Values. Oxford University Press.
Victor Hugo Pinto (Postgraduate Program in Education, Federal Fluminense University, Brazil)
The Problem of Scientific-Epistemological Racism and the Contributions of Southern Global Epistemologies in the Construction of Paradigmatic Transformations of the Philosophy of Science
ABSTRACT. This paper addresses the issue of scientific racism and the impact of epistemological boundaries on the construction of research. The objective is to understand the relevance of the epistemologies of the Global South to the paradigmatic transformations of the positivist models of scientific method that were in force until the second half of the 20th century. The motivation is that current demands force the field of scientific production to revise certain concepts in search of solutions to present contradictions, within a socio-political and economic scenario that divides the world along epistemological boundaries between the global North and South, boundaries that contribute significantly to the maintenance of scientific segregations whose repercussions are still felt in the methodologies used in the construction of scientific research. The methodology is bibliographical and seeks to problematize the distance between representation and the objective reality of the fact researched; from this point we understand scientific racism as a product of that distance. The paper draws on the thinking of Boaventura de Sousa Santos and Maria Paula Meneses to understand the scenario of new demands that contemporary society is bringing and the challenges that science faces on these issues. In addition, the contributions of Fritjof Capra and Walter Mignolo are used to understand the changes occurring in the processes of the construction of scientific knowledge. The results show that contemporary society faces serious challenges in the formation of a sustainable future, which is indispensable for the survival of the human species. Faced with the challenges of a complex society, in which factors and phenomena are interconnected, new demands for human survival emerge, and the old Cartesian and positivist models cannot sustain themselves and fail to give effective answers to this question. On the basis of these questions, the research seeks alternatives in the knowledge of the peoples of the Global South, who have different understandings of the relationship between human beings and nature and who can contribute significantly to crossing epistemological and scientific frontiers in favor of the construction of a science that plays an active role in shaping a sustainable society. As it deals with the influence of social transformations and their impacts on the philosophy of science, the paper addresses a theme relevant to the historical aspects of the philosophy of science in the contemporary world.
REFERENCES
CAPRA, Fritjof, LUISI, Pier Luigi. (2014). The systemic vision of Life: a unified vision and its philosophical, political, social and economic implications. São Paulo: Cultrix editions.
CAPRA, Fritjof. (2015). The Turning Point. São Paulo: Cultrix editions.
GINZBURG, Carlo (2001). Wooden Eyes: Nine Reflections on Distance. São Paulo: editions Companhia das Letras.
MIGNOLO, Walter D. (2008) Epistemic disobedience: the decolonial option and the meaning of identity in politics. UFF Research books - Dossier: Literature, language and identity, no 34, p. 287-324.
SANTOS, Boaventura de Sousa; MENESES, Maria Paula. (Orgs.). (2010). Epistemologies of the South. São Paulo; Editora Cortez.
SANTOS, Boaventura de Sousa (2007). Beyond abyssal thinking: from global lines to an ecology of knowledge. Novos Estudos CEBRAP, no. 79. Available at http://www.scielo.br/pdf/nec/n79/04.pdf.
____________________. (2002). For a sociology of absences and a sociology of emergencies. Critical Journal of Social Sciences. Vol. 63, p. 237-280. Available at: http://www.boaventuradesousasantos.pt/media/pdfs/Sociologia_das_ausencias_RCCS63.PDF.
____________________. (2008). A discourse on the sciences. 5 ed. São Paulo: Editions Cortez.
Chia-Hua Lin (University of South Carolina, Taiwan)
The Increasing Power of Chomsky Hierarchy: A Case Study of Formal Language Theory Used in Cognitive Biology
ABSTRACT. Interdisciplinary research programs have become a significant force in knowledge generation. An example is cognitive biology, one strand of which emerges from applying the Chomsky hierarchy, a classification of formal languages, in experimental psychology and neurolinguistics. This paper offers a philosophical analysis of how cross-disciplinary transfer and interdisciplinary development have augmented the roles that the Chomsky hierarchy plays.
Originating in linguistics as a tool for studying natural languages, the Chomsky hierarchy was constructed to classify mathematically defined languages based on the generative power of their grammars and the computing power of the abstract machines (i.e., automata) required to parse their expressions.[1][2] The construction of such a hierarchy drew mathematically inclined theorists to the study of formal languages, which eventually became a crucial theoretical component of computer science.[6] Recently, biologists have started applying the Chomsky hierarchy in the design of artificial grammar learning experiments for probing the cognitive infrastructure of human and nonhuman animals.[3][4][5]
Using the applications of the Chomsky hierarchy in the aforementioned disciplines as examples, this paper analyzes three different roles it plays as a classification system: explanatory, engineering, and explorative. The paper then argues that, unlike in linguistics or computer science, scientists in cognitive biology make use of all three of these roles in their applications of the hierarchy.
For instance, in linguistics, Chomsky hierarchy is typically used for an explanatory purpose, e.g., providing explanations for particular linguistic phenomena, such as ambiguity. In computer science, it is used for an engineering purpose, e.g., developers adhere to Chomsky hierarchy in their design of programming languages and compilers. Lastly, in neurolinguistics, it is used for an explorative purpose, e.g., when scientists use it to locate neural substrates of linguistic ability.
In cognitive biology, scientists (i) design experiments, (ii) explain the differences in cognitive infrastructure between humans and nonhuman subjects based on the results of the experiments, and (iii) investigate the neural substrate for such differences, all according to Chomsky hierarchy. This paper concludes by suggesting that the augmentation of the knowledge-producing roles of Chomsky hierarchy is a result of two sources: its cross-disciplinary transfer, especially from linguistics and computer science to biology, and the interdisciplinary development of it, particularly between experimental psychology and neurolinguistics.
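For concreteness, the following sketch (my own illustration, with hypothetical helper names) generates and recognizes strings from the two artificial grammars standardly used in this literature: (AB)^n, which a finite-state grammar can handle, and A^nB^n, which requires at least context-free resources.

```python
# Illustrative sketch: the (AB)^n vs A^n B^n contrast used in artificial grammar
# learning experiments (e.g. Fitch & Hauser 2004). Helper names are my own.
import re

def generate_ab_n(n: int) -> str:
    """(AB)^n: alternating pairs, generable by a finite-state (regular) grammar."""
    return "AB" * n

def generate_a_n_b_n(n: int) -> str:
    """A^n B^n: matched counts, beyond the power of any finite-state grammar."""
    return "A" * n + "B" * n

def is_ab_n(s: str) -> bool:
    return re.fullmatch(r"(AB)+", s) is not None       # a regular expression suffices

def is_a_n_b_n(s: str) -> bool:
    m = re.fullmatch(r"(A+)(B+)", s)
    return m is not None and len(m.group(1)) == len(m.group(2))  # counting required

print(generate_ab_n(3), is_ab_n("ABABAB"), is_ab_n("AAABBB"))           # ABABAB True False
print(generate_a_n_b_n(3), is_a_n_b_n("AAABBB"), is_a_n_b_n("ABABAB"))  # AAABBB True False
```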
[1] Chomsky, Noam. 1956. “Three Models for the Description of Language.” IRE Transactions on Information Theory 2.
[2] ——————— 1959. “On Certain Formal Properties of Grammars.” Information and Control 2: 137–67.
[3] Fitch, Tecumseh, and Marc Hauser. 2004. “Computational Constraints on Syntactic Processing in a Nonhuman Primate.” Science 303 (January): 377–80.
[4] Friederici, Angela D., Jörg Bahlmann, Stefan Heim, Ricarda I Schubotz, and Alfred Anwander. 2006. “The Brain Differentiates Human and Non-Human Grammars: Functional Localization and Structural Connectivity.” Proceedings of the National Academy of Sciences of the United States of America 103 (7): 2458–63.
[5] Gentner, Timothy, Kimberly Fenn, Daniel Margoliash, and Howard Nusbaum. 2006. “Recursive Syntactic Pattern Learning by Songbirds.” Nature 440 (7088): 1204–7.
[6] Ginsburg, Seymour. 1980. “Methods For Specifying Families Of Formal Languages — Past-present-future.” In Formal Language Theory, 1–22.
ABSTRACT. The common view is that the sciences emerged largely from the 17th century onwards in Europe (e.g. Cromer 1995; Logan 1986). That standard view of science has been maintained because, in studying the question of the origin and foundations of the sciences, researchers have commonly focused on proximate factors (or causes) as explanations, including the printing press, the work of Bacon, Galileo and Newton, and the like. Some scientists and philosophers have looked deeper than the proximate factors since the 17th century by attempting to identify the underlying foundational factors that have allowed us to develop the sciences in the first place. Leading cognitive scientists such as Daniel Dennett (1991) argue that a massive reprogramming of the mind made the sciences possible. Leading evolutionary psychologists such as Pinker (2010) and Cosmides and Tooby (2013) have stressed features of the mind such as our ability to use our folk physics to deal with the world around us. I, however, provide a methodological account of the fundamental factors that have helped make the sciences possible, focusing in particular on our biological and cognitive capacities for systematic observation, problem-solving, basic experimentation, etc., which we largely developed before the Paleolithic.
Selected references
Atran, Scott; Douglas Medin. 2008. Introduction. In: The Native Mind and the Cultural Construction of Nature. Cambridge, Massachusetts: MIT Press.
Carruthers, P. 2002. The roots of scientific reasoning: infancy, modularity and the art of tracking. In: The Cognitive Basis of Science. Cambridge University Press.
Cosmides, L; J. Tooby. Evolutionary psychology: new perspectives on cognition and motivation. Annu Rev Psychol, 64 (2013), pp. 201-229.
Dennett, D. 1991. Consciousness Explained. London: Allen Lane.
Giere, R. 1987. The cognitive study of science. In N. Nersessian (Ed.), The process of science. Springer.
Logan, R. 1986. The alphabet effect. New York: Morrow.
Mithen, S. 2002. Human evolution and the cognitive basis of science. In: The Cognitive Basis of Science. Cambridge University Press.
Pinker, S. The cognitive niche: coevolution of intelligence, sociality, and language. Proc Natl Acad Sci, 107 (2010), pp. 8993-8999.
Toth, N. et al. 1993. Pan the tool maker: investigations into the stone tool-making and tool-using capabilities of a bonobo (Pan paniscus). Journal of Archaeological Science.
Zvelebil, M. 1984. Clues to recent human evolution from specialized technology. Nature.
An Explanatory View of Individuating Natural Kinds
ABSTRACT. I argue that the boundaries of homeostatic property cluster kinds (the HPC thesis; Boyd 1989) can be drawn on the basis of the explanatory power of the producing and maintaining mechanisms. The HPC thesis has become the standard view of natural kinds in the special sciences. According to the thesis, natural kinds consist of property clusters produced or maintained by homeostatic mechanisms. However, the HPC thesis has been criticised for not offering non-conventional ways to draw the boundaries of mechanisms (Craver 2009). I argue that a pluralistic view of natural kinds, based on the explanatory power of the homeostatic mechanisms, dissolves these problems. In short, mechanistic models with robust explanatory power are grounded in stable kinds, which in turn enable rich inductive generalizations. However, when different classifications based on mechanisms target the same explanandum, they may involve explanatory trade-offs. Hence, the causal structure of the world can be sliced into natural kinds in different ways. In defence of my view, I provide examples of psychiatric classifications and analyse their explanatory power on the basis of the contrastive-counterfactual theory of explanation (Ylikoski 2001, Woodward 2003). According to that theory, explanations come with different explanatory virtues that can be measured by the explanation's ability to answer relevant counterfactual what-if questions.
REFERENCES
Craver, Carl 2009: Mechanisms and Natural Kinds. Philosophical Psychology, 22, 575-594.
Boyd, Richard 1989: Kinds as the “Workmanship of Men”, Realism, Constructivism, and Natural Kinds. In Julian Nida-Rümelin (ed.): Rationalität, Realismus, Revision. Berlin: Walter de Gruyter.
Woodward, James 2003: Making Things Happen. A Theory of Causal Explanation. Oxford: Oxford University Press.
Ylikoski, Petri 2001: Understanding Interests and Causal Explanation. University of Helsinki. http://ethesis.helsinki.fi/julkaisut/val/kayta/vk/ylikoski
Leen De Vreese (Centre for Logic and Philosophy of Science, Ghent University (UGent), Belgium)
Risk factors, explanation and scientific understanding
ABSTRACT. The notion of "risk factor" is omnipresent in contemporary medical research, in medical practice (e.g. prevention campaigns) and in lay understanding of health and disease. This is a recent phenomenon, in the sense that it started in the 1950s. During the second half of the 20th century and the first decade of the 21st century, talk in terms of risk and risk factors has become ever more pervasive. Nevertheless, the work of medical scientists and sociologists of medicine shows that there is no consensus about how the term is best used. In general, four different meanings of the notion "risk factor" can be discerned in the literature:
- Risk factor₀ = any factor associated with the development of a given disease.
- Risk factor₁ = a risk factor₀ considered to be a cause of the disease.
- Risk factor₂ = a risk factor₀ of which it is not known whether it is a cause of the disease or not.
- Risk factor₃ = a risk factor₀ thought not to be a cause of the disease.
Distinguishing these meanings is important because the value of risk factor knowledge may differ depending on the kind of risk factor involved. In my talk I will use this fourfold distinction as a basis for my analysis of whether and how risk factors can explain a disease and whether and how they provide scientific understanding. Given that causal factors are generally taken to have explanatory power, it seems uncontroversial to claim that type 1 risk factors explain and thereby provide scientific understanding. The interesting question is whether and how this extends to type 2 and 3 risk factors. Do they explain? If so, in what sense? And what do they explain? Additionally, do they provide scientific understanding (with or without explanation)? And again: if so, in what sense?
As a starting point for my analysis, I take seriously the possibility that non-causal risk factors somehow explain. However, this implies that I also need an account of what makes non-causal risk factors explanatorily powerful without being causal. I will try to provide one by shifting my attention away from (causally) explaining the onset of a disease to explaining differences in chances. Understanding why person a has a higher chance of getting breast cancer than person b may require knowledge of probabilistic dependency relations in the world, without these relations being causal. I will explore whether taking this route helps us get a grip on whether and how non-causal risk factors can explain and/or provide scientific understanding.
ABSTRACT. Mysterianism has become widespread in the philosophy of language, mind and science since the 1970s. Noam Chomsky (1976) used the term "impenetrable mysteries", Jerry Fodor (1983) spoke of "epistemic boundedness", and Colin McGinn (1989) introduced "cognitive closure". Mysterianism is an epistemic stance claiming that some of the problems science deals with are beyond human cognitive capacities and therefore will never be solved. In my paper I will try to show that mysterianism is not a form of scepticism but a display of epistemic defeatism. There are three main problems with mysterian arguments. Firstly, mysterians' argumentative strategies are defective because they rest on an appeal to ignorance. Secondly, there may be boundaries to human knowledge, but they cannot be the subject of a kind of philosophical futurology; they will be exposed by normal scientific practice. Thirdly, and most importantly, mysterianism is inconsistent with the division of cognitive labour and ignores the social aspect of empirical science. The emergence of language enabled cooperation in the area of knowledge, and therefore most contemporary scientific theories cannot be grasped by a single mind. If we distinguish between individual and collective rationality, the mysterians' concerns about the fundamental impossibility of explaining certain phenomena will prove to be unfounded.
Chomsky, Noam. ‘Problems and Mysteries in the Study of Human Language’. In Language in Focus: Foundations, Methods and Systems - Essays in Memory of Yehoshua Bar-Hillel, edited by Asa Kasher, 281–357. Dordrecht: D. Reidel Publishing Company, 1976.
Dennett, Daniel C. From Bacteria to Bach and Back: The Evolution of Minds. New York: W. W. Norton & Company, 2017.
Flanagan, Owen J. The Science of the Mind. 2nd ed. Cambridge: MIT Press, 1991.
Fodor, Jerry A. The Modularity of Mind: An Essay on Faculty Psychology. Cambridge: MIT Press, 1983.
Horgan, John. The End of Science: Facing the Limits of Knowledge in the Twilight of the Scientific Age. New York: Basic Books, 2015.
Kitcher, Philip. The Advancement of Science: Science Without Legend, Objectivity Without Illusions. Oxford: Oxford University Press, 1993.
Kriegel, Uriah. ‘The New Mysterianism and the Thesis of Cognitive Closure’. Acta Analytica 18, no. 30–31 (2003): 177–91.
McGinn, Colin. ‘Can We Solve the Mind-Body Problem?’ Mind 98, no. 391 (1989): 349–66.
Vlerick, Michael, and Maarten Boudry. ‘Psychological Closure Does Not Entail Cognitive Closure’. Dialectica 71, no. 1 (2017): 101–15.
Walton, Douglas. ‘The Appeal to Ignorance, or Argumentum Ad Ignorantiam’. Argumentation 13, no. 4 (1999): 367–77.
An empirical challenge for Scientific Pluralism – Alternatives or Integration?
ABSTRACT. Scientific pluralism has become an increasingly popular position in the philosophy of science, and in the philosophy of biology in particular. One notion shared among scientific pluralists is that some or all natural phenomena require more than one theory, explanation or method to be fully understood. A pluralist claim might be metaphysical, epistemological or both. There is yet another distinction within pluralist positions which is often overlooked. Some pluralists argue that several theories or explanations should be integrated in order to understand particular phenomena (e.g. Mitchell, 2002). Other pluralists rather treat different theories and explanations as alternatives (e.g. Kellert, Longino and Waters, 2006). One question remains: does this distinction track the "nature" of the respective phenomena? And, subsequently: are there genuine cases of "alternative" or "integrative" pluralism?
In this talk I challenge this perspective and argue that the distinction between alternatives and integration cannot be upheld. Using "extended inheritance" as a case study, I show how research traditions, rather than the nature of specific phenomena, decide whether theories or explanations are treated as alternatives or as integratable. In addition, I emphasize that heterogeneity within research fields is often neglected with respect to this question. First, I will introduce the case of small RNA inheritance and propose a pluralist interpretation: small RNAs are a class of biomolecules that are responsive to environmental stimuli and can transmit this information to subsequent generations. It was recently argued that they operate through a use/disuse paradigm (Veigl, 2017), thus presenting an instance of Lamarckian inheritance. I will advocate pluralism about theories of inheritance. Subsequently, I will ask how to relate Lamarckism and (Neo-)Darwinism: are they alternatives, or are they integratable?
To answer this question I will introduce an actor-based model for the emergence and decrease of plurality within a scientific field. For the case of "extended inheritance" I point to asymmetries within the research field: proponents of Neo-Darwinism are "singularists", as they accept only one theory. Defenders of "extended inheritance" accept Neo-Darwinism along with a Lamarckian theory, and are thus "dualists". Although defenders of "extended inheritance" work with more than one theory and thus create plurality within their research field, they do not have a clear (epistemological or metaphysical) attitude towards this state of plurality. In conclusion, this talk will raise several problems for scientific pluralism: integration vs. alternatives, singularism vs. dualism, and plurality vs. pluralism. I will provide a toolbox for addressing these issues and assess the trajectories of these problems within the respective scientific disciplines. In so doing, I aim at developing an approach to scientific pluralism which is responsive to trends within research fields.
Literature:
Kellert, S. H., Longino, H. E., & Waters, C. K. (Eds.). (2006). Scientific Pluralism (Vol. 19). University of Minnesota Press.
Mitchell, S. D. (2003). Biological complexity and integrative pluralism. Cambridge University Press.
Veigl, S. J. (2017). Use/disuse paradigms are ubiquitous concepts in characterizing the process of inheritance. RNA biology, 14(12), 1700-1704.
ABSTRACT. Any serious discussion of policy—say of further and higher education—is unlikely to progress far without someone asking, ‘What do you mean by…?’ or saying, ‘We need some definitions for our report’. Indeed, without some degree of agreement about what an item under discussion is, the discussion is unlikely to be productive. It will miss issues—or fudge them. Educational policy and organisation necessarily utilise ‘levels’—higher education is ‘higher’ than further education, level 4 is ‘higher’ than level 3, etc. But there are no litmus tests of levels. Instead, open-textured language is used to ‘define’ them. Sometimes that language reflects a consensus amongst those who write or speak the words and those who read or hear them. Sometimes the words obscure deep disagreements. Sometimes disagreements emerge over time as social changes erode or shake the previous consensus. Failure to appreciate such difficulties undermines the utility of much educational debate, whether amongst policy-makers or those who teach. Too often will participants ‘talk past each other’. Too often will they fail to identify and engage with the real issues. In this context, the paper will explore the following:
1. the problems of once-and-for-all definition of evaluative terms, using examples from the work of W. B. Gallie, J. L. Austin, Ludwig Wittgenstein, Ronald Dworkin, Daniel Dennett and David Miller;
2. the Humean insight that reason is ever ‘the slave of the passions’, and modern developments in neuro-psychology and neuro-philosophy that adopt a two-stage approach to the process of evaluation, distinguishing rapid and automatic reasoning from reflective (and often ex post justificatory) reasoning (sources here include the work of Jonathan Haidt, Thomas Scanlon and Daniel Kahneman);
3. the notion of ‘lexical priming’ through shared experience;
4. the distinction between words that are necessarily clear (e.g. triangle) and words that are more open-textured (e.g. triangular, reasonable, analytical);
5. the UK ‘official’ definitions of levels;
6. the notion of ‘vocationality’ as an indicator of that which is merely ‘further’ rather than ‘higher’;
7. Bloom’s taxonomy, and ‘soft’ and ‘hard’ skills;
8. learning by doing.
ABSTRACT. This paper analyzes social reality by taking the enterprise as an example. Since the enterprise is a social reality that exists in a collective rather than an individual way, the analysis starts from a controversial issue: whether one can accept the group as a social reality. The point at issue is, in effect, the argument between methodological individualism and methodological holism.
Historically, the Old Institutional Economics (OIE), represented for example by Thorstein Veblen, strongly advocated holism. However, when the New Institutional Economics (NIE) emerged, it turned to individualism. Malcolm Rutherford concludes that “just as holism is the professed methodology of the OIE, so individualism is the professed methodology of the NIE”. [1] This shift in the methodology of economics illustrates that methodological individualism holds a strong position in the Western social sciences, from economics to philosophy and sociology.
In theory, methodological individualism can claim superiority on the basis of a hypothesis that seems irrefutable, namely that only individuals, not groups, have intentions. Methodological individualism thus intends to reduce “collective intentionality” to “individual intentionality” plus something else. However, this hypothesis has been rejected by a number of philosophers, such as John R. Searle and Raimo Tuomela. Searle denied the possibility that collective intentionality can be reduced to individual intentionality, and explained collective intentionality as “intentionality in the form of the first person plural as much as we can have it in the form of the first person singular”. [2] Tuomela argued that “many social and collective properties and notions are collectively man-made”, and that “sociality is in many cases created through collective acceptance”. [3, p. 123] According to him, “we-mode collective commitment may be based on subjective commitment (involving only personal normative thoughts) and beliefs about others, or it can be interpersonal commitment (involving appropriate interpersonal norms or normative thoughts), or it can be objective commitment (viz. commitment in the public space based objective norms, epistemically available to anyone).” [3, p. 133]
Searle and Tuomela have contributed greatly to the study of social reality, but their focus is on the relationship between exchange and intentionality. This is why Searle's example is money and Tuomela's is the squirrel pelt, which once played the role of money in Finland. Turning to the interpersonal relationships involved in production, this paper takes the enterprise, the cell of modern production, as its example.
In general, this paper holds that the enterprise is characterized as a social reality in three respects. First, the institutional reality of contract: various communities, such as the enterprise, the family, and the state, all have an institutional character. As far as the enterprise is concerned, it is widely acknowledged in economics to be a nexus of contracts. Accordingly, the enterprise is basically the institutional reality of contract. Second, the reality of role structure: the enterprise is a collective consisting of individuals (human beings). When an individual becomes a member of the enterprise collective, he or she is allocated to a specific post with the relevant rights and obligations. The individual thereby takes on a special “role” in the community, and, in this sense, the enterprise is necessarily a “reality of role structure”. Third, the reality of physical facilities: the traditional approach to the analysis of reality focuses on consciousness, language, and their relationship, rather than on “material facilities”, the “material basis”, and “material conditions”. However, without physical office and production facilities, such as factory buildings, machines and equipment, the enterprise could not carry out material production.
[1] Malcolm Rutherford. 1996. Institutions in Economics: The Old and the New Institutionalism. Cambridge University Press, p. 43.
[2] John R. Searle. 1998. “Social ontology and the philosophy of society”. Analyse & Kritik, (20): 143-158.
[3] Raimo Tuomela. 2003. “Collective acceptance, social institutions, and social reality.” American Journal of Economics and Sociology, 62 (1): 123-165.
ABSTRACT. A chemical reaction is any interaction between molecules, ions or radicals in which chemical bonds are formed or broken, giving rise to new molecules. The initial molecules are called “reactants”, and the molecules produced are called “products”. Most reactions occur in solution; hence the characteristics of the solvent molecules and their interactions with the other participating molecules should be taken into account.
Chemical reactions are directed towards the production of a higher relative concentration of the most stable species. It is possible to infer which species is the most stable by comparing the strengths of the bonds broken and formed, and the energies associated with them. Reactions are represented by means of energy diagrams or reaction profiles, in which the change of potential energy during the progress of the reaction is drawn. The “reaction coordinate” represents the degree of progress by which the reactants become products.
Given the standard representation of reactions, it is often considered that they occur in a causal way. Since chemical reaction processes are traditionally conceived within a causal framework, in which the relation involved in chemical reactions is understood as causality, the link between the species is interpreted as successive. Such processes are viewed as if, first, the reagents interact and, after a while, they cause the appearance of the products. This simplification is common in the lexicon of chemists as well as in textbooks. But if we intend to address the issue philosophically, asking about the very nature of chemical reactions, we can ask for the reasons for maintaining a causal picture beyond the contexts of teaching and professional practice.
In opposition to the widely accepted causal interpretation, in this paper we will argue that chemical transformations can be more appropriately elucidated within a framework rooted in the category of reciprocal action, inspired by the Kantian notion. While causality is marked by succession, reciprocal action must be interpreted in terms of simultaneity. When the mechanisms involved in chemical reactions are analysed, it is necessary to take into account the interactions with the solvent, the formation of reaction intermediates, the occurrence of parallel and simultaneous reactions, the formation of dynamic equilibria, and so on. It is also important to bear in mind that all participants (no matter how low their concentration) are relevant to the kinetics and thermodynamics associated with the reaction. Our main purpose is to discuss whether the Kantian category of reciprocal action is philosophically fertile for accounting for the nature of chemical reactions.
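To make the simultaneity point vivid, here is a minimal numerical sketch (my own illustration, with assumed rate constants, not taken from the paper): in a reversible reaction A ⇌ B, the forward and reverse steps are applied at every instant, so reactant and product concentrations co-determine one another rather than standing in a simple before-after relation, and the system settles into a dynamic equilibrium in which both reactions continue at equal rates.

```python
# Illustrative sketch only: Euler integration of a reversible reaction A <=> B.
# Rate constants, initial concentrations and step size are assumed values.

k_f, k_r = 1.0, 0.5        # assumed forward and reverse rate constants
A, B = 1.0, 0.0            # assumed initial concentrations (arbitrary units)
dt = 0.001                 # time step

for _ in range(10_000):
    rate_forward = k_f * A  # A -> B
    rate_reverse = k_r * B  # B -> A, proceeding at the same time
    A += (rate_reverse - rate_forward) * dt
    B += (rate_forward - rate_reverse) * dt

# Dynamic equilibrium: both reactions still run, at equal rates, and the
# concentration ratio approaches k_f / k_r = 2.
print(f"A = {A:.3f}, B = {B:.3f}, B/A = {B / A:.2f}")
```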
14:30
Myron A Penner (Trinity Western University, Canada) Amanda Nichols (Oklahoma Christian University, United States)
Since the beginning of modern chemistry in the 19th century, chemists have represented molecules with various figures and shapes. Towards the latter half of the 19th century, chemists moved from representing molecules as two-dimensional planar objects to three-dimensional objects with a particular orientation in space. While models of the chemical bond have evolved, many chemists still think about molecular structures in realist terms. That is, many chemists think that molecular models, as depicted by contemporary chemistry, are true or approximately true representations of molecular structures (Hendry, 2012). However, some argue that quantum field theory provides a powerful objection to realism about molecular structures. In our paper we set out to do two things.
First, we outline the basic contours of the standard chemical theory of molecular structures, or “structure theory.” Structure theory is a basic framework for understanding molecules, their constituent parts, and the nature of the chemical bond, and provides a robust theoretical basis for making predictions and explaining disparate phenomena. We go on to explore some of the heavy lifting done by structure theory, including applications in molecular symmetry and molecular orbital theory. We conclude this section by arguing that structure theory and its applications provide a strong, empirically informed philosophical argument for “molecular realism” (i.e. realism about molecular structures).
Second, we consider an objection to molecular realism based on quantum theory. Quantum theory recognizes the wave-particle duality of matter and provides a framework for describing how particle energy is quantized, that is, how particles absorb and release energy in discrete amounts. The Schrödinger equation in quantum theory describes the wavefunction of a system, including the particles within it. Chemists tend to adopt the Born interpretation, according to which the squared modulus of the wavefunction is interpreted as a probability density. Philosophers and scientists have considered the implications of quantum theory for realism. For example, Richard Dawid (2018) notes, “In a number of ways, high-energy physics has further eroded what had survived of the intuitive notion of an ontological object that was badly damaged already by quantum physics and quantum field theory.” Similarly, Hasok Chang (2016) observes that while “numerous experimental interventions rely on the concept of orbitals…[they] have no reality if we take quantum mechanics literally.” However, we argue that uncertainties about orbitals and the precise nature of subatomic particles do not undermine the plausibility of molecular realism. More specifically, we argue that our current understanding of molecular symmetry provides strong evidence for molecular realism that is able to withstand objections based on quantum theory.
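For reference, the standard textbook forms of the equations mentioned here are (in LaTeX; this is a reminder of well-known formulas, not material taken from the paper):

```latex
% Time-independent Schr\"odinger equation for a single particle of mass m
% in a potential V, and the Born interpretation of the wavefunction.
\[
  \hat{H}\,\psi(\mathbf{r}) = E\,\psi(\mathbf{r}),
  \qquad
  \hat{H} = -\frac{\hbar^{2}}{2m}\,\nabla^{2} + V(\mathbf{r}),
\]
\[
  P(\mathbf{r} \in \Omega) \;=\; \int_{\Omega} \lvert \psi(\mathbf{r}) \rvert^{2}\,\mathrm{d}^{3}r
  \qquad \text{(the squared modulus } \lvert\psi\rvert^{2} \text{ as a probability density).}
\]
```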
References
Hasok Chang, “Scientific Realism and Chemistry”, in Essays in the Philosophy of Chemistry, ed. Eric Scerri and Grant Fisher, Oxford: Oxford University Press, 2016, 234-252.
Richard Dawid, “Scientific realism and high-energy physics”, in The Routledge Handbook of Scientific Realism, ed. Juha Saatsi, New York: Routledge, 2018, 279-290.
Robin Findlay Hendry, “The Chemical Bond”, in Philosophy of Chemistry, ed. Andrea I. Woody, Robin Findlay Hendry and Paul Needham, Oxford: Elsevier, 2012, 293-307.
Logical Approaches to Vagueness and Sorites Paradoxes
ABSTRACT. In this talk, I will consider some logical approaches to the problem of vagueness and to the sorites paradoxes, and will analyse their advantages and disadvantages. In this connection, I will outline the main characteristics of subvaluationism, supervaluationism, fuzzy logic, relevant logic and fuzzy relevant logic, and will point out which of them, in my view, is the most appropriate for clarifying the above problem and for solving the sorites paradoxes.
The analysis will begin with consideration of some paraconsistent and paracomplete approaches, namely subvaluationism and supervaluationism. I will stress their inadequacy, in particular the discrepancy between the preliminary considerations on the problem of vagueness that these logics attempt to address and their desideratum of retaining most aspects of classical semantics, which in their case does not do a good job. At first sight, it seems that subvaluationism and supervaluationism would offer good solutions for vague cases, since they deny the principle of bivalence and assess vague propositions as having both truth values (true and false), in the case of subvaluationism, or as lacking a truth value, in the case of supervaluationism. However, despite the failure of bivalence, these logics retain the law of non-contradiction and the law of excluded middle respectively; conjunction and, respectively, disjunction are not truth-functional: they are sometimes truth-functional and sometimes they are not. For these reasons, as Hyde shows [1, pp. 73-95], subvaluationism is only weakly paraconsistent and supervaluationism is only weakly paracomplete. In my view, these two logics rely on extra-logical prerequisites without giving a clear mechanism for assessing formulas in the different cases, and they propose informal arguments to solve a formal issue (the sorites paradox) without clear principles and rules. I will give arguments for this thesis.
After that, the focus of my presentation will turn to the intuitions accommodated by fuzzy logic, relevant logic (which is strongly paraconsistent and paracomplete) and Priest’s fuzzy relevant logic [4], as well as to their advantages and shortcomings.
On the whole, I will stress the virtues of the latter two logics and consider which interpretations of their terms are sufficient to accommodate vague predicates and propositions.
I will also point out the inadequacy of the two forms of the sorites paradox: as a sequence of applications of modus ponens, and as mathematical induction. This inadequacy is mainly due to the mixing of different sorts of notions, different kinds of dependence and different levels, which I will try to analyse.
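Schematically, and purely for reference (this presentation of the two forms is standard and is not taken from the abstract): let F be a vague predicate (‘is a heap’, say) and a_1, …, a_n a sorites series of cases, each differing only negligibly from its neighbour.

Modus ponens form: F(a_1); F(a_1) → F(a_2); F(a_2) → F(a_3); … ; therefore F(a_n).

Induction form: F(a_1); for every i, F(a_i) → F(a_(i+1)); therefore, for every i, F(a_i).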
References:
1. Hyde, D. (2008). Vagueness, Logic and Ontology. Burlington, VT: Ashgate.
2. Keefe, R. (2003). Theories of Vagueness. Cambridge: Cambridge University Press.
3. Paoli, F. (2003). A Really Fuzzy Approach to the Sorites Paradox. Synthese 134: 363–387.
4. Priest, G. (2002). Fuzzy Relevant Logic. In Paraconsistency: The Logical Way to the Inconsistent (Proceedings of the World Congress held in São Paulo), pp. 261-274.
5. Williamson, T. (1994). Vagueness. Routledge.
ABSTRACT. In this talk I discuss the following paradoxes (working in terms of propositions as truthbearers throughout):
(i) The truth-teller: the paradox of a proposition that says of itself that it is true:
P = <P is true>
(ii) The no-no paradox: the paradox of two propositions, each of which says of the other that it is false:
P1 = <P2 is false>
P2 = <P1 is false>
In the case of the truth-teller, there are two possible assignments of truth-value, T or F, but it seems that there is no evidence that tells us which is correct. Similarly, in the case of the no-no paradox, there are two possible assignments of truth-value (P1 is T and P2 is F; or P1 is F and P2 is T) but no evidence that tells us which is correct. These examples can therefore be called ‘paradoxes of underdetermination’. In this talk I discuss Graham Priest’s work on these paradoxes (Mortensen and Priest 1981, Priest 2005, Priest 2006) and I make some critical remarks about the responses he proposes. Then I turn to the recent account of truth and paradox given in Liggins forthcoming, which involves radically restricting the schema
(E) <p> is true iff p.
I explain how this approach deals with the paradoxes of underdetermination; and I argue that these proposed solutions have advantages over Priest’s.
Works cited
Liggins, David (forthcoming). In defence of radical restrictionism. Philosophy and Phenomenological Research. https://onlinelibrary.wiley.com/doi/full/10.1111/phpr.12391
Mortensen, Chris and Priest, Graham 1981. The truth teller paradox. Logique & Analyse 24: 381–388.
Priest, Graham 2005. Words without knowledge. Philosophy and Phenomenological Research 71: 686–694.
Priest, Graham 2006. In Contradiction: A Study of the Transconsistent (second edition). Oxford: Clarendon Press.
ABSTRACT. Thomas Kuhn argued that scientific development should be understood as an ever-continuing evolutionary process of speciation and specialization of scientific disciplines. This view was first expressed explicitly in The Structure of Scientific Revolutions, and Kuhn kept returning to it until the end of his life. In his last published interview, Kuhn laments that “I would now argue very strongly that the Darwinian metaphor at the end of the book [SSR] is right and should have been taken more seriously than it was” (2000, 307).
However, in my paper I do not focus on the evolution of Kuhn’s notion of the evolutionary development of science as such, but study two of its significant consequences regarding scientific progress. One is the resulting incoherence of science as a global cognitive venture. The other is the relation of this incoherence to truth as an aim of science.
Kuhn remarked that “[S]pecialization and the narrowing of the range of expertise now look to me like the necessary price of increasingly powerful cognitive tools” and that “[T]o anyone who values the unity of knowledge, this aspect of specialization … is a condition to be deplored” (2000, 98). These words imply that the evolution of science gradually decreases the unity of science. Further, the more disunified science is, the more incoherent it is as a whole.
Kuhn rejected the idea that science converges on the truth about the world, or the teleological view of scientific development, in part because he saw it as a historically unsubstantiated claim. But is truth as an aim of science also conceptually incoherent in Kuhn? It seems evident that the evolutionary view of scientific development makes the goal of progressing towards the singular Truth with a capital T impossible. As Nicholas Rescher, for example, has argued, the true description of the world should form a maximally coherent whole, a manifestation of ideal coherence (Rescher 1973, 1985). But what if truth is seen as local, applicable within the specialized disciplines, so that they produce truths about the matters they are specialized in describing? Could science aim at producing a large set of truths about the world without the requirement of their systematic coherence? Science would then be a collection of true beliefs without directionality or unity.
In brief, I study the relations between truth, incoherence and the evolution of science within the Kuhnian philosophical framework.
Bibliography
Kuhn, Thomas. 1970. The Structure of Scientific Revolutions. 2nd enlarged ed. Chicago: University of Chicago Press.
Kuhn, Thomas. 2000. The Road Since Structure. Chicago: University of Chicago Press.
Rescher, Nicholas. 1973. The Coherence Theory of Truth. Oxford: Oxford University Press.
Rescher, Nicholas. 1985. “Truth as Ideal Coherence.” Review of Metaphysics 38: 795-806.
ABSTRACT. In this paper, I argue that there are historiographical and philosophical reasons to resist the idea that there have been sciences in the past. Here I draw on insights from historians of science. If there were no sciences in the past, it is difficult to see how the history of science could provide evidential support (or falsifications) for philosophical theories of science. I examine different ways of understanding the relationship between the history and philosophy of science in a situation where the practices of the past cannot be judged to be sciences. I argue that among the alternatives there are three main lines along which the philosophy of science may proceed. 1. We can study how science would have been different, had its history been different. 2. We can test philosophical accounts using counterfactual scenarios: the question is not whether an account captures what actually happened but what would have happened, had science proceeded in accordance with the account. 3. We can estimate the possible future developments of science by studying which factors behind the development of science could change, either due to human intervention or due to a change in other areas of society. I point out that each of the lines 1–3 requires that counterfactual scenarios be built. Luckily, each of the lines can be shown to be a variation of the structure that is implicit in explanations in the historiography of science. Moreover, I argue that this general structure is often implicit in more traditional case studies in the philosophy of science, and therefore lines 1–3 are not as exotic as they might first appear. I conclude that the value of the history of science is that it provides the materials for building such counterfactual scenarios.
ABSTRACT. Although many risks associated with climate change can be predicted with high levels of confidence, some predictions are made in conditions of ‘deep uncertainty’ – i.e. in the absence of precise probabilities. For instance, while many scientists anticipate that global warming might have runaway effects, there tends to be uncertainty about its pace and about tipping points in the climate system. Additionally, economic models which calculate the long-term costs and benefits of climate action face major uncertainty about variables such as future human welfare, our future ability to adapt to changing climates, and our future potential to geo-engineer the climate. As a result, to invoke a distinction common in the literature, in anticipating climate futures we often do not find ourselves in a context of ‘risk’ (with known probabilities), but in a context of ‘uncertainty’ (absent precise probabilities; sometimes referred to as ‘deep’ or ‘Knightian’ uncertainty).
In response to this epistemic predicament, some scholars have urged us to refine our theoretical tools for assessing future possibilities, even when evidential probabilities are vague, unreliable or unknown (e.g. Betz 2010; Fox Keller 2015; Hansson 2016). One strategy for doing so – common in climate science – is to outline so-called scenarios about the future (Challiner et al. 2018). A drawback of possibilistic approaches, however, is that they tend to overreach, addressing many ‘mere possibilities’ that will never be actualized (Betz 2015). To serve as a useful theoretical tool, a minimal condition on what-if scenarios about climate futures is that they depict ‘real possibilities’ (Spangenberg 2018).
A similar call for focusing on ‘real possibilities’ has recently been advanced in discussions of the precautionary principle (e.g. Gardiner 2006). To serve as a plausible guide for decision-making under uncertainty, this principle should be restricted to what appear to be ‘real possibilities’: precautions should only be taken to avert bad outcomes that, to the best of our knowledge, might otherwise be actualized. Hence, a better epistemic grasp of ‘real possibilities’ is also relevant for ethical decision-making about climate change.
In this presentation I discuss early work which explores whether further advances can be made by differentiating between ‘real possibilities’ of different kinds. For instance, Katvaz et al. (2012) and Katvaz (2014) have proposed that we can rank different possibilities in terms of how ‘remote’ they are. But how can we assess the remoteness of different possibilities, absent appeal to evidential probabilities? Can such a ranking be made on purely possibilistic grounds? Or do all epistemic appeals to ‘real possibilities’ ultimately come down to probabilistic claims, as Roser (2017) suggests?
ABSTRACT. Drawing upon Haslanger’s work on the concepts of gender and race [2], as well as the (rapidly) developing research program on conceptual engineering [1], we explore the possibility that skepticism regarding climate change could be addressed by the targeted engineering of controversial concepts. Expressions like “global warming” and “climate change”, which are central in debates about the current and future state of our environment, are politically/ideologically charged [3][4]. This, we believe, makes effective communication and public discourse at best difficult and at worst impossible, which in turn only exacerbates exaggerated skepticism toward scientific results about climate change.
Building on Haslanger’s idea [2] that we should define our concepts so that they best serve whatever purpose they are meant to serve, combined with the broader argument from conceptual engineering which calls for the examination and revision of “defective” concepts, we suggest that those environmental concepts which hinder effective communication may be defective and, if so, should be revised [1].
On Haslanger’s analysis [2], we are to consider the pragmatics of the way we employ the relevant concepts (rather than attempting to explicate them). We should consider what practical (or cognitive) tasks they should enable us to accomplish and whether they are effective tools for accomplishing them. If the concepts under investigation are not adequate tools, they are to be revised or replaced.
One way to approach this revision is through conceptual engineering. Cappelen [1] defines conceptual engineering as an activity concerned with how to assess and improve concepts/representational devices. Some of the central questions for this activity concern how we can assess the adequacy of our representational devices and what strategies are available for amelioration. Using this as a platform, we could assess and seek to ameliorate those environmental concepts which are most often disputed in public and continuously threaten to increase distrust in scientific results. Such concepts may include climate change itself (specifically in the context of the move from ‘global warming’ as the central concern of environmentalists to ‘climate change’). More generally, the concepts of ‘environment’ and ‘nature’ are open to question. We hypothesize that skepticism regarding climate change may be attributable, at least in part, to justified suspicion on the part of the public about the meaning of the concepts commonly employed in this discourse.
References
[1] Herman Cappelen. Fixing Language: Conceptual Engineering and the Limits of Revision. Oxford: Oxford University Press, 2018.
[2] Sally Anne Haslanger. “Gender and Race: (What) Are They? (What) Do We Want Them To Be?”. Nous 34(1):31-55, March 2000
[3] Robert K. Kaufmann, Michael L. Mann, Sucharita Gopal, Jackie A. Liederman, Peter D. Howe, Felix Pretis, Xiaojing Tang, and Michelle Gilmore: “Spatial heterogeneity of climate change as an experiential basis for skepticism”. PNAS January 3, 2017, 114 (1): 67-71.
[4] Hannah Schmid-Petri, Silke Adam, Ivo Schmucki, Thomas Häussler: “A changing climate of skepticism: The factors shaping climate change coverage in the US press”. Volume 26, issue 4, pages 498-513.
On the relations between visual thinking and instrumental practice in mathematics
ABSTRACT. Following the development of the cognitive sciences over the past twenty years, several attempts have been made within the philosophy of mathematical practice to use the results of research in cognitive science in the interpretation of visual thinking in mathematics (see the papers in Mancosu, Jorgensen and Pedersen, eds. 2005). The most detailed of these attempts is Marcus Giaquinto’s book Visual thinking in mathematics: an epistemological study. Despite its many merits, the book received a critical review (see Avigad 2009), which can be seen as a criticism of any attempt to use empirical results of cognitive science in the normative philosophy of mathematics. Even if criticisms of this sort are not rare (see Balaguer 1999, Lavine 1992, or Riskin 1994), it seems that they miss an important point: the use of representational tools in mathematics.
The aim of the paper is to complement the cognitive interpretation of visual thinking in mathematics with its instrumental aspect. Pictures in mathematical texts are used as tools of visual representation. As such they have a semiotic dimension: in interpreting these pictures, conventions play a crucial role. These conventions often force us to interpret a picture in a way that is contrary to what is actually drawn on the paper. Thus the conventions require interpreting the intersection of two lines as a point even though we are looking at a small region. Such conventions establish the instrumental practice in which representations are subjected to manipulations and form the basis of logical inferences. Within this instrumental practice, the manipulations of representations and the inferences made on their basis acquire a normative character: in the practice we distinguish a legitimate manipulation from an illegitimate one, and a justified judgment from an unjustified one. The normative character of the instrumental practice is often ignored by the proponents of the cognitive approach to visual thinking, and so the critics’ comments are often justified.
We are convinced that the normative character of visual representations is an integral part of their use in mathematical practice. Recourse to this normative aspect of mathematical practice makes it possible to address the criticism mentioned above.
References
Avigad, J. (2009): Review of Marcus Giaquinto, Visual thinking in mathematics: an epistemological study. Philosophia Mathematica 17, pp. 95-108.
Balaguer, M. (1999): Review of Michael Resnik, Mathematics as a Science of Patterns. Philosophia Mathematica 7, pp. 108-126.
Giaquinto, M. (2007): Visual thinking in mathematics: an epistemological study. Oxford University Press, Oxford.
Lavine, S. (1992): Review of Maddy (1990), Journal of Philosophy 89, pp. 321-326.
Mancosu, P., Jorgensen, K. and Pedersen, S. (eds. 2005): Visualization, Explanation and Reasoning Styles in Mathematics. Springer, Dordrecht.
Riskin, A. (1994): On the most open questions in the history of mathematics: a discussion of Maddy. Philosophia Mathematica 2, pp. 109-121.
15:45
Arezoo Islami (San Francisco State University, United States)
Who discovered imaginaries? On the Historical Nature of Mathematical Discovery
ABSTRACT. In the aftermath of Thomas Kuhn’s “The Structure of Scientific Revolutions”, which established a new image of science, philosophers of mathematics also turned their attention to the role of paradigms and the possibility of revolutions in mathematics (e.g. Donald Gillies (ed.), Revolutions in Mathematics, 1992). What seemed to be needed was a new historiography of mathematics analogous to the new historiography of science proposed by Butterfield, Lovejoy and especially Alexandre Koyré.
Yet the case of mathematics proved to be more difficult and less fruitful than that of modern physics. Mathematics, after all, presents us with a conceptual stability that grounds all other objective (empirical) sciences (this is the core of the argument that structural realists still use to counter Kuhn’s position). Mathematical entities seem to have an existence outside of time that makes them immune to historical change as well as to the mathematician’s mortal touch! In this paper, I argue that the history of mathematics, if seen as a “repository for more than anecdote or chronology”, can present us with a fascinating and dynamic image of mathematics, different from what appears to us from textbooks and from what the traditional schools in the philosophy of mathematics have proposed.
My methodology is based on the analysis of the case of imaginaries (square roots of negatives) from the 16th century on. While any number of examples could be used to demonstrate the point about the dynamic nature of mathematics, imaginaries present a particularly interesting case given their origins, the long-standing dispute over their status, and their indispensable applicability in mathematics and other sciences. The goal of this study is to show that the distinction that philosophers of science make between seeing and seeing as is also of crucial importance in mathematics. Through examining the problem of “identification”, I hope to persuade philosophers of mathematics of the dynamic nature of mathematics, similar to that of modern physics and other empirical sciences.
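A standard illustration of those 16th-century origins (added here for reference, not part of the abstract itself) is Bombelli’s treatment of the cubic x³ = 15x + 4. Cardano’s formula gives

x = ∛(2 + √(−121)) + ∛(2 − √(−121)),

and since (2 + √(−1))³ = 2 + 11√(−1) and (2 − √(−1))³ = 2 − 11√(−1), the two cube roots sum to 4, which is indeed a root of the equation. A perfectly real solution is reached only by calculating through the ‘impossible’ square roots of negatives, which is part of what made their status so contested.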
The difference between theoretical and practical reason has a long history in philosophy. Modern discussions concentrate on the relation between know-how and knowing-that, and ask whether one of the two reduces to the other, or, if not, what the nature of know-how is. During the last decades, practical scientists in the information and social sciences (management, psychology, and law) have recognized the need to discern ‘procedural or action means-end knowledge’, which may often be paraphrased as follows: ‘if one wants to achieve goal G in (technical, medical, etc.) context C, perform action A.’ This type of explicit (intersubjective, not tacit), or normative, action knowledge seems hardly to be directly deducible from declarative scientific knowledge. Nevertheless, it features prominently in countless patents and valuable academic research projects aiming at means-end or intervention knowledge. Despite its fundamental importance it has escaped the attention of most epistemologists. The purpose of this symposium is to draw attention to, discuss and foster further interest in the production and results of academic (explicit, action) means-end knowledge in engineering, medicine, management and other branches of practical science.
Chair:
Sjoerd Zwart (University of Technology Delft, Netherlands)
Declarative and procedural knowledge: a recent mutation of the theory/practice duality and its significance in the era of computational science
ABSTRACT. The distinction between procedural and declarative knowledge took shape in the early 1970s (cf. Winograd 1975), in the context of the nascent cognitive sciences and artificial intelligence programs. While it certainly has some likeness to Ryle’s distinction between know-how and know-that (Fantl 2017), it also displaces and attenuates the opposition between the two traditional poles of knowledge. Both declarative and procedural knowledge are explicit and formal; the tacit, pre-linguistic, corporeal dimension of ‘know-how’ dear to Ryle having almost disappeared, they are seen as complementary: one cannot proceed without data, and data is meaningless without any guidance on its correct interpretation. Simon (1984) concludes that they are interchangeable: their mutual balance is set by pure considerations of computational efficiency, so that having few procedures and lots of data may make sense in some contexts, while the reverse may be true in others.
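To make the contrast concrete, here is a toy sketch in Python (my own illustration, not drawn from the paper; the freezing-point example and all names are invented): the same piece of knowledge is stored once declaratively, as data interpreted by a generic procedure, and once procedurally, as a dedicated rule.

# Declarative style: the knowledge lives in a data structure;
# one generic procedure interprets it.
FREEZING_POINTS_C = {"water": 0.0, "ethanol": -114.1}

def freezes_declarative(substance, temp_c):
    return temp_c <= FREEZING_POINTS_C[substance]

# Procedural style: the same knowledge is compiled into a special-purpose rule.
def water_freezes(temp_c):
    return temp_c <= 0.0

print(freezes_declarative("water", -5.0))  # True
print(water_freezes(-5.0))                 # True

On the view attributed to Simon above, which mix of data and procedures to use is a matter of computational efficiency rather than a deep philosophical divide.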
In this communication we would like to trace the history, significance and posterity of this relatively recent mutation of the theory/practice opposition. Grounded in a computational paradigm of knowledge, it tends to treat this opposition as technical rather than philosophical. Concepts are viewed as procedures for coding information, mediating inputs and outputs (e.g. Dretske [1981] 1999), and representations as results of data compression. In the context of computational science (Humphreys [2004] 2007) and big data, this clearly tends to blur the distinction between theoretical science and technology: computerized procedures such as simulations are increasingly becoming the bread and butter of science, and some view the function of scientific theories as simply a means of efficiently representing data about the world (e.g. Anderson 2008). Conversely, technology and engineering, being also computerized, e.g. through Computer Aided Design (CAD), have never been so scientific and formal, and so far removed from their roots in the ‘know-how’ of the traditional crafts.
Could these trends be confirmation that the theory/practice opposition is outdated, at least in the realm of scientific knowledge, and that we should get rid of old, implicit representations of science as culminating in theoria, i.e. contemplation? The pragmatist school of the first half of the 20th century issued similar claims (cf. Hickman 1992): we will ask whether it anticipates and prepares the emergence of the computational paradigm. If so, pragmatist epistemology, which is today coming back in force, could be seen as bridging Ryle’s know-how with computational science’s procedural stance. We will aim to show that the concept of ‘method’ is fundamental in this shift, morphing into related notions such as ‘procedure’ or ‘algorithm’, progressively extending the middle ground between theory and practice, and eventually taking center stage in epistemology. Herbert Simon, who put ‘design’ at the center of this new conception of science and technology, will appear as a pivotal figure in this debate. As a conclusion, we will advance some hypotheses about the limits of such a conception of science and theory.
References
Anderson, Chris. 2008. The End of Theory: The Data Deluge Makes the Scientific Method Obsolete. Wired Magazine 16.07.
Anderson, John R. 2007. How Can the Human Mind Occur in the Physical Universe? Oxford ; New York: Oxford University Press.
Calude, Cristian S., and Giuseppe Longo. 2017. ‘The Deluge of Spurious Correlations in Big Data’. Foundations of Science 22 (3): 595–612.
Devitt, Michael. 2011. ‘Methodology and the Nature of Knowing-How’. The Journal of Philosophy 108 (4): 205–18.
Dretske, Fred I. (1981) 1999. Knowledge and the Flow of Information. David Hume Series. Stanford, CA: CSLI Publications.
Fantl, Jeremy. 2017. ‘Knowledge How’. In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta, Fall 2017. Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/ archives/fall2017/entries/knowledge-how/.
Hickman, Larry A. 1992. John Dewey’s Pragmatic Technology. Bloomington: Indiana Univ. Press.
Humphreys, Paul. (2004) 2007. Extending Ourselves: Computational Science, Empiricism, and Scientific Method. New Ed. Oxford ; New York: Oxford University Press.
Simon, Herbert A. 1984. ‘Expert and Novice Problem Solving in Science and Mathematics’. In . Université Paul Valéry, Montpellier, France.
———. 1978. ‘Rationality as Process and as Product of Thought’. The American Economic Review 68 (2): 1–16.
———. (1969) 2008. The Sciences of the Artificial. 3rd ed. Cambridge, Mass.: MIT Press.
Winograd, Terry. 1975. ‘Frame Representations and the Declarative/Procedural Controversy’. In Representation and Understanding, edited by D. G. Bobrow and A. Collins, 185–210. San Diego: Morgan Kaufmann.
15:45
Sjoerd Zwart (University of Technology Delft, Netherlands)
ABSTRACT. A substantial part of the knowledge developed by engineers in commercial laboratories and universities of technology consists of means-ends knowledge (MEK), or (intersubjective) know-how. This knowledge concerns the question of which engineering interventions to apply to achieve a pre-specified technical goal within a concrete, well-identified engineering context. Engineers need and produce a lot of it. For instance, Nancy Nersessian and Christopher Patton recognize know-how as one of the three types of outputs in their ethnographic investigation of biomedical laboratories, besides artifacts and descriptive knowledge (DK) (2009: 728/730). In the rest of their chapter, however, the relation between these three remains somewhat underdeveloped. For methodological purposes, I take Wimsatt’s (2007) re-engineering-philosophy perspective and propose to treat engineering MEK as an irreducible and equally important output of an engineering project beside DK. Besides their difference in goals, MEK and DK have many incompatible characteristics. To mention a few: the first is value-laden at the object level, intentional, valued for its context dependency, and defies truth values, whereas none of these apply to DK. These differences in characteristics raise the question of how, in contrast to DK, MEK is validated. In this paper, I show that engineers often refer to models to validate MEK, besides applying scientific knowledge and carrying out practical experiments. To illustrate this claim, I discuss the way in which William Froude prescribed how to use scale models to predict the resistance of a hitherto non-existing ship hull. It turns out that Nersessian and Patton’s technical term of interlocking models is of great help in disentangling the intricate question of model-based MEK validation. Moreover, not only are models indispensable for the development and validation of MEK, but MEK also plays a crucial role in the development of models in an engineering project. As such, the identification of MEK in engineering practices provides extra insight into the complicated processes of model-based reasoning as well.
Nersessian, N. J., & Patton, C. (2009). Model-based reasoning in interdisciplinary engineering. In A. Meijers (Ed.), Handbook of the philosophy of technology and engineering sciences (pp. 687–718). Amsterdam, Netherlands: Elsevier.
Wimsatt, W. C. (2007). Re-engineering philosophy for limited beings: Piecewise approximations to reality. Harvard University Press.
Dunja Šešelja (Ludwig Maximilian University of Munich, Germany)
Understanding scientific inquiry via agent-based modeling
ABSTRACT. Computational modeling has in recent years become an increasingly popular method in philosophy of science and social epistemology. In this talk I will discuss the role of simulations of scientific inquiry in the form of agent-based models (ABMs), which are at the heart of this trend. I will start by arguing that a primary function of ABMs of scientific inquiry developed in philosophy of science, in contrast to ABMs in the empirical sciences, is to contribute to our understanding of the process of inquiry and of the factors that may have an effect on it. In view of this, I will defend two specific ways in which ABMs can increase our understanding of science: first, by providing novel insights into socio-epistemic factors that may have a significant impact on the process of inquiry, and second, by providing evidence for or against previously proposed explanations of concrete historical episodes. I will illustrate each of these functions with a set of ABMs which my collaborators and I have developed. While these models are abstract and highly idealized, I will show how the results obtained from them can be analyzed in terms of their robustness and empirical validity.
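For readers unfamiliar with the genre, the following is a minimal, self-contained sketch (in Python) of the kind of agent-based model at issue. It is a generic toy in the spirit of bandit-style models of theory choice, not one of the models developed by the author and her collaborators, and all names and parameter values are invented for illustration.

import random

def community_run(n_agents=8, rounds=300, p_old=0.5, p_new=0.55, pulls=10, seed=0):
    # Each agent holds Beta(a, b) beliefs about the success rate of a 'new' method;
    # the success rate of the 'old' method (p_old) is treated as known.
    rng = random.Random(seed)
    beliefs = [[1.0 + 2 * rng.random(), 1.0 + 2 * rng.random()] for _ in range(n_agents)]
    for _ in range(rounds):
        shared = []  # results produced this round by agents who test the new method
        for a, b in beliefs:
            if a / (a + b) > p_old:  # agent currently expects the new method to be better
                successes = sum(rng.random() < p_new for _ in range(pulls))
                shared.append((successes, pulls - successes))
        for belief in beliefs:  # everyone sees everyone's results (a complete network)
            for s, f in shared:
                belief[0] += s
                belief[1] += f
    # number of agents who end up favouring the (in fact better) new method
    return sum(a / (a + b) > p_old for a, b in beliefs)

print(community_run())

Varying the number of agents, the network over which results are shared, or the amount of evidence produced per round is how such models are used to probe which socio-epistemic factors affect whether, and how quickly, a community converges on the better option.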
Can neuropsychoanalysis save the life of psychoanalysis?
ABSTRACT. Neuropsychoanalysis is a new school of thought attempting to bridge neuroscience and psychoanalysis. Its aim is to correlate the psychodynamics of the mind with neurodynamic processes (Fotopoulou, Pfaff & Conway, 2012; Solms & Turnbull, 2002 & 2011). Neuropsychoanalysis suggests that psychoanalysis should be related to neuroscience such that psychoanalytical hypotheses can be put to rigorous test and be confirmed, modified, or falsified. In addition, adherents of neuropsychoanalysis think that the psychoanalytic perspective, with its detailed studies of human experience and subjectivity, can contribute to our knowledge about mind and behavior. However, I will present two problems that indicate that neuropsychoanalysis in its present state is not able to save the life of psychoanalysis. The focus is on Freudian psychoanalysis, but the argument may also apply to other psychodynamic theories.
First, by focusing exclusively on the relation between psychoanalysis and neuroscience, neuropsychoanalysis ignores the psychological level, that is, psychological theories and hypotheses that may be better able than psychoanalysis to explain certain phenomena. In fact, most of the explanations that compete with the psychoanalytic ones come from psychology and not neuroscience. Thus, in order to confirm psychoanalytic hypotheses it is not sufficient to show that they are consistent with neuroscience; they must also be at least as corroborated as the competing psychological hypotheses. I illustrate this point by referring to research on the defense mechanism projection. A tentative conclusion is that psychoanalytic constructs may be superfluous, because it is often sufficient to take into consideration only highly corroborated psychological theories in order to explain the phenomena referred to by psychoanalysis.
Second, neuropsychoanalysis has so far not been very successful in generating new knowledge about mind and behavior. It has mostly been concerned with correlating the psychodynamics of the mind, described in psychoanalytic terms, with brain processes. These correlations may show that some Freudian ideas are similar to ideas in neuroscience, but this is not the same as generating new knowledge about mind and behavior. Even though some well-respected scientists, such as Jaak Panksepp, Eric Kandel, Joseph LeDoux, and Antonio Damasio, have shown some interest in psychoanalysis and neuropsychoanalysis, they (in common with most scientists in psychology, neuroscience, and psychiatry) very seldom refer to psychoanalytic ideas in their publications. They show a remarkable caution in their display of support and prefer to use concepts prevalent in cognitive neuroscience instead of psychoanalytic concepts (Ramus, 2013). In any case, the burden of proof is on the psychoanalysts to show that psychoanalysis can contribute to our growing knowledge about mind and behavior, and to do so they need to address the challenges mentioned above.
References:
Fotopoulou, A., Pfaff, D. & Conway, M.A. (2012). From the Couch to the Lab: Trends in Psychodynamic Neuroscience. Oxford: Oxford University Press.
Ramus, F. (2013). What’s the point of neuropsychoanalysis? The British Journal of Psychiatry, 203: 170-171.
Solms, M. & Turnbull, O. (2002). The Brain and the Inner World. London: Karnac.
Solms, M. & Turnbull, O. (2011). What is neuropsychoanalysis? Neuropsychoanalysis, 13: 133-145.
Problematic interdisciplinarity of Cognitive Science
ABSTRACT. In spite of the implicit agreement that cognitive science is by default an interdisciplinary research project (as the introductory chapters of any textbook or companion on Cognitive Science attest), some authors submit that this interdisciplinarity [ITD] is not obvious at all, and that we should consider whether there is any (Cohen-Cole, 2007; Graff, 2015). They argue, mainly on the basis of historical data, that cognitive science tends towards ever greater divergence between its disciplines, which are becoming more and more autonomous and self-sufficient (Graff, 2015). Is Cognitive Science really turning away from its interdisciplinary beginnings, or has it perhaps never been fully interdisciplinary?
Starting from the distinction between object-, problem-, method- and theory-oriented ITD (Schmidt, 2011), I will focus on the tension between object- and problem-oriented ITD in Cognitive Science. More specifically, I will argue that: (1) interdisciplinary research is mostly problem-driven; there are ITD-forcing problems, essential for lively interdisciplinary research; (2) although object-oriented ITD in Cognitive Science seems uncontroversial, most grand problems of Cognitive Science (the nature of mind or cognition) are not ITD-forcing problems, but there are many smaller interdisciplinary problems forced by issues related to integration between particular disciplines of Cognitive Science (e.g. the stabilization of theoretical constructs between psychology and neuroscience (Sullivan, 2017)). Therefore, following the classical work of Campbell (Campbell, 2014) and the rich contemporary research on interdisciplinarity (e.g. Grüne-Yanoff, 2016; Huutoniemi, Klein, Bruun, & Hukkinen, 2010; Klein, 2010; Koskinen & Mäki, 2016; MacLeod, 2016), I will argue that cognitive science strives towards a fish-scale model of a plurality of interdisciplines, which is in accordance with the contemporary pluralistic stance in Cognitive Science (Dale, 2008; Dale, Dietrich, & Chemero, 2009).
References
Campbell, D. T. (2014). Ethnocentrism of Disciplines and the Fish-Scale Model of Omniscience. Interdisciplinary Collaboration: An Emerging Cognitive Science, 3.
Cohen-Cole, J. (2007). Instituting the science of mind: intellectual economies and disciplinary exchange at Harvard’s Center for Cognitive Studies. The British Journal for the History of Science, 40(4), 567–597.
Dale, R. (2008). The possibility of a pluralist cognitive science. Journal of Experimental and Theoretical Artificial Intelligence, 20(3), 155–179.
Dale, R., Dietrich, E., & Chemero, A. (2009). Explanatory pluralism in cognitive science. Cognitive Science, 33(5), 739–742.
Graff, H. J. (2015). Undisciplining knowledge: Interdisciplinarity in the twentieth century. JHU Press.
Grüne-Yanoff, T. (2016). Interdisciplinary success without integration. European Journal for Philosophy of Science, 6(3), 343–360.
Huutoniemi, K., Klein, J. T., Bruun, H., & Hukkinen, J. (2010). Analyzing interdisciplinarity: Typology and indicators. Research Policy, 39(1), 79–88.
Klein, J. T. (2010). A taxonomy of interdisciplinarity. The Oxford Handbook of Interdisciplinarity, 15, 15–30.
Koskinen, I., & Mäki, U. (2016). Extra-academic transdisciplinarity and scientific pluralism: what might they learn from one another? European Journal for Philosophy of Science, 6(3), 419–444.
MacLeod, M. (2016). What makes interdisciplinarity difficult? Some consequences of domain specificity in interdisciplinary practice. Synthese, 1–24.
Schmidt, J. C. (2011). What is a problem?: on problem-oriented interdisciplinarity. Poiesis & Praxis, 7(4), 249.
Sullivan, J. A. (2017). Coordinated pluralism as a means to facilitate integrative taxonomies of cognition. Philosophical Explorations, 20(2), 129–145.
Tereza Křepelová (Masaryk University, Faculty of Social Studies, Czechia)
Positivisation of Political Philosophy and Its Impact on The Whole Discipline
ABSTRACT. For the past decade, we have witnessed an outburst of discussions concerning the methods of political philosophy (mostly of its prevailing analytical branch), resulting in the emergence of the first propaedeutic literature systematizing and summarizing existing methodological approaches and frameworks (see Blau 2018, List and Valentini 2016, Leopold and Stears 2008). To understand the cause and the possible impact of these discussions, we need to view them in the broader context of the problematic position of political philosophy within positivist-oriented political science. This paper aims to outline that context and to explain how tendencies such as the positivisation of political philosophy might limit its epistemological scope and critical capacity.
One of the features that political philosophy has incorporated from positivist-oriented science is the urge to validate its normative outcomes. This has led to the development of frameworks and methods aimed at assessing the external validity of normative theories, an aim linked to the positivist presumption of epistemological objectivism. One of the methods developed for these purposes is reflective equilibrium, nowadays considered the most widely used method in moral and political philosophy (Varner 2012: 11 in Knight 2017: 46; Rawls 1999: 15–18, 40–46; Daniels 2013 in List and Valentini 2016: 17). However, reflective equilibrium has been widely criticized for its effects, such as the avoidance of controversial questions linked to metaphysics, meta-ethics and religion (Norman 1998: 284), as well as for its interconnection with the liberal paradigm (especially the Rawlsian strand). The paper will outline how reflective equilibrium produces such normative outcomes and how it (unintentionally) limits the critical capacity of political philosophy, since it serves as an affirmative framework that presupposes its own conclusions rather than one that would provide an objective assessment of various normative theories and principles. Finally, the paper will critically discuss the very possibility of an objective assessment of normative theories, given that the subject of the greatest reasonable disagreement in society and in political philosophy is the question of objectivity itself (see e.g. Nussbaum 2001: 890).
Bibliography
• BLAU, Adrian. Methods in Analytical Political Theory. Cambridge University Press. 2018.
• DANIELS, Norman. “Reflective Equilibrium”. The Stanford Encyclopedia of Philosophy (Spring 2018 Edition), Edward N. Zalta (ed.).
• KNIGHT, Karl. “Reflective Equilibrium”. In: BLAU, Adrian. Methods in Analytical Political Theory. Cambridge University Press. 2018.
• LEOPOLD, David et STEARS Marc. Political theory: methods and approaches. Oxford: Oxford University Press, 2008.
• LIST, Christian et VALENTINI, Laura. “Methodology of Political Theory”. In: CAPPELEN, Herman, Tamar GENDLER and John HAWTHORNE. The Oxford Handbook of Philosophical Methodology. Oxford: Oxford University Press, 2016.
• NORMAN, Wayne. “‘Inevitable and Unacceptable?’ Methodological Rawlsianism in Anglo-American Political Philosophy”. Political Studies. XLVI: 276–294. 1998.
• NUSSBAUM, Martha C. “Political Objectivity”. New Literary History. 32: 883–906. 2001.
15:45
Zhicong Shang (University of Chinese Academy of Sciences; China Society for Dialectic of Nature (Philosophy of Science & Technology), China)
The Competition of Interests in Public Scientific Knowledge Production: An Analysis of a Chinese Case
ABSTRACT. Public scientific knowledge refers to the kind of scientific knowledge that is used as the basis of fact judgement in public policy-making, typically characterized by scientific rationality, economic sharing and political legitimacy. This kind of knowledge originates from the scientific knowledge produced by the scientific community, but it is formed and reproduced in the public policy-making process through the negotiation of all participants. The production of public scientific knowledge and public policy-making are one and the same process. Through an analysis of public scientific knowledge production in China’s safety-assessment decisions on GM crops since the 1990s, this paper finds that the competition of different interests is inevitable for all participants. Among these, competition for material interests, such as research funds or high-value crops, is essential, while discourse competition is even more crucial: participants strive to shape public policy-making through control of the safety discourse on GM crops. This competition has the following four characteristics: (1) all participants strongly support the principle of democracy; (2) constrained by specific state systems, not all interested parties can become participants; (3) participants sometimes resort to non-normative ways and means of participation; (4) government officials have overwhelming authority, but they are sometimes unsure whose interests to represent, causing constant disputes or even the breakdown of negotiation. Therefore, this paper concludes that there is a competition of interests in the public knowledge production process, and that the key to resolving this competition lies in choosing appropriate participants under different state systems.
Reference:
Sheila Jasanoff, Designs on Nature. Princeton, NJ: Princeton University Press, 2005.
Sheila Jasanoff. Science, Expertise and Public Reason. New York: Routledge, 2012.
Duncan MacRae, Jr. and Dale Whittington. Expert Advice for Policy Choice: Analysis and Discourse. Washington, DC: Georgetown University Press, 1997.
Michael Gibbons, Camille Limoges, Helga Nowotny, Simon Schwartzman, Peter Scott and Martin Trow (eds.). The New Production of Knowledge: The Dynamics of Science and Research in Contemporary Societies. Sage, 2010.
YANG Hui, SHANG Zhicong. Public Knowledge Production in Science & Technology Decision Making. Studies in Dialectics of Nature 2014(9): 64-70.
Count-as Conditionals, Background Conditions and Hierarchy of Constitutive Rules
ABSTRACT. Searle’s theory of institutional facts seems to be of vital importance for understanding social reality. According to his summary, the following “rather simple set of equivalences and logical implications” holds (2010, p. 23):
institutional facts = status function → deontic powers → desire-independent reason for action.
Status functions are functions that people, things, processes, and so on have, not in virtue of their “sheer physical features” but in virtue of a certain status they are recognized to have, and deontic powers are things like “rights, duties, obligations, requirements, permissions, authorizations, entitlements, and so on” (2010, p. 9). In simple cases, institutional facts are generated in accordance with constitutive rules of the following form (Searle 1969, pp. 51-52, 1995, pp. 28, 41-51):
X counts as Y in context C,
where the term Y stands for the relevant status which the status function accompanies.
Recently, several attempts to capture the logic of count-as conditionals have been made in the deontic logic literature. For example, Jones and Sergot (1996, p. 436) include the following principle among the logical principles of their logic of count-as:
(A ⇒s B ) → ((B ⇒s C ) → (A ⇒s C )),
where the expression “⇒s” is used to represent the special kind of conditional such that the sentence “A ⇒s B” intuitively means that A counts as B in the institution s.
The importance of having such a logic is of course clear, but if we are to analyze the logic of social reality, it seems that we need to go further, for at least two reasons. First, even if an entity of type X counts as Y in context C, another entity of type X may fail to do so in other contexts. Thus, we need to be able to talk about the background conditions that characterize C. Second, constitutive rules can be hierarchically structured, and so an entity e of type X which counts as Y in a context c of type C may further count as Z in that context if any entity of type Y counts as Z in context D and c is of type D as well. The purpose of this paper is to show how the logic of such phenomena can be captured in the Channel Theory developed in Barwise and Seligman (1997).
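To fix ideas, the second point can be put schematically (in my own informal notation, not the channel-theoretic apparatus the paper goes on to develop): suppose that

X counts as Y in any context of type C, and Y counts as Z in any context of type D.

Then a particular entity e of type X counts as Z in a particular context c only when c satisfies the background conditions of both C and D; if c is of type C but not of type D, the chain of constitutive rules breaks off at Y. It is this interplay of background conditions and hierarchically structured rules that the paper proposes to capture within Channel Theory.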
References
Barwise, J. & Seligman, J., Information Flow: The Logic of Distributed Systems, Cambridge University Press, 1997.
Jones, A. J. I. & Sergot, M., “A Formal Characterization of Institutionalised Power”, Journal of the IGPL, Vol. 4, No. 3, pp. 427-443, 1996.
Searle, J. R., Speech Acts: An Essay in the Philosophy of Language, Cambridge University Press, 1969.
Searle, J. R., The Construction of Social Reality, The Free Press, 1995.
Searle, J. R., Making the Social World: The Structure of Human Civilization, Oxford University Press, 2010.
Renzong Qiu (Institute of Philosophy, Chinese Academy of Social Sciences, China)
Ruipeng Lei (Center for Bioethics, Huazhong University of Science and Technology, China)
Intergenerational Justice Issues in Germline Genome Editing
ABSTRACT. The birth of the twins whose embryos were subjected to genome editing shocked the participants at the Second International Summit Meeting on Genome Editing as well as the world. The Chinese scientist Dr. Jiankui He was blamed for violating relevant Chinese regulations and well-accepted international ethical norms, as well as for an irresponsible intervention that may bring great health harms to the twins and their offspring, or future generations, owing to his invalid research design: by disabling their CCR5 gene, the twins may be at greater than average risk of dying of other infections, including flu. This constitutes an obvious example in which an action taken by the present generation may directly cause harm to future generations. Some philosophers deny the possibility of future people having rights vis-à-vis us, on the grounds that future people will live in the future, that our epistemic situation does not allow us to relate to them as identifiable individuals, and that they cannot claim these rights against us, i.e. cannot impose sanctions on currently living people for non-fulfillment of their corresponding duties. This article will argue that when we edit the genome of the embryo of a parent with a genetic condition, future people are definitely bearers of rights; that the rights they have will be determined by the interests (health, wellbeing) they will have then; and that our present actions and policies can affect those interests. If we can so severely frustrate such interests of future people, and thereby violate intergenerational justice, we can violate their future rights. The twins, Lulu and Nana, are identifiable people living now, but their offspring will be non-identifiable future people who, like the twins, may be at greater than average risk of dying of other infections. Our obligations to these non-identifiable future people are no less than our obligations to the twins, because they too are human beings, which requires us to relate morally to them as fellow humans. Based on this ethical inquiry, we suggest that provisions safeguarding the rights of future people and upholding intergenerational justice should be included in regulations on human genome editing.
15:45
Xiaoju Dong (Department of the History of Science, Tsinghua University., China)
CANCELLED: How Should We Treat Human Enhancement Technology: Acceptance or Rejection?
ABSTRACT. In recent years, the idea of human enhancement no longer exists only in science fiction and film, but is actually supported by related technologies. In philosophical research on human enhancement technology, the related views can be summarized as Conservatism, Radicalism and Eclecticism. Fundamentally speaking, the reason for the disagreement among these three viewpoints is that they hold different positions regarding our own existence as “human beings” and the issue of personal identity. Can an enhanced person be called a “human”? Can “I” still be the original “I” after enhancement? In considering such questions, different views will inevitably lead to different answers on the basis of their own positions. Based on a detailed analysis of existing viewpoints about these kinds of technology, and combining them with narrative theory and embodiment theory, this paper attempts to elucidate why personal identity is crucial for human beings, and how human enhancement technology may have a negative impact on personal identity and further endanger the intrinsic nature of the human. Therefore, as many scholars have suggested, we should hold a prudent and critical attitude towards the development of human enhancement technology.
References
Allen E. Buchanan. 2011. Beyond Humanity: The Ethics of Biomedical Enhancement. New York: Oxford University Press.
Carl Elliott. 2004. “Enhancement technologies and identity ethics”. Society. 41 (5) :25-31.
David DeGrazia. 2000. “Prozac, Enhancement, and Self-Creation”. The Hastings Center Report. 30(2): 34-40.
David DeGrazia. 2005. “Enhancement Technologies and Human Identity”. Journal of Medicine & Philosophy. 30 (3) :261-283.
David DeGrazia. 2005. Human Identity and Bioethics. London: Cambridge University Press.
David DeGrazia. 2016. “Ethical Reflections on Genetic Enhancement with the Aim of Enlarging Altruism”. Health Care Analysis. 24 (3) :180-195.
Don Ihde. 1990. Technology and the Lifeworld. Indiana University Press.
Don Ihde. 1993. Postphenomenology: Essays in the Postmodern Context. Illinois: Northwestern University Press.
Don Ihde. 2012. “Postphenomenological Re-embodiment.” Foundations of Science. 2012(17):373-377.
Francis Fukuyama. 2003. Our Posthuman Future: Consequences of the Biotechnology Revolution. New York: Farrar, Straus and Giroux.
Janet A. Kourany. 2014. “Human Enhancement: Making the Debate More Productive”. Erkenntnis. 79 (5) :981-998.
John Harris. 2007. Enhancing Evolution: The Ethical Case for Making Better People. Princeton and Oxford: Princeton University Press.
Julian Savulescu. Nick Bostrom. 2009. Human Enhancement. New York: Oxford University Press.
Julian Savulescu. Ruud ter Meulen. Guy Kahane. eds. 2011. Enhancing Human Capacities. Wiley-Blackwell.
Jürgen Habermas. 2003. The Future of Human Nature. Polity Press.
Laura Y. Cabrera. 2015. Rethinking Human Enhancement: Social Enhancement and Emergent Technologies. London: Palgrave Macmillan.
Marya Schechtman. 1996. The Constitution of Selves. New York: Cornell University Press.
Marya Schechtman. 2004. “Personality and Persistence: The Many Faces of Personal Survival”. American Philosophical Quarterly. 41(2): 87-105.
Marya Schechtman. 2007. “Stories, Lives, and Basic Survival: A Refinement and Defense of the Narrative View”. Royal Institute of Philosophy Supplement. 60 :155-178.
Marya Schechtman. 2008. “Diversity in Unity: Practical Unity and Personal Boundaries”. Synthese. 162: 405-423.
Marya Schechtman. 2014. Staying Alive: Personal Identity, Practical Concerns, and the Unity of a Life. New York: Oxford University Press.
Max More. Natasha Vita-More. 2013. The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future. Wiley-Blackwell.
Miriam Eilers. Katrin Grüber. Christoph Rehmann-Sutter. eds. 2014. The Human Enhancement Debate and Disability: New Bodies for a Better Life. London: Palgrave Macmillan.
Nicholas Agar. 2010. Humanity's end: Why We Should Reject Radical Enhancement. MA: MIT Press.
Nicholas Agar. 2013. “Why it is possible to enhance and why doing so is wrong”. Journal of Medical Ethics. 39 (2) :67.
Nick Bostrom. 2003. “Human Genetic Enhancements: A Transhumanist Perspective”. Journal of Value Inquiry. 37 (4) :493-506.
Nick Bostrom. 2005. “A History of Transhumanist Thought”. Journal of Evolution & Technology. pp1-30.
Nick Bostrom. Anders Sandberg. 2009. “Cognitive Enhancement: Methods, Ethics, Regulatory Challenges.” Science and Engineering Ethics. 2009(15): 311-341.
Norah Campbell. Aidan O’Driscoll. Michael Saren. 2010. “The Posthuman: the End and the Beginning of the Human”. Journal of Consumer Behaviour. 9 (2) :86–101.
Pete Moore. 2008. Enhancing Me: The Hope and the Hype of Human Enhancement. Wiley.
Ray Kurzweil. 2006. “Reinventing Humanity The Future of Human-Machine Intelligence”. Futurist. 40 (2) :39-40,42-46.
Robert Rosenberger. Don Ihde. Peter-Paul Verbeek. eds. 2015. Postphenomenology and the Philosophy of Technology. Lexington Books.
Simone Bateman. Jean Gayon. Michela Marzano. eds. 2015. Inquiring into Human Enhancement: Interdisciplinary and International Perspectives. London: Palgrave Macmillan.
Stephen Lilley. 2013. Transhumanism and Society: The Social Debate over Human Enhancement. Springer.
Tamar Sharon. 2014. Human Nature in an Age of Biotechnology: The Case for Mediated Posthumanism. Springer.
ABSTRACT. The scientific realism debate has developed into a large-scale war, and in this paper, we first distinguish three lines of battle in order to situate our own contribution to the debate. (1) Within the realist camp, deployment realists, structural realists, and entity realists debate with one another regarding which parts of theories (working posits, structures, or posited entities) successfully represent unobservables. This is the ontological aspect of the debate. (2) Wholesalists/globalists and retailists/localists argue about whether the realism debate ought to be settled by arguments regarding our best theories in general (wholesale arguments) or by arguments regarding particular theories, claims, and/or entities (retail arguments). This is the methodological aspect of the debate. (3) Realists and anti-realists disagree about whether or not there is a criterion of reality according to which we can demonstrate that some parts of theories (usually the ‘successful’ parts, according to some notion of success) actually represent an unobservable reality. Realists argue that there is such a criterion while anti-realists argue that there is not. This is the epistemological aspect of the debate.
Methodologically, we adopt a retailist approach to the debate. Ontologically, we see the disputes among those in the realist camp as tracing back to a shared commitment to monism about criteria of reality paired with disagreement regarding which proposed criterion ought to serve as the single criterion of reality. And epistemologically, we adopt a pluralist view regarding criteria of reality. Our defense of this view consists of developing a theoretical framework that combines retailism with pluralism about criteria of reality, and illustrating that framework by showing how it applies to cases from the history of science.
Regarding the framework, we argue that a commitment to retailism leads naturally to a pluralist view of criteria of reality. Methodologically, retailists restrict their arguments to particular theories, claims, and/or entities. Their refusal to generalize the conclusions of such arguments to similar cases makes sense only if, epistemologically, retailists are open to the possibility that a single criterion of reality is not applicable to all cases. After all, if one and the same criterion is applicable to all cases, retailists would have a direct route to generalizing their conclusions to similar cases, which would make it difficult for them to maintain that such generalization is unwarranted.
Regarding the application of the framework, we consider three historical cases that are often discussed in the context of the realism debate: J. J. Thomson’s work on corpuscles and cathode rays, Jean Perrin’s work on atoms and Brownian motion, and Robert Millikan’s measurement of the charge of the electron. We consider various criteria of reality (for example, manipulation, experimental individuation, and measurement). We determine which criteria are applicable in each of the three cases. And we draw some conclusions regarding whether the work in question provides a sufficient reason for adopting a realist attitude towards the entities involved in these cases.
Why epistemic pluralism does not entail relativism
ABSTRACT. There is a widespread view according to which the denial that the conditions of knowledge are truth-evaluable inevitably leads to a form of epistemic pluralism that is both quietist and internally incoherent. It is quietist because it undermines the possibility of genuine epistemic disagreement. It is internally incoherent because it simultaneously denies the existence of universal knowledge claims and makes the universal claim that there is no such knowledge. The goal of this paper is to show that denying that the conditions of knowledge are truth-evaluable does not necessarily entail a commitment to a form of epistemic relativism that is both quietist and internally incoherent.
The paper begins, in section I, by considering Boghossian’s characterization of epistemic pluralism in Fear of Knowledge. (Boghossian 2006) According to Boghossian, the descriptive claim that there are different belief systems, combined with the denial that the conditions of knowledge are truth-evaluable, leads to a questionable form of epistemic pluralism, one which is relativist in outlook. I consider Boghossian’s account of epistemic pluralism because it seems to capture the widespread view that pluralism and relativism must go hand in hand if one denies that the conditions of knowledge have truth-values. Section II outlines a form of epistemic pluralism that, I argue, is unfairly described as relativistic. This form of non-relativistic pluralism arises not in response to the descriptive claim that there is a plurality of belief systems, but to the normative claim that explanation should be fit for purpose. Once pluralism is conceived in this light it no longer has the quietist overtones that tend to be characteristic of epistemic relativism. The distinction between the relativistic pluralism that is the target of Boghossian’s critique and non-relativistic pluralism is illustrated through a reconstruction of Collingwood’s account of absolute presuppositions in An Essay on Metaphysics, a highly neglected but important contribution to hinge epistemology. This reconstruction is offered as an illustration of how hinge epistemology could be construed to avoid two standard objections that are often raised against epistemic relativism, namely that it makes it impossible to criticise other cultures and that it is self-undermining. Section III considers these two standard objections and argues that they do not apply to the kind of epistemic pluralism that arises from the consideration that explanation must be fit for purpose and sensitive to the goals of inquiry.
The paper concludes by arguing that the decoupling of epistemic pluralism from epistemic relativism argued for in this paper demonstrates that invoking the threat of postmodern relativism to bolster the correspondence theory of truth amounts to a form of philosophical scaremongering.
Guillermo Samuel Tovar-Sánchez (Centro de Investigaciones Económicas, Administrativas y Sociales del Instituto Politécnico Nacional de México, Mexico) Luis Mauricio Rodríguez-Salazar (Centro de Investigaciones Económicas, Administrativas y Sociales del Instituto Politécnico Nacional de México, Mexico)
From the subject's natural logic to the scientist's logic in natural science: an epistemological reflection
ABSTRACT. Traditionally, epistemology has been conceptualized as a branch of philosophy, as the philosopher's reflection on scientific work for the construction of philosophical systems in which the subject's material and symbolic actions are entirely different from formal actions. Nevertheless, Piaget dedicated his life to analyzing the subject's natural logic from a psychogenetic perspective. He argued that for the subject's natural logic, as for the logic of the scientist in the natural sciences, logic is a matter of fact, whereas for the subject who does research on the science of logic, formalization is a matter of deduction with established rules. On the one hand, then, the natural logic of the subject and the logic of the scientist in the natural sciences involve norms that direct their actions according to the facts of the natural world, configuring structural representations that coordinate the outcomes of their actions on the natural world and constructing their own rules. On the other hand, logical rules are deduced independently of the subject's actions on the natural world. However, natural scientists are also subjects who structure their reality on the basis of their own psychogenetic natural logic before becoming scientists.
If we consider, as Piaget does, that "essences" or intuitions are inseparable from the facts, then it can be argued that there is a close link between form and fact. In this research, moreover, it is argued that imagination is a rational explanatory resource that mediates between intuitions and concepts. Consequently, the symbolic component plays an important role as a mediator between a fact and its formalization, which culminates in the configuration of action rules.
The problem is thus posed as follows: how is the link established between the factual issues and the deductive issues of the subject's norms? The objective of this paper is therefore to analyze the normative and symbolic logical components of the fact-deduction link and its implications for the experimental-formalization link, through the psychogenetic method.
To achieve this objective, the present work begins by setting out the differences between meta-scientific, para-scientific and scientific epistemologies, which establishes the theoretical concepts of the epistemology of imagination used to describe how the subject-object relationship is understood. The discussion then turns to the logic of the subject from a genetic approach, in order finally to propose an epistemological framework for the link between the experimental and deductive components in the construction of scientific knowledge.
A Bridge for Reasoning: logical consequence as normative
ABSTRACT. How do we reason? How should we reason? Intuitively, the question of how we conduct our everyday inferential practice is central to living in a certain community, as well as being central within the philosophy of logic.
In this essay, I am going to argue that the form of logical consequence accepted by a community dictates how an agent in that community should reason, by imposing a normative constraint on the agent’s reasoning. I will also argue that this normative constraint is an important part of the nature of logical consequence. I will take reasoning, after Harman, to be defined as ‘reasoned change in view ’. I will also be using an interpretationalist, model-theoretic account of consequence.
I will claim that the exact normative relation between a form of logical consequence and reasoning can be specified by a bridge principle. A bridge principle is a conditional which links a form of logical consequence to a normative claim about reasoning. For instance, MacFarlane proposes the following as possibilities for a bridge principle:
(Cr+) If A,B⊨C, then if you believe A and you believe B then you have reason to believe C
(Co+) If A,B⊨C, then if you believe A and you believe B then you ought to believe C
I will propose an adaptation of one of MacFarlane's bridge principles as the most plausible candidate for describing the normative relation between logical consequence and reasoning, and assess how it fares in response to certain objections and paradoxes. This adapted bridge principle will be called (Co+d):
(Co+d) If A,B⊨C [is accepted by your reasoning community], then if you believe A and you believe B, you ought to have a disposition to believe C
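Read formally (our gloss, not a formulation taken from MacFarlane), with B for belief, O for the relevant ought and D for a disposition, (Co+d) has the shape: if A, B ⊨ C holds according to the consequence relation accepted by the agent's reasoning community, then (BA ∧ BB) → O D(BC). The relativisation to the community's accepted consequence relation is what distinguishes the adapted principle from the originals (Cr+) and (Co+).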
The structure of the paper is as follows. First, in (S1) I outline what it is to say that logical consequence is normative and how this works within an interpretationalist account of model-theoretic consequence. I put forward what I consider to be MacFarlane’s two most promising bridge principles, Cr+ and Co+. Then in (S2) I will raise three definitive objections to these bridge principles. In (S3) as a response to these objections, I propose another more plausible bridge principle, Co+d, and assess how it responds to these objections. Lastly, in (S4) I further assess the bridge principle in relation to the paradox of the preface and the issue of priority through comparison to the bridge principle that MacFarlane considers to be the most successful (FBP).
I conclude that the normativity of logical consequence can be demonstrated through the use of an appropriate bridge principle, linking logical consequence to reasoning and that the most plausible candidate for this bridge principle is Co+d.
References:
Harman, Gilbert, 'A Plea for the Study of Reasoning', in Harman, Change in View (Cambridge, MA: MIT Press, 1986), p. 3.
MacFarlane, John, 'In What Sense (If Any) Is Logic Normative for Thought?' (draft, April 21, 2004).
ABSTRACT. The present paper attempts a critical assessment of several issues connected with a criterion of validity for imperative inferences. Imperatives are neither true nor false, but they have a value analogous to that of indicatives, namely compliance-value. Josh Parsons raises the issue of content-validity as a criterion of imperative inference: an argument is content-valid iff the contents of the premises jointly entail the content of the conclusion. This criterion has been illustrated in terms of possible worlds (Parsons introduces the term 'preposcription' here). This theory, however, raises certain problems which require discussion. The following are some of the reflections.
1. According to Parsons, indicatives and imperatives both have propositions as their contents. But according to speech act theory it is not the 'proposition' but the 'propositional act' (reference + predication) which is the common element that an indicative and an imperative may possibly share. An imperative sentence is better understood in connection with some action.
2. The insufficiency of the standard view with respect to content-validity has been shown in detail by Parsons. Some 'evil twins' of content-valid arguments are found to be intuitively invalid.
Example. 1
(A1) Attack at dawn if the weather is fine!
(A2) The weather is fine.
Therefore, (A3) Attack at dawn!
Example. 2 [Evil Twin]
(B1) Attack at dawn if the weather is fine!
(B2) Let the weather be fine!
Therefore, (B3) You attack at dawn.
Symbolic form of examples 1 and 2:
Argument A: I(p→q), A(p); therefore, I(q)
Argument B: I(p→q), A(p); therefore, I(q)
The symbolic form of example 2 is a matter of worry, as is evident from the following:
The weather is fine (the antecedent of the conditional proposition)
Let the weather be fine! (an independent proposition which is not the antecedent)
3. The concept of imperassertion includes the notions of belief and intention. Possible worlds are thought to be linked with a person's belief-intention set. But no inconsistency can be presupposed, i.e., there is no question of considering the imperassertion and belief of the person making a command together with the hearer's lacking the intention to carry out the command. It should be noted that, in the case of an argument having a conditional imperative proposition as a premise, the "success" of an imperassertion can only be presupposed as true.
The upshot of the whole discussion is a serious observation. In the case of a sentence such as "Attack at dawn if the weather is fine", compliance and non-compliance alone are not enough to project all the possible worlds; a further world is required to exhibit the possibility concerning the weather itself. This shows the inadequacy of Parsons's theory.
The concept of imperassertion found in Parsons's writing is praiseworthy in tying together logic, the theory of mind and the theory of action. Perhaps it may help to unearth further layers of the logic of imperatives.
References
1. Parsons, Josh (2013), "Command and Consequence", Philosophical Studies 164 (Springer), pp. 61-92.
2. Vranas, Peter B. M. (2008), "New foundations for imperative logic I: Logical connectives, consistency, and quantifiers", Noûs 42.
ABSTRACT. This paper focuses on Chisholm's paradox and offers to resolve some ambiguities through a comparative philosophical analysis of two logical frameworks in which the paradox can be formalized. In the standard system of deontic logic, Chisholm's paradox is one of the most interesting and challenging of all. The paradox, ever since its discovery, has been shown to affect many if not most deontic systems. Various approaches, such as Van Fraassen's dyadic deontic approach and Peter L. Mott's counterfactual conditional approach, were formulated to avoid inconsistency in the formal representation of Chisholm's set of sentences. The aim of this paper is to focus on the ambiguities in these approaches and to highlight a serious problem in Judith Wagner DeCew's attempt to resolve the paradox. Wagner DeCew argued that Mott's solution is unsatisfactory as it allows general factual detachment. Here, I shall argue that Wagner DeCew's own solution faces serious challenges when it comes to fulfilling minimal adequacy conditions, and that shifting the operator can only temporarily fix the problem. In this paper, I first discuss the approaches and then attempt to resolve the respective ambiguities in them.
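For readers unfamiliar with the puzzle, the standard formalisation of Chisholm's four sentences in standard deontic logic (given here only as background; the frameworks compared in the paper follow Mott and DeCew) is, schematically:
O(g), O(g → t), ¬g → O(¬t), ¬g,
where g stands for "Jones goes to help his neighbours" and t for "Jones tells them he is coming". From O(g) and O(g → t), the K-axiom yields O(t); from the last two formulas, modus ponens yields O(¬t); and the D-axiom ¬(O(t) ∧ O(¬t)) then delivers a contradiction, although the four natural-language sentences appear jointly consistent and logically independent.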
Reference
[1] Åqvist, Lennart. "Good samaritans, contrary-to-duty imperatives, and epistemic obligations." Noûs 1.4 (1967): 361-379.
[2] Chisholm, Roderick M. "Contrary-to-duty imperatives and deontic logic." Analysis 24.2 (1963): 33-36.
[3] DeCew, Judith Wagner. "Conditional obligation and counterfactuals." Journal of Philosophical Logic 10.1 (1981): 55-72.
[4] Lewis, David. "Counterfactuals. Malden." MA: Blackwell(1973).
[5] Mott, Peter L. "On Chisholm's paradox." Journal of Philosophical Logic 2.2 (1973): 197-211.
[6] Tomberlin, James E. "Contrary-to-duty imperatives and conditional obligation." Noûs (1981): 357-375.
[7] Van Fraassen, Bas C. "The logic of conditional obligation." Journal of philosophical logic 1.3-4 (1972): 417-438.
[8] Von Wright, Georg Henrik. "A new system of deontic logic." Deontic Logic: Introductory and Systematic Readings. Springer, Dordrecht, 1970. 105-120
15:45
Jiří Raclavský (Department of Philosophy, Masaryk University, Czechia)
Type Theory, Reducibility and Epistemic Paradoxes
ABSTRACT. Variants of \textit{simple type theory} (\textit{STT}) (equipped with typed $\lambda$-calculus as their language), pioneered by Church 1940, are among the leading logical systems employed within computer science (functional programming, proof assistants, ...; dependent or intuitionistic STTs are often used), and, being systems of higher-order logic, they are also employed for the post-Montaguean formalisation of language (type-theoretic, and even proof-theoretic, semantics).
Yet Church (e.g. 1976) repeatedly investigated a variant of Russell's (1908, 1910--13) \textit{ramified TT} (\textit{RTT}), which can be imagined as an STT in which some types are split into proper \textit{types} (i.e. sets of entities) of different \textit{orders}.
Recent RTTs include Tich\'{y} 1988 and Kamareddine, Laan and Nederpelt 2010 (computer scientists).
Church describes a motivation for such an enterprise:
\begin{quotation}
If, following early Russell, we hold that the object of assertion or belief is a proposition and then impose on propositions the strong conditions of identity which it requires, while at the same time undertaking to formulate a logic that will suffice for classical mathematics, we therefore find no alternative except for ramified type theory with axioms of reducibility.
(Church 1984, p. 521)
\end{quotation}
\noindent
One assumes here a proof-theoretic approach to belief attitudes (whose objects are `structured propositions', not possible-world propositions), not the model-theoretic one (for which see e.g. the handbook of epistemic logic by van Ditmarsch, Halpern, van der Hoek, Kooi 2015). (Within Tich\'{y}'s RTT, which I use in the talk, one has model-theoretic level, so we can compute semantic values of terms and also correctness of rules; see my 2018.)
My talk contributes to this research by investigating
RTT's capability to treat belief attitudes, while epistemic paradoxes provide the test case.
Here are some known facts in the background of the talk:
\begin{itemize}
\item[a.]
\textit{RTT introduces restrictions}: ramification splits the knowledge (belief, assertion, ...) operator into order variants, each of which has a restricted scope of applicability (e.g., $K^k$ only applies to $k$th-order propositions)
\item[b.]
\textit{reducibility releases restrictions}: in its general and well defensible form, the \textit{reducibility principle} (\textit{RP}) affirms the existence of lower-order `propositional functions' that are congruent to (impredicative) higher-order ones
\end{itemize}
Thanks to the restrictions, RTT solves many epistemic and semantic paradoxes, but I show that RP brings paradoxes back (and so an additional method for their solution has to be used).
Though this fact has been known for the case of the Liar and Grelling's paradoxes (Myhill 1979, Tich\'{y} 1988, Giaretta 1998, Martino 2001), it is novel for the case of epistemic paradoxes such as Church-Fitch's knowability paradox (\textit{FP}) or the Knower paradox (\textit{KP}).
As regards FP, the `typing approach' to FP (Williamson 2001, Linsky 2008) was criticised by various fallacious arguments (see my 2018-forthcoming for analysis); yet my RP-based version of FP seems to be a genuine one.
As regards (my RTT version of) KP, the situation is more alarming, for `typing' (Anderson 1983) has been its accepted solution. However, my analysis shows that it is the Knowledge principle that should be restricted (here I draw an analogue of Myhill's and Giaretta's response to the recurrence of the semantic paradoxes).
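For background, the Knower paradox in its standard Kaplan--Montague form (not yet the RTT version discussed above) runs as follows: let $K$ be a knowledge predicate satisfying factivity $K(\ulcorner\varphi\urcorner)\rightarrow\varphi$, let all instances of factivity be known, and let knowledge be closed under logical deduction. Diagonalisation yields a sentence $\kappa$ with $\kappa\leftrightarrow\neg K(\ulcorner\kappa\urcorner)$; assuming $K(\ulcorner\kappa\urcorner)$ gives $\kappa$ and hence $\neg K(\ulcorner\kappa\urcorner)$, so $\neg K(\ulcorner\kappa\urcorner)$ and thus $\kappa$ hold outright; since this reasoning is itself a deduction from known premises, closure gives $K(\ulcorner\kappa\urcorner)$, and we have a contradiction. Typing (and, in RTT, ordering) the knowledge operator blocks the construction, which is why the behaviour of the reducibility principle matters here.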
The philosophy and mathematical practice of Colin Maclaurin
ABSTRACT. Colin Maclaurin was an 18th-century mathematician and an important scientific figure of the Scottish Enlightenment. Of particular note are his contributions to calculus and physics. He promoted Newtonian physics in Scotland, and also defended Newton's calculus from Berkeley's infamous philosophical objections (1734). Newton's philosophy of science currently draws much historical-philosophical interest. To fill in the mathematical-scientific landscape, research has branched out to figures related to Newton, such as Galileo, Boyle, Du Châtelet, Clarke, etc. Since Maclaurin was the most successful contemporary defender of Newton's calculus, his philosophy is an important component of this ongoing research program.
In this talk I contrast two strands found in Maclaurin’s work. On the one hand, he provides a rather sophisticated philosophical defense of calculus – one that is very similar to current structuralist philosophies of mathematics. This emerges when Maclaurin directly addresses Berkeley’s philosophical questions regarding the nature of mathematics. One question is ontological: what are fluxions and infinitesimals? Another question is methodological: given that we seem not to know what fluxions or infinitesimals are, how can their use lead to truth; in other words, why should we trust the results of calculus?
Maclaurin’s answer to Berkeley, and defense of Newton, is that it doesn’t matter what fluxions are, or indeed whether or not they exist at all. What matters in mathematics are its relations, rather than its objects. That is, Maclaurin argues that mathematics is not about objects but relations between objects. Thus, what matters in accepting calculus is that its method yields consistent, clear results not that its singular terms refer to objects that we can understand or know. In deflecting attention away from basic concepts and entities (infinitesimals, fluxions, etc.) and towards the method and consistency of calculus, Maclaurin’s position is strikingly similar to that of early “methodological structuralists" such as Dedekind, Poincaré and others.
On the other hand, Maclaurin's mathematical work does not seem as modern as his philosophy. His notebooks intermingle entries on pure math, applied math, physics and applied physics, suggesting a fairly casual approach to mathematical justification. Indeed, Sageng argues that Maclaurin was "Baconian", an empiricist about mathematics, which seems in direct conflict with any "structuralist" hypothesis. One way to ease the tension between these two strands, and defend Maclaurin, would be to rely on the distinction between justification and discovery, arguing that his more concrete reasonings are limited to the context of discovery. But this is not so clearly the case. He seemed equally comfortable reasoning via equations, mathematical diagrams and physical methods, sometimes moving between the three "modes" in one problem. The different approaches thus seem to overlap in his work. There appears, then, to be a tension between his pragmatic approach to mathematical practice and his structuralist philosophy regarding the calculus.
17:15
Marlena Fila (Pedagogical University of Cracow, Poland)
On continuity in Bolzano’s 1817 Rein analytischer Beweis
ABSTRACT. 1. The ancient Greek characterization of continuity was encapsulated in the statement: a continuous thing is divisible into parts that are infinitely divisible (see [1]). It was applied equally to space, time and motion. In modern science, space and time are represented by real numbers, assuming R^n models space and (R,<) models time, while motion is represented by a function. Accordingly, continuity splits into continuous order (the continuity of the real numbers) and continuous function.
We argue that this split in the meaning of continuity started in [2] and resulted from a duality of geometric line and function.
2. In [2], Bolzano seeks to prove the Intermediate Value Theorem for polynomials (IVTp). Although he also proves the Intermediate Value Theorem for continuous functions (IVTf), it plays only the role of a lemma in the proof of IVTp. Bolzano attempts to prove Cauchy Completeness (CC) and then derives the greatest lower bound principle (GLB) from it. Leaving aside possible circularity or mistakes, we argue that his most insightful contributions are the very formulation of GLB and the definition of the continuity of a function (DfCF).
3. Modern calculus adopted the following part of Bolzano's proof. Since IVTp easily follows from IVTf, it is no longer considered to be a crucial proposition. However, IVTp does not depend on GLB, as it holds in real closed fields.
4. In the Preface, Bolzano criticizes "mechanical" and "geometrical" proofs of IVTp. The first is flawed by its appeal to "the concept of the continuity of a function with the inclusion of the concepts of time and motion". Therefore he presents his DfCF. The second is circular, since it relies on the "general truth, as a result of which every continuous function" has the IVT property.
We argue that there is a taken-for-granted assumption in Bolzano's argument that any line can be represented by some function. Marked by that duality of line and function, [2] belongs to the tradition (initiated by [3], developed in [4]) under which an analytic formula represents a function, a diagram represents a line, and the relation between function and line is guaranteed by some non-mathematical conditions. That duality was covered over by the arithmetisation of analysis, and then completed in the set-theoretic foundations of mathematics. Under the set-theoretic definition of function, a line is the graph of a function; when a line is identified in R^n, its continuity is related to GLB. On the other hand, the continuity of a function is related to DfCF and echoes its "mechanical" provenance.
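For reference, in modern paraphrase (not a quotation of [2]): DfCF requires that the difference f(x + ω) − f(x) can be made smaller in absolute value than any given quantity, provided ω is taken sufficiently small; IVTf states that if f is continuous on [a, b] with f(a) < 0 < f(b), then f(x) = 0 for some x between a and b; and GLB states that every non-empty set of real numbers that is bounded below has a greatest lower bound.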
References
1. Bekker I., Aristotelis Opera, Berlin 1931.
2. Bolzano B., Rein analytischer Beweis..., Gottlieb Haase, Prague 1817.
3. Descartes R., La Géométrie, Leiden 1637.
4. Euler L., Introductio in analysin infinitorum, Lausannae 1748.
ABSTRACT. Several philosophical issues in connection with computer simulations rely on the assumption that their results are trustworthy. Examples include the debate on the experimental role of computer simulations [Parker, 2009], the nature of computer data [Barberousse and Vorms, 2013], and the explanatory power of computer simulations [Durán, 2017]. The question that emerges in this context is, therefore, which methods could be implemented to increase the reliability of computer simulations and the trust placed in their results.
We claim that trust is warranted because computer simulations are deemed reliable processes; that is, there are methods exogenous to the computer simulation itself that confer trust on the results of the simulation. We call this view computational reliabilism, to signal its reliance on, yet independence from, process reliabilism. Concretely, while process reliabilism is externalist concerning justification, we argue that this kind of radical externalism is not possible for computer simulations. Instead, a subdued form of externalism is necessary, one which allows for at least one instance of the J→JJ principle. That is, one can trust the results of a possibly epistemically opaque simulation if one has a method at hand which ensures its reliability. We discuss four such reliability-establishing methods, namely verification and validation, robustness analysis for computer simulations, a history of (un)successful implementations, and the role of expert knowledge in simulations [Durán, Formanek 2018]. We conclude by arguing that the general skeptical challenge concerning the universal reliability of such methods is theoretically unsolvable but poses no threat to practicing science with computer simulations.
We illustrate these findings with examples from simulations in medicine. Of particular interest is the reliability of two simulations of bone breakage [Keaveny et al. 1994]. The importance of these simulations is that they are of at least two different types, namely a computerized image of a real hipbone and a computerized 3-D grid mathematical model. In both cases, the method implemented by the computer simulations is, in principle, to be trusted more than an empirical experiment. However, researchers need to find ways to warrant that trust in the results. We submit that computational reliabilism advances in the right direction, that is, it offers grounds for the reliability of computer simulations and the trustworthiness of their results.
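As a toy illustration of what one of these methods amounts to in the simplest possible setting (our sketch, not the bone simulations of Keaveny et al. and not the paper's own example), the following Python fragment performs a basic verification step, checking a forward-Euler simulation of exponential decay against the known analytic solution; validation would additionally compare simulation output with experimental data:

import math

def simulate_decay(x0, k, t_end, dt):
    """Forward-Euler integration of dx/dt = -k * x."""
    x, t = x0, 0.0
    while t < t_end:
        x += dt * (-k * x)
        t += dt
    return x

# Verification: the numerical error should shrink as the step size is refined.
exact = math.exp(-0.5 * 2.0)  # analytic solution x(t) = x0 * exp(-k*t) at t = 2, with x0 = 1
for dt in (0.1, 0.01, 0.001):
    approx = simulate_decay(1.0, 0.5, 2.0, dt)
    print(f"dt={dt}: |error| = {abs(approx - exact):.2e}")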
Barberousse, Anouk, and Marion Vorms. 2013. "Computer Simulations and Empirical Data." In the volume edited by Juan M. Durán and Eckhart Arnold. Cambridge Scholars Publishing.
Durán, Juan M. 2017. “Varying the Explanatory Span: Scientific Explanation for Computer Simulations.” International Studies in the Philosophy of Science 31 (1): 27–45.
Durán, Juan M., and Nico Formanek. 2018. “Grounds for Trust: Essential Epistemic Opacity and Computational Reliabilism.” Minds and Machines 28 (4): 645–66.
Keaveny, T M, E F Wachtel, X E Guo, and W C Hayes. 1994. “Mechanical Behavior of Damaged Trabecular Bone..” Journal of Biomechanics 27 (11): 1309–18.
Parker, Wendy S. 2009. "Does Matter Really Matter? Computer Simulations, Experiments, and Materiality." Synthese 169 (3): 483–96.
Truth-values for technical norms and evaluative judgements: a comparative analysis
ABSTRACT. The notion of technical or technological knowledge is philosophically interesting insofar as it differs from the forms of descriptive knowledge studied extensively in epistemology and the philosophy of science. Whatever the differences between various forms of such knowledge – say, knowledge of empirical facts and knowledge of laws or theories – the discussion seems overwhelmingly structured by the concepts of truth-values and truth conditions. For technical knowledge, in contrast, a wide variety of different forms has been proposed – including means-ends knowledge, procedural knowledge, tacit knowledge and know-how – which have in common that it is controversial whether they can be analysed through the concepts of truth-values and truth conditions. This is in particular the case for the more specific form of technical norms, which have the form 'If your goal is to realize A, you should perform action B', and which have been proposed by Ilkka Niiniluoto as a central constituent of technical or practical research as opposed to scientific research. Niiniluoto has claimed that technical norms have a truth-value, whereas this was earlier denied by Georg Henrik von Wright, who was one of the first philosophers to emphasize the significance of claims with this structure for all disciplines which deal with action. See Zwart, Franssen & Kroes (2018) for a recent discussion.
In this talk I aim to shed light on this controversy by placing it in a comparative analysis with the work on normative statements by Judith Jarvis Thomson (2008). What makes her work interesting for this comparative analysis is that, according to Thomson, there is no significant difference between instrumental and moral evaluative claims – say, 'This is a good person' and 'This is a good umbrella' – an approach quite similar to the one adopted earlier by von Wright. But like Niiniluoto, Thomson claims that such statements have truth-values, a claim she extensively defends in her book. Her defence hinges on an interpretation of evaluative judgements like 'This is a good K' on which the property 'a good K' can only be meaningfully understood as 'good as a K' or 'good qua K', and by extension 'good for what Ks are good for'. In addition, Thomson claims that all evaluative judgements commend something. When these two elements are combined, we have moved considerably in the direction of a judgement of the form 'If your goal is A, then you should perform action B'. Thus it may be worthwhile to see which of the arguments developed by Thomson, or ascribed to her opponents, can be transferred to the context of the philosophy of technology. The aim, finally, is to argue, much more systematically than has been done so far, for the relevance of work from core 'traditional' analytic philosophy to outstanding problems in the philosophy of technology and engineering.
J.J. Thomson (2008), Normativity, Chicago & La Salle, IL.
S. Zwart, M. Franssen & P. Kroes (2018), ‘Practical inference – a formal analysis’, in The future of engineering: philosophical foundations, ethical problems and application cases, A. Fritzsche & S.J. Oks, eds., Cham: Springer, pp. 33-52.
ABSTRACT. The influence of Thomas Kuhn’s views about science outside HPS and philosophy in general is enormous. It ranges from natural and social sciences to legal studies, the teaching of writing and literary theory.
In this paper we will address only a few aspects of this astonishingly wide influence, focusing on the reception of Kuhn’s views in the social sciences and literary theory. To our knowledge, the latter has not been discussed at all, and the former has been confined for the most part to social scientists’ attempts to make sense of the history and the scientific status of their disciplines in the light of Kuhn’s developmental model of the natural sciences. We argue that Kuhn’s deeper impact was the support he provided for “the interpretive turn” in the social sciences in the 1970s when his views were received by a number of influential social scientists as undermining “the Geistes- und Naturwissenschaften” divide and thus narrowing the gap between “hard” and “soft” sciences.
This “interpretive turn” made literary theory both more relevant to and more interested in the history and philosophy of science. While a number of its practitioners embraced and /or appropriated the key insights of Kuhn’s views to propose that literary meanings are determined by interpretive paradigms and communities, others used the same insights to characterize both social phenomena and knowledge about them as textual and attempted to bring issues of literary interpretation to bear upon the epistemological issues in the social sciences. Through this double movement of importing Kuhn into literary studies while exporting the problems of textual interpretation to the social sciences, literary theory might be said to have played a particularly significant role in the erosion of strict disciplinary boundaries, which is an important aspect of Kuhn’s legacy.
One major consequence of the interaction between the two fields was the politicization of epistemological positions about knowledge, truth and validity in interpretation. Those who embraced the view that there is no one correct interpretation, no ahistorical, paradigm-independent knowledge and truth argued that universalism is repressive while relativism is liberating. We end by considering the tenability and usefulness of this mode of politicization and its relevance today.
As Mario Bunge celebrates his 100th birthday, this symposium will appraise four different aspects of his life-long contribution to philosophy. The five individual presentations are: Mario Bunge: A Pioneer of the New Philosophy of Science; Mario Bunge’s Scientific Approach to Realism; Is Simplicity a Myth? Mach and Bunge on the Principle of Parsimony; Quantifiers and Conceptual Existence; Bunge and the Enlightenment Tradition in Education.
Bunge was born in Argentina on 21st September 1919. He has held chairs in physics and in philosophy at universities in Argentina, the USA, and since 1966 a philosophy chair at McGill University. He has published 70 books (many with revised editions) and 540 articles; with many translated into one or other of twelve languages.
Bunge has made substantial research contributions to an unequalled range of fields: physics, philosophy of physics, metaphysics, methodology and philosophy of science, philosophy of mathematics, logic, philosophy of psychology, philosophy of social science, philosophy of biology, philosophy of technology, moral philosophy, social and political philosophy, management theory, medical philosophy, linguistics, criminology, legal philosophy, and education.
Bunge's remarkable corpus of scientific and philosophical writing is not inert; it has had significant disciplinary, cultural and social impact. In 1989 the American Journal of Physics asked its readers to vote for their favourite papers from the journal in the sixty years since its founding in 1933. Bunge's 1956 'Survey of the Interpretations of Quantum Mechanics' was among the 20 top-voted papers. In 1993, the journal repeated the exercise; this time Bunge's 1966 paper 'Mach's Critique of Newtonian Mechanics' joined his first paper in the top 20.
Beyond breadth, Bunge’s work is noteworthy for its coherence and systemicity. Through to the mid twentieth-century most significant Western philosophers were systematic philosophers. But in the past half-century and more, the pursuit of systemic philosophy, ‘big pictures’, ‘grand narratives’ or even cross-disciplinary understanding has considerably waned. Bunge has defied this trend. His philosophical system was laid out in detail in his monumental eight-volume Treatise on Basic Philosophy (1974-1989). Individual volumes were devoted to Semantics, Ontology, Epistemology, Systemism, Philosophy of Science, and Ethics. His Political Philosophy: Fact, Fiction and Vision (2009) was originally planned as its ninth volume.
Bunge has applied his systems approach to issues in logic, mathematics, physics, biology, psychology, social science, technology, medicine, legal studies, economics, and science policy.
Bunge's life-long commitment to Enlightenment-informed, socially-engaged, systemic philosophy is manifest in his being asked by the Academia Argentina de Ciencias Exactas, Físicas y Naturales to draft its response to the contemporary crisis of anthropogenic global warming. Bunge authored the Manifesto, which was signed by numerous international associations. Guided by his own systemism, he wrote: since climate is not regional but global, all the measures envisaged to control it should be systemic rather than sectoral, and they should alter the causes at play – mechanisms and inputs – rather than their effects. …
Clearly Bunge is one of the most accomplished, informed, wide-ranging philosophers of the modern age. This symposium, held in the year that he, hopefully, celebrates his 100th birthday, is an opportunity for the international philosophical community to appraise his contribution to the discipline.
Alberto Cordero (CUNY (Graduate Center & Queens College), United States)
Four Realist Theses of Mario Bunge
ABSTRACT. 4.1. The Ontological Thesis
Bunge upholds the existence of a world independent of the mind, external to our thinking and representations (Ontological Thesis). His supporting reasoning on this matter draws from both general considerations as well as some of the special sciences.
4.2. The Epistemological Thesis
Bunge complements the previous proposal with an epistemological thesis made up of three major claims:
4.2a. It is possible to know the external world and describe it at least to a certain extent. Through experience, reason, imagination, and criticism, we can access some truths about the outside world and ourselves.
4.2b. While the knowledge we thus acquire often goes beyond the reach of the human senses, it is multiply problematic. In particular, the resulting knowledge is indirect, abstract, incomplete, and fallible.
4.2c. Notwithstanding its imperfections, our knowledge can be improved. Bunge accepts that theories are typically wrong as total, unqualified proposals. In his opinion, however, history shows with equal force that successful scientific theories are not entirely false, and also that they can be improved.
4.3. The Semantic Thesis
This component of Bunge’s realism is framed by the previously stated ontological and epistemological theses. It comprises four interrelated ideas:
4.3a. Some propositions refer to facts (as opposed to only ideas).
4.3b. We can discern the proper ("legitimate") referents of a scientific theory by identifying its fundamental predicates and examining their conceptual connections in order to determine the role those predicates play in the laws of the theory.
4.3c. Some factual propositions are approximately true.
4.3d. Any advance towards the truth is susceptible of improvement.
4.4. Methodological Thesis
The fourth facet of Bunge's realism I am highlighting focuses on methodology and comprises at least three proposals: (a) Methodological scientism, (b) Bunge’s version of the requirement that theories must allow for empirical testing, and (c) a mechanistic agenda for scientific explanation.
4.4a. Scientism asserts that the general methods developed by science to acquire knowledge provide the most effective available exploration strategy at our disposal. The methods of science—whose main use is given in the development and evaluation of theories—use reason, experience, and imagination.
4.4b. A theoretical proposal should lead to distinctive predictions, and it should be possible to subject at least some of those predictions to demanding empirical corroboration.
4.4c. According to Bunge, we cannot be satisfied with merely phenomenological hypotheses of the “black box” type (i.e., structures that do not go beyond articulating correlations between observable phenomena). Good methodology, Bunge insists, presses for further exploration, prompting us to search the world for regularities at deeper levels that provide illuminating explanations of the discovered regularities—ideally “mechanical” ones.
The realism project that Bunge articulates seems, therefore, to have some major issues still pending. Meanwhile, however, I think there is no doubt that Mario Bunge will continue to make valuable contributions in this and other areas of the realist project, responding with honesty and clarity to the enigmas posed by the most intellectually challenging fundamental theories of our time.
Mario Bunge: A Pioneer of the New Philosophy of Science
ABSTRACT. Mario Bunge anticipated many of the ‘post-positivist’ arguments of Hanson, Kuhn, Feyerabend and the Edinburgh Strong Programme that were used to promote a skeptical view of science, a view that became entrenched in the final decades of the twentieth century giving rise to the ‘New Philosophy of Science’ (Brown 1977). But Bunge used the arguments to defend the veracity and value of scientific knowledge. Several years before the irruption of the new philosophy of science, Bunge was developing his own view in a place far away from the most important centers of study of philosophy of science. The result of this work was the publication in 1959 of the first edition of Causality (Bunge 1959). This was a striking event because it was not common for an Argentine philosopher to write a book in English and have it published by a prestigious publisher. But the most important thing is the novelty of the ideas expressed in that book.
At this point it is worthwhile to re-evaluate the image of Bunge. Many believe that Bunge is a physicist who has become a philosopher defending a positivist doctrine. The reality is quite the opposite. According to his own statements, from a young age he rejected the subjective interpretations of quantum mechanics and devoted himself to the study of physics in order to obtain the necessary elements to support his position. Bunge defended scientific realism and he argued against naive empiricism as well as against more sophisticated versions that centre knowledge on the activity of the subject.
Quantum mechanics also favoured the questioning of the concept of causality and the validity of determinism. Bunge then undertakes a double task: separating science from a narrow empiricism and reformulating causality and determinism in an adequate way. He proposes to differentiate causation from the causal principle that guides our beliefs and from the formulation of causal laws. He also separates causation from determinism, making room for non-causal determinisms.
Bunge's position regarding causality explains both his distancing from the interpretation of quantum mechanics provided by some theorists as well as from empiricist and Kantian conceptions that understood causality as a projection of conditions of the cognizing subject. His criticism of empiricism is based on considerations that advance ideas later exploited by anti-realists like Kuhn. However, Bunge's arguments are aimed at rescuing, along with plausible versions of causality and determinism, a realist view of science.
One of the merits of Bunge's Causality book is the prominence that he gave early on to ideas that are usually attributed to the "new philosophy of science": the thesis of the theory-ladenness of observation; the conviction that no scientific statement has meaning outside a theoretical system; and the belief that scientific development follows a pattern similar to that of biological evolution, so that scientific progress does not represent a progressive reduction but a progressive differentiation. According to Bunge, this differentiation, pace the "new philosophers of science", means a genuine cognitive improvement rather than a mere change of beliefs.
ABSTRACT. Simulations of scientific inquiry are largely based on idealized communication structures. Agent networks structured as a wheel, in comparison to completely connected graphs, dominate the literature, e.g. (Grim 2009, Zollman 2010). These structures do not necessarily resemble the typical communication between scientists in different fields. The data on team structures from the high energy physics laboratory Fermilab show that scientists are usually organized in several groups. Furthermore, the work on Fermilab project efficiencies by Perović et al. (2016) allows us to identify the group structures which are usually more efficient than others. They showed that smaller groups outperform large ones. We developed models resembling the team structures in high energy physics analyzed by Perović et al. (2016) and tested whether the efficiency results coincide.
These models have very interesting properties. First, they show how beliefs can be strongly self-enforced in subgroups, even when most scientists have opposite views. This is particularly evident when we consider that scientists in the subgroups interact more with each other than with scientists from other groups. Our model agrees with the empirical results from Perović et al. (2016): groups divided into as few teams as possible outperform groups divided into a larger number of teams.
Team structures in science are field-dependent. In biology, laboratories are typically structured hierarchically. We developed models simulating three different management styles, from groups with one leader controlling everybody to groups with two levels of hierarchy. These structures and their effects on group performance were brought up and discussed during qualitative interviews we performed with biologists. We compared a group in which only the professor communicates with twenty junior scientists with a group in which the professor communicates with postdoctoral researchers who in turn supervise PhD students. As a final group structure, we studied how communication develops when the group leaders additionally communicate among each other. We also included time constraints: strongly connected nodes communicate less with their partners than weakly connected ones. E.g. a professor might be connected with 20 PhD students, but will not be able to talk to every single one all the time.
When we abstract from time constraints, our simulations indicate that centralized groups reach the conclusion faster than decentralized networks. However, when we consider that professors have limited time for communication, we can see that groups with additional levels of hierarchy perform much better than the centralized group. In addition, the group structure in which the scientists exchange information at the postdoctoral researcher level outperforms the structures in which this interaction is lacking. The results highlight that group leaders should communicate with each other. Also, they should not have too many students. Finally, large groups should be decentralized.
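For concreteness, the following minimal Python sketch (our illustration in the spirit of Zollman 2010, with made-up parameter values, not the models or Fermilab data analysed in the paper) contrasts a completely connected group with a star-shaped, leader-centred group: agents test one of two methods, share their results with their network neighbours, and update Beta beliefs about which method is better:

import random

def run(network, n_agents=11, p_old=0.5, p_new=0.55, trials=10, steps=200):
    # Each agent holds Beta(alpha, beta) beliefs about the new method's success rate,
    # starting from a randomly optimistic or pessimistic prior.
    beliefs = [[random.uniform(1, 4), random.uniform(1, 4)] for _ in range(n_agents)]
    for _ in range(steps):
        results = []
        for i in range(n_agents):
            a, b = beliefs[i]
            # Agents test the new method only if they expect it to beat the old one.
            if a / (a + b) > p_old:
                successes = sum(random.random() < p_new for _ in range(trials))
                results.append((i, successes))
        for i, successes in results:
            for j in network[i] | {i}:  # share outcomes with neighbours and oneself
                beliefs[j][0] += successes
                beliefs[j][1] += trials - successes
    # Fraction of agents that end up favouring the (actually better) new method.
    return sum(a / (a + b) > p_old for a, b in beliefs) / n_agents

n = 11
complete = {i: set(range(n)) - {i} for i in range(n)}          # everyone talks to everyone
star = {0: set(range(1, n)), **{i: {0} for i in range(1, n)}}  # one leader talks to all
print("complete:", run(complete), "star:", run(star))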
References
Grim, P. (2009), Threshold phenomena in epistemic networks, in AAAI Fall Symposium: Complex Adaptive Systems and the Threshold Effect, 53-60.
Perović, S., Radovanović, S., Sikimić, V. & Berber, A. (2016), Optimal research team composition: data envelopment analysis of fermilab experiments, Scientometrics 108(1), 83-111.
Zollman, K. J. S. (2010), The epistemic benefit of transient diversity, Erkenntnis 72(1), 17-35.
ABSTRACT. Our aim is to develop a framework theory of mathematical proving which is not based on the traditional concepts of mathematical fact and truth, but on the concept of proof-event, or proving, introduced by Goguen [2001]. Accordingly, proof is no longer understood as a static, purely syntactic object, but as a social process that takes place at a given location and time and involves a public presentation of a purported solution to a particular problem before an appropriate mathematical community.
Sequences of proof-events can be described as the activity of a multi-agent system evolving in time. The agents of the system may enact different roles; the fundamental roles are those of the prover (which might be a human, a machine, or a combination of the two (hybrid proving)) and the interpreter (generally a human (a person or a group of experts), a machine (or a group of machines), or a combination of them). These agents interact with each other at various levels that form an ascending hierarchy: communication, understanding, interpretation, validation.
Different agents may exhibit different capacities (expertise, virtuosity, skills, etc.) during a proving activity, which results in a kind of increased collective capacity directed towards the same shared goal and in enhanced efficiency in achieving that goal (solving a posed problem).
Proof-events are problem-centered spatio-temporal processes; thus, they have a history and form sequences of proof-events evolving in time. Agents might join (or abandon) a proof-event in a sequence of proof-events at a definite time. Accordingly, the system of agents may vary over time.
We attempt to model certain temporal aspects of proof-events using the language of the calculus of events of the Robert Kowalski type [Kowalski, Sergot 1986]. Using this language, we can talk about proof-events and their sequences evolving in time. The semantics of proof-events follows a logic that can be expressed in terms of Kolmogorov's calculus of problems [Kolmogorov 1932], initially used for the explication of intuitionistic logic [Stefaneas, Vandoulakis 2015].
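As background, a core axiom of the event calculus in its now-standard simplified form (a standard formulation, not the authors' own axiomatisation of proof-events) has the shape:
HoldsAt(f, t) ← Happens(e, t1) ∧ Initiates(e, f, t1) ∧ t1 < t ∧ ¬Clipped(t1, f, t),
where Clipped(t1, f, t2) holds iff some event occurring between t1 and t2 terminates the fluent f. On such a reading one might, for instance, let a proof-event in which a prover presents a purported solution initiate the fluent "the problem has a candidate solution", which then holds until a later validation or refutation event affects it.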
References
Goguen, Joseph, (2001), “What is a proof”, http://cseweb.ucsd.edu/~goguen/papers/proof.html.
Kolmogorov, Andrei N. 1932. „Zur Deutung der intuitionistischen Logik“, Mathematische Zeitschrift 35, pp. 58–65. English translation in V.M. Tikhomirov (ed.) Selected Works of A.N. Kolmogorov. Vol. I: Mathematics and Mechanics, 151-158. Kluwer, Dordrecht, 1991.
Kowalski, Robert and Sergot Marek. 1986 “A Logic-Based Calculus of Events”, New Generation Computing 4, pp. 67–95.
Stefaneas, P. & Vandoulakis, I. (2015), “On Mathematical Proving”. Computational Creativity, Concept Invention, and General Intelligence Issue. Journal of General AI, 6(1): 130-149.
Symplectic Battlefronts. Phase Space Arguments for (and against) the Physics of Computation
ABSTRACT. In this talk, I will defend the idea that we can exploit phase-geometrical resources, instead of physically meaningless information-theoretical concepts, to gain insight into the so-called physics of computation (encompassing quantum and statistical analyses of informational processes), which for many authors is "not a science at all" (Norton 2005). By assessing the phase-space arguments in the literature defending or attacking the physics of computation, it can be concluded that the possibility of a Maxwell's demon (namely, that a system's entropy can be diminished by manipulating its constituent particles) and Landauer's principle (i.e., that erasing information entails an increase of entropy) depend ultimately on "blending" (Hemmo and Shenker 2010) and "squeezing" volume-preserving transformations of macrostatistical phase volumes, respectively. Notice that within Boltzmannian statistical mechanics the phase volume of a macrostate corresponds to the thermodynamical entropy of that macrostate.
Given this "Macrostatistical Representational Underdetermination", wherein the validity of either Maxwell's demon or Landauer's principle depends not on how the world is but rather on the choice of phase space partition or macrostatistical settings, I posit a threefold criterion to select the Best Representational System for the statistical mechanics of computation: namely, the one that can infer the maximal quantity of microstatistical deterministic data (individual particles' positions, momenta, etc.) at a minimum of entropic cost for a given probabilistic fitness. I will suggest that the macrostatistical system having the most optimal combination of (i) entropic efficiency, (ii) inferential power and (iii) dynamical fit is the one mechanically compatible not with Maxwell's demon (as defended by Albert 2000) but rather with Landauer's principle.
Additionally, I will defend the view that the received understanding of Landauer's principle, mostly inherited from Bennett (1982), must be reconsidered (i) by avoiding the physically insignificant information-theoretical approach, (ii) without relying on the Szilard-Bennett method (focused on toy-model cases), and (iii) on the basis of a rigorous phase-spatial framework. In this line, I will follow both Norton's fluctuation-centric position concerning Maxwell's demon exorcisms (Norton 2018) and Ladyman's naturalistic modal-epistemological reinterpretation of Landauer's principle (Ladyman 2018). Under these strong assumptions, I provide a simple but robust phase-spatial argument for a reinterpreted Landauer's principle. I will then conclude that, in spite of not being a universal principle of physics, Landauer's principle can tell us something physically meaningful about how much microstatistical dynamical information can be extracted from a particular macrostatistical apparatus, without any use of exotic information-theoretical concepts.
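For the quantitative background assumed above (our gloss, not the paper's own formulation): the Boltzmann entropy of a macrostate M is S_B(M) = k_B ln |Γ_M|, where |Γ_M| is the Liouville (phase-space) volume of the region of microstates compatible with M; Landauer's principle then asserts that logically erasing one bit, i.e. mapping two input macrostates onto a single output macrostate, must be accompanied by the dissipation of at least k_B T ln 2 of heat into the environment, compensating the contraction of the system's accessible phase volume.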
Digital determination and the search for common ground
ABSTRACT. When discussing possible solutions to the problem of bridging the communicative gap between academic communities, we should consider the contemporary context and, in particular, the digital transformation of social ontology. New technological paradigms give rise to a digital world in which digital determination affects all kinds of interactions, leading to the computerization of life itself. "Digital optimists" view the process of digitalization as a tool to improve all kinds of scientific interaction. R. Collins sees "social networks" as a kind of referee in the process of accepting innovations, capable of demonstrating a certain level of resistance. "Digital pessimists" emphasize that digital determination gives rise to such phenomena as "content viewers" and "the lease of knowledge", which cause cognitive deformations. Content viewers scan information in a superficial way, without translating it into personal knowledge. Because «the man of the digital era» seeks to find and adopt ready-made information resources, copyright has lost its value, which is a disturbing symptom for scientific cohorts.
However, for the academic community, the principle of “Publish or Perish” is still important. Plagiarism, compilation, aggressive scientific discourse is unacceptable. References as an "academic component of science" are absolutely essential. References are viewed as an indicator of authorship and reliability of scientific discoveries, as well as scientists’ social responsibility. The level of academic qualifications is essential for assessing the development of science. At the same time, disciplinary cohorts have always been self-sufficient and spoke different languages. This hindered the understanding of scientific discoveries in terms of human culture, but had its roots in the algorithm of academic education.
Today, the methodological strategy for bridging the gap between academic communities is associated with convergence. As the basis for convergence, the paradigm of environmentalism, accompanied by an ideology of conscious regulation, is proposed. Attention is also drawn to the synergistic approach and to thematic analysis as effective convergence strategies.
In our opinion, availability of specialized disciplinary knowledge can be ensured by an open scientific narrative that enables one to speak about the situation in science, about discoveries and prospects in plain language. The scientific narrative as a kind of “mapping science” is aimed at overcoming the “semantic trap” of a complex disciplinary language. It should be emphasized that the scientific narrative contributes to creating a scientific worldview. Its advantages are due to the fact that scientific worldview includes the elements of methodological and logical analysis, "smart techniques" of conviction and expert decisions.
All of the above brings us to the following conclusions. Firstly, in the context of digitalization the narrative acts as a form of metascientific knowledge aimed at integrating and popularizing the achievements of the academic community. Secondly, success depends on the scientist’s personality and on his or her professional and humanitarian culture. Thirdly, the narrative, as the ground for bridging the gap between academic communities, is needed for coordinating scientific activities in complex situations in the digital era.
Giulia Battilotti (Department of Mathematics, University of Padua, Italy) Milos Borozan (Department of Health Sciences, University of Florence, Montenegro) Rosapia Lauro Grotto (Department of Health Sciences, University of Florence, Italy)
A discussion of bi-logic and Freud's representation theory in formal logic
ABSTRACT. Matte Blanco introduced Bi-logic, a theory describing the symmetric and the bivalent mode, in order to make explicit the logical features of the Freudian primary and secondary processes respectively. It also allows for a comprehensive description of the properties of conscious and unconscious mental representations.
It is possible to interpret Bi-logic in predicate logic. The idea is that variables and closed terms of a first-order language allow different readings of the domain of predicates, which correspond to the symmetric and the bivalent mode respectively. Furthermore, we recall Freud’s first characterization of representation and its manifestations: the “open” one, that is, the thing-presentation, versus the “closed” one, namely the word-presentation. The interpretation in predicate logic allows for a clear illustration of the subsequent theoretical grounding of the primary and secondary processes on the two aforementioned types of representation.
Since, formally, our interpretation can be conceived as a rethinking of the notion of term, logical formalisations aiming to overcome the limitations given by that notion could be significant for further analysis. Two important proposals in this sense are Gödel’s modal system S4 and Girard’s linear logic. In our view, the modality of S4 allows one to convert the abstract element contained in the representation into a normative element, allowing for a logical reinterpretation of the notion of Super-ego; linear logic, with its modalities, would allow one to describe some of the features of unconscious thinking (such as displacement). As a further development, the features of linear logic could offer the opportunity to consider the quantitative aspects of drives.
References
[BBL] Battilotti, G., Borozan, M., Lauro Grotto, R. Reading Bi-logic in first order language, submitted.
[Fr] Freud, S. Zur Auffassung der Aphasien. Eine kritische Studie, Franz Deuticke, Leipzig und Wien, 1891; Italian translation by L. Longato, L’interpretazione delle afasie, Sugarco, Milano, 1989.
[Go] Gödel, K. in Feferman, S., Kleene, S., Moore, G., Solovay, R. and van Heijenoort, J. (eds.) (1986), Collected Works I: Publications 1929-1936, Oxford University Press, Oxford.
[MB] Matte Blanco, I. The unconscious as infinite sets, Duckworth, London, 1975.
17:15
Chi-Hsiang Chen (Institute of Philosophy of Mind and Cognition, National Yang Ming University, Taiwan)
Using Repertoire to Understand Psychotherapy
ABSTRACT. Psychotherapy is a multidisciplinary field involving psychiatrists, clinical psychologists, and mental health counselors. It is debatable whether psychotherapy is best considered part of the medical sciences, partly because some clinical psychologists and mental health counselors are not trained as medical doctors or scientists. This paper aims to show that, if we use the notion of repertoire to characterize and understand the psychotherapy field, it is not worthwhile to ask whether or not psychotherapy is part of the medical sciences. Ankeny and Leonelli (2016) propose to use the notion of repertoire to characterize the performative, social, financial, and organizational conditions that enable research communities to organize themselves and conduct their professional activities. They originally apply the notion of repertoire to analyze large-scale and multidisciplinary projects in the biological sciences. I aim to apply their notion to the psychotherapy field in order to reveal the performative, social, and cultural conditions that enable the psychiatrists, clinical psychologists, and mental health counselors in Taiwan to work together as a professional community. I will use a case study to demonstrate how the notion of repertoire can be applied to reveal these conditions. The case study concerns how the practice of family therapy, which originated in North America, has been transferred to and adopted by psychotherapists in Taiwan. Based on the case study, I aim to establish that it is more worthwhile to focus on analyzing the performative, social, and cultural conditions that enable Taiwanese psychotherapists to acquire the relevant professional skills and to improve them by performing both clinical research and counseling practice. This kind of analysis tracks how professional knowledge in a given community is transferred, replicated, and improved. It captures the content of knowledge and the processes by which a given community produces, maintains, and improves its knowledge, which is what is epistemologically valuable to a given professional community. Thus, instead of debating whether or not psychotherapy is part of the medical sciences, I propose that it is better to use the notion of repertoire to analyze the relevant conditions that are epistemologically valuable.
Reference:
Ankeny, Rachel A. and Sabina Leonelli. “Repertoires: A post-Kuhnian perspective on scientific change and collaborative research.” Studies in History and Philosophy of Science, 2016.
ABSTRACT. In an early 2018 paper, Hasok Chang argued that pluralism is fully compatible with scientific realism, contrary to a common, widespread impression. In this way, Chang joins Nancy Cartwright (1999), John Dupré (1993) and Michael Lynch (1998) in defending a more robust type of scientific pluralism – a metaphysical pluralism. It claims that a pluralist stance is the adequate attitude not only towards the plurality of scientific theories and their multidimensional empirical success, but also towards reality itself: metaphysical pluralists hold that reality is not unique but rather plural, rich in properties and things (Cartwright 1999: 1), and that science may have literal knowledge of such plurality. They claim that one can grant “the reality of phlogiston and positrons in the same sense as the reality of ‘medium-sized dry goods’” (Chang 2018: 181). To make sense of such claims, metaphysical pluralists deny that science has epistemic access to Reality in itself (the Kantian noumenon). They rely on a Kantian correlational structure (Lynch 1998: ch. 1, Chang 2018: 181-182) in which cognitive agents play an active role in knowledge: ontological claims are always relative to a conceptual framework that constitutes the knowable reality. Metaphysical pluralists claim that their stance is better than scientific monism in accounting for scientific practice and for what the world is like.
In this paper, we shall argue that metaphysical pluralism falls quite short of its intended aims. We shall claim that metaphysical pluralism is either a sort of scientific skepticism or is at risk of trivialization. Firstly, we shall show that scientific monism and metaphysical pluralism are not even competing, to the extent that they speak of reality differently: the metaphysical pluralist is a pluralist with respect to a deflated, constituted reality, though she is forced to accept, as the scientific monist does, that a unique, unknowable Reality must exist (for she supports a somewhat Kantian structure). The scientific monist likewise claims that Reality is unique, but she claims that such a Reality is knowable. So they are not metaphysically competing over whether reality is unique or plural (‘plural’ and ‘unique’ are predicates of different types of realities), but over whether Reality is knowable or not. To hold that Reality in itself is unknowable is a skeptical thesis, and therefore an epistemic thesis, not a metaphysical one. Secondly, we will show that in order for metaphysical pluralism to be non-trivial, it must be able to draw a sharp distinction between “having a different concept of x” and “having a different belief about x”. Since the metaphysical pluralist cannot draw this distinction, her position is at risk of trivialization.
References
Cartwright, N. (1999). The Dappled World: A Study of the Boundaries of Science, Cambridge: Cambridge University Press.
Chang, H. (2018). “Is pluralism compatible with scientific realism”, in Routledge Handbook of Scientific Realism (Juha Saatsi Ed.), New York: Routledge, pp. 176-186.
Dupré, J. (1993). The Disorder of Things: Metaphysical Foundations of the Disunity of Science. Cambridge, MA: Harvard University Press.
Lynch, M. (1998). Truth in context. Cambridge: The MIT Press.
ABSTRACT. Perspectivism has always been an attractive epistemological position, and it has become a very appealing option in the philosophy of science. Two recent perspectivist approaches to science are those of Nancy Cartwright and Ronald Giere. However, perspectivism faces some metaphysical problems that are very difficult to deal with. One problem is whether all of reality could consist of a number of perspectives. Another is how to understand the subjects that adopt the perspectives: are they only contents of other perspectives, and can that be repeated indefinitely? In both problems we face a kind of ontological version of the old trilemma of Agrippa: either circularity, or regression, or some sort of non-perspectival foundation.
In our contribution, 1) we will present some important results from recent analyses of the notions of points of view and perspectives, and 2) we will discuss the two metaphysical problems introduced above. In order to face those problems, we propose a strategy different from the options involved in Agrippa’s trilemma. The strategy consists in understanding perspectives as a number of lanterns or torches illuminating certain areas while we move. This model is really powerful. If perspectives are understood in that way, then it is easy to reject the possibility of a completely circular construction of reality as a world of perspectives, as well as the possibility of a completely circular construction of the subjects adopting them. Also, we can reject the strategy of regression on the grounds that only short series of perspectives on other perspectives can in fact be carried out. Finally, we can reject non-perspectival foundations. In particular, we can reject the transcendentalist Kantian foundationalist picture of a world entirely configured or constructed by the subjects having access to it.
If perspectives are understood according to that model, then we arrive at a metaphysical conception of perspectivism as a kind of never-complete applied ontology. However, this would not be a limitation or defect. That perspectivism as ontology can only make sense as a sort of applied ontology of some specific domains would be a direct consequence of the real ways we have access to the world. In the context of such a perspectivism, we will defend the plausibility of a non-transcendentalist interpretation of Davidson’s “triangulation strategy”. Both the notion of an objective world and the notion of a subject having some contents in perspective about that world can be understood as the result of particular and concrete dynamics between at least two subjects in discursive interaction and a certain entity that is the focus of the perspectives of those subjects.
According to the real ways we have access to the world, the metaphysics of perspectivism, in particular of a scientific perspectivism, has to be closely linked to the idea of an ontology always applied to specific domains.
Cartwright, N. (1999) The Dappled World, Cambridge Univ. Press.
Davidson, D. (2001) Subjective, Intersubjective, Objective, Oxford, Clarendon Press.
Giere, R. (2010) Scientific Perspectivism, Chicago, Univ. of Chicago Press.
ABSTRACT. Recently, one of us presented a paper on the history of “algorithmic discovery” at an academic conference. As we intend the term, algorithmic discovery is the production of novel and plausible empirical generalizations by means of an effective procedure, a method that is explicitly represented and executed in finitely many steps. In other words, we understand it to be discovery by computable algorithm. An anonymous reviewer for the conference saw things differently, helpfully explaining that “[a]nother, more common name for algorithmic discovery would be heuristics.” This comment prompted us to investigate further to see what differences (if any) there are between heuristics and algorithmic discovery.
The aim of this paper is to compare and contrast heuristics with algorithmic discovery and to explore the consequences of the distinction for their applications in science and other areas. To achieve the first goal, the term ‘heuristic’ is treated as a family resemblance concept: for a method or rule to be classified as a heuristic, it has to satisfy a sufficient number of the properties involved in the family resemblance. There are ten features that we specify as involved in being a heuristic. The first of these corresponds to Polya’s project developed in How to Solve It and other works. The next five correspond to the heuristic search program in artificial intelligence. The last three pick out more general characterizations of heuristics as methods that lack a guarantee, are rules of thumb, or transform one set of problems into another. We argue that there are methods of algorithmic discovery that have none of the ten features associated with heuristics. Thus, there are methods of algorithmic discovery which are distinct from heuristics.
Once we’ve established that heuristic methods do not exhaust the methods of algorithmic discovery, we compare heuristic methods with non-heuristic discovery methods in their application. This is achieved by discussing two different areas of application. First, we discuss how heuristic and non-heuristic methods perform in different gaming environments such as checkers, chess, go, and video games. On the side of heuristics Deep Blue and Chinook are discussed. AlphaGo Zero and DQN provide examples of non-heuristic programs. We find that while heuristics perform well in some environments – like chess and checkers – non-heuristic methods perform better in others. And, interestingly, hybrid methods perform well in yet other environments. Secondly, heuristic and non-heuristic methods are compared in their performance in empirical discovery. We discuss how effective each type of method is in discovering chemical structure, finding diagnoses in medicine, learning causal structure, and finding natural kinds. Again, we find that heuristic and non-heuristic methods perform well in different cases. BACON and DENDRAL provide examples of heuristic methods while support vector machines and the PC algorithm provide examples of non-heuristic methods.
We conclude by discussing the sources of the effectiveness of heuristic and non-heuristic methods. Heuristic and non-heuristic methods are discussed in relation to how they are affected by the frame problem and the problem of induction. We argue that the recent explosion of non-heuristic methods is due to how heuristic methods tend to be afflicted by these problems while non-heuristic methods are not.
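As a purely illustrative sketch (our own construction, not an example from the paper), the following snippet shows what discovery by an effective procedure can look like when it has no heuristic guidance at all: it simply enumerates a small space of power-law hypotheses and reports those that hold across the data. The data and the hypothesis space are assumptions made for the example.

```python
# Illustrative sketch of non-heuristic algorithmic discovery: exhaustively
# enumerate a small hypothesis space of power laws a**p * T**q and report the
# combinations that are (nearly) invariant across the data.

from itertools import product
from statistics import mean, pstdev

# Semi-major axis a (AU) and orbital period T (years) for four planets.
data = [(0.387, 0.241), (0.723, 0.615), (1.000, 1.000), (1.524, 1.881)]

def invariant_candidates(data, max_exp=3, tol=1e-2):
    """Return exponent pairs (p, q) for which a**p * T**q is nearly constant."""
    hits = []
    for p, q in product(range(-max_exp, max_exp + 1), repeat=2):
        if (p, q) == (0, 0):
            continue
        values = [a ** p * t ** q for a, t in data]
        if pstdev(values) / abs(mean(values)) < tol:   # relative spread
            hits.append((p, q))
    return hits

print(invariant_candidates(data))
# -> [(-3, 2), (3, -2)]: a**3 / T**2 is constant, i.e. Kepler's third law.
```

Nothing here ranks, prunes, or guides the search; the procedure is exhaustive, which is one way a discovery method can fail to be a heuristic in the senses listed above.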
ABSTRACT. Often general obligations are what Humberstone in [3] called agent-implicating: the subject of the obligation is also the agent of the action in question. Consider, for instance, the claim
(A) Killing is forbidden.
The most natural interpretation of (A) is that (1) for all x, it is obligatory for x that x does not kill, and not (2) for all x, it is obligatory for x that for all y, y does not kill. What the latter would come to is that everybody is under an obligation that nobody is killed, and this is not what we normally mean by (A). Compare this, however, with a quantified version of an example in [4]:
(B) Managers are under an obligation that their company’s financial statement is reported to the board.
As McNamara points out, a manager fulfills this obligation even if her assistant files the report. Therefore, unlike in the case of (A), an interpretation of (B) that is not agent-implicating is most plausible. Another important difference between both obligations is that, whereas (A) applies to everybody in the domain, (B) only applies to those who satisfy a certain condition. We shall call the latter obligations role-specific.
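In term-modal notation (our gloss; the authors' own notation may differ), writing O_x for "it is obligatory for x that", K(x) for "x kills", M(x) for "x is a manager", and s_x for the financial statement of x's company, the readings just contrasted can be rendered as:

\[ \forall x\, O_x\, \neg K(x) \qquad \text{(reading (1) of (A): agent-implicating)} \]
\[ \forall x\, O_x\, \forall y\, \neg K(y) \qquad \text{(reading (2) of (A): not agent-implicating)} \]
\[ \forall x\, \big( M(x) \rightarrow O_x\, \mathit{Reported}(s_x) \big) \qquad \text{(B: role-specific and not agent-implicating)} \]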
In this talk we will present a term-modal deontic logic (TMDL) that distinguishes between and can capture reasoning with the above-mentioned agent-relative obligations: general obligations that are role-specific or not, and that are agent-implicating or not.
TMDL is based on the epistemic term-modal logics from the work of Fitting, Thalmann and Voronkov. In [1] they extend first-order logic with modal operators indexed by terms. This allows them to quantify both over the objects in the domain and over accessibility relations that are linked to those objects. They give sound and complete sequent-style and tableau-based proof systems with respect to an increasing domain semantics.
We will argue that increasing domain semantics are needlessly complicated when it comes to capturing deontic reasoning, and show that the constant domain semantics of TMDL is more appropriate for capturing reasoning with agent-relative obligations. We will also include identity and illustrate how this increases the expressivity of the logic and its ability to capture deontic reasoning. Partly based on [5], a sound and strongly complete Hilbert-style axiomatization of TMDL will be given.
We will compare our approach to different accounts for agent-relative obligations from the literature (such as Humberstone’s agent-implicating obligations from [3], McNamara’s personal obligations from [4], and Hansson’s general obligations from [2]), and show that we are not only able to combine the available notions in a natural way, but that we can also model them in a more fine-grained way. We will argue that precisely because of this, we can do better justice to the various kinds of general obligations in natural language.
References
[1] Melvin Fitting, Lars Thalmann, and Andrei Voronkov. Term-modal logics. Studia Logica, 69(1):133–169, Oct 2001.
[2] Bengt Hansson. Deontic logic and different levels of generality. Theoria, 36(3):241–248, 1970.
[3] I. Lloyd Humberstone. Two sorts of ‘ought’s. Analysis, 32(1):8–11, 1971.
[4] Paul McNamara. Agential obligation as non-agential personal obligation plus agency. Journal of Applied Logic, 2(1):117–152, 2004.
[5] Richmond H. Thomason. Some completeness results for modal predicate calculi. In K. Lambert, editor, Philosophical Problems in Logic, volume 29 of Synthese Library, pages 56 – 76. Springer Netherlands, 1970.
ABSTRACT. Given a set of norms N within a legal system L, it is normally said that N is a sharp set. This, of course, has a significant consequence: it means that for every normative situation it is possible to decide whether or not that situation can be interpreted in terms of N. According to this point of view, legal systems are complete; there is a clear distinction between what lies outside the law and what is completely legal.
Nevertheless, the reality is quite different: norms are sometimes vague or ambiguous. Of course, vagueness and ambiguity are different semantic phenomena, the latter perhaps less complex than the former. If an expression is ambiguous, it is enough to specify the sense in which the expression is being used and the problem is sorted out; but in the case of vague expressions the solution is not so easy. In such cases, how do judges make decisions? Not exclusively on the basis of the content of the norms, but perhaps by appealing to other rational resources such as analogy or balancing.
Law is vague in at least two senses. In the first sense, legal terms themselves are vague or fuzzy. Think, for example, of criminal responsibility. Criminal law distinguishes between the perpetrator of a crime and the accomplice, and between primary and secondary accomplices. So criminal responsibility can be understood as a matter of degree: the perpetrator and the (primary or secondary) accomplice are responsible in different degrees and, for this reason, they receive different penalties.
The second sense is more interesting. Here we are not talking about terms but about the norm itself as a fuzzy concept. There are either/or norms, that is, norms that are clearly part of the legal system; but there are also norms that are not clearly part of it (e.g., moral rules or principles, customary rules). Legality, then, is not a precise concept but a fuzzy one.
This is what Oren Perez has called quasi-legality. What we will do is analyze, from the point of view of fuzzy logic, the relevance of quasi-legality for contemporary legal theory and its consequences for legal positivism.
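As a toy illustration (our construction, not the authors'), treating legality as a fuzzy predicate amounts to assigning norms membership degrees in [0, 1] and singling out an intermediate, quasi-legal band:

```python
# Toy sketch: "being part of the legal system" as a fuzzy predicate. The norms
# and their membership degrees are invented for illustration only.

legality = {
    "statute enacted by parliament":    1.0,
    "binding judicial precedent":       0.9,
    "customary commercial rule":        0.6,
    "moral principle cited by courts":  0.3,
    "private code of conduct":          0.1,
}

def quasi_legal(norm, lower=0.2, upper=0.8):
    """Norms in the intermediate band are neither clearly legal nor clearly non-legal."""
    return lower <= legality[norm] <= upper

print([n for n in legality if quasi_legal(n)])
# -> ['customary commercial rule', 'moral principle cited by courts']
```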
Ren-June Wang (Department of Philosophy, National Chung-Cheng University, Taiwan)
Knowledge, Reasoning Time, and Moore’s Paradox
ABSTRACT. One way to increase knowledge is to activate the reasoning organs to perform a series of inferences. These inferences bring out the information hidden in the reasoners’ possessed knowledge, so that they can then access and utilize it. However, this temporal aspect of reasoned knowledge is normally left implicit in our daily conversation, and is ignored in the traditional possible-world-semantics treatment of epistemic concepts. In this paper, I argue for the naturalness and importance of the concept of reasoning-based knowledge, in contrast with information-based knowledge, and provide two epistemic logical systems that incorporate a formalization of this reasoning-based concept of knowledge. These systems will be discussed and compared. I will then demonstrate an application of the systems, with a detour to discuss Moore’s paradox from the reasoning-based point of view.
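For reference, a standard epistemic-logic rendering of Moore's paradox (not necessarily the one used in the systems above) concerns sentences of the form p ∧ ¬Kp, "p, but I do not know that p": such a sentence can be true, yet in any normal epistemic logic with factivity it cannot be known, since

\[ K(p \wedge \neg Kp) \;\rightarrow\; Kp \wedge K\neg Kp \;\rightarrow\; Kp \wedge \neg Kp , \]

where the first step distributes K over conjunction and the second uses factivity (\(K\varphi \rightarrow \varphi\)).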
ABSTRACT. In its simplest version, the Liar paradox involves a sentence which says of itself that it is false. If the sentence is true, then it is false (for that is what it claims), but if it is false, then it is true. So we have a paradox. A common initial reaction to sentences like ‘This is false’ or ‘L is false’ (where ‘this’ and ‘L’ refer respectively to the sentences where they appear) is to claim that they are meaningless. If that claim could be justified, we would have a simple and elegant solution to the paradox. Unfortunately, there are powerful reasons for thinking that both sentences are meaningful: 1) the fact that we reach a paradox by reasoning about those sentences would be unexplainable unless they had some sort of meaning; 2) there are many contexts in which ‘this is false’ is not paradoxical (when ‘this’ is used to refer to another sentence). This is what happens with contingent Liar sentences. ‘Everything I’ve said today is false’ is a perfectly meaningful sentence. In most contexts, someone will say something true or false by uttering it. Yet that will not be the case if the person who utters it has uttered no other sentence that day (or has only uttered false sentences). For these reasons, philosophers who claim that Liar sentences are meaningless tend to make a distinction between what we could call “linguistic meaning” (a set of rules telling us how to use a sentence in order to say something true or false) and “content” (the proposition expressed or the statement made in a context in uttering a sentence given its linguistic meaning). Those philosophers claim that Liar sentences are meaningful in the first sense, but not in the second (in paradoxical contexts). Someone uttering a Liar sentence in a paradoxical context fails to express some content.
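For reference, the reasoning rehearsed above can be compressed into a standard schematic form (not specific to this paper). Let L be a sentence such that L is (equivalent to) ¬T(⌜L⌝), and let the truth predicate obey the T-schema T(⌜A⌝) ↔ A. Then

\[ T(\ulcorner L \urcorner) \;\leftrightarrow\; L \;\leftrightarrow\; \neg T(\ulcorner L \urcorner) , \]

which is classically contradictory.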
A common way to justify why Liar sentences lack content in certain contexts appeals to the idea of an “ungrounded sentence” (Kripke 1975), or to the idea that, in paradoxical contexts, the truth value of a Liar sentence depends (circularly) on a sentence from which the truth predicate cannot be eliminated. This idea is present in theories which treat the truth predicate as a property-ascribing predicate (Beall 2001, Goldstein 2000) and also in theories which treat the truth predicate as a device for forming prosentences (Grover 1977). The object of this paper is to show that the strategy followed in all these cases declares contentless some sentences which those very theories take to be true.
REFERENCES:
Beall, Jc, 2001, “A Neglected Deflationist Approach to the Liar”, Analysis 61: 126–129.
Goldstein, L., 2000, “A Unified Solution to Some Paradoxes”, Proceedings of the Aristotelian Society 100: 53–74.
Grover, D. L., 1977, “Inheritors and Paradox”. The Journal of Philosophy 74: 590–604.
Kripke, S., 1975, “Outline of a Theory of Truth”, Journal of Philosophy 72: 690-716.
This special session is devoted to the presentation of the 2019 IUHPST Essay Prize in History and Philosophy of Science. The prize question for this round of competition was: "What is the value of history of science for philosophy of science?" This question was intended as a counterpart to the question for the inaugural run of the prize in 2017, which asked about the value of philosophy of science for history of science. The session will include the presentation of the prize, a lecture by the prize-winner (60 minutes), and a period of discussion with members of the audience.
This session is offered as part of the set of symposia organized by the Joint Commission, which serves as a link between the historical and the philosophical Divisions of the IUHPST.
ABSTRACT. Recent work on the use of historical case studies as evidence for philosophical claims has advanced several objections to this practice. Our two-fold goal is, first, to systematize these objections, showing how an appropriate typology can light the path toward a resolution, and second, to show how some of these objections can be recast as advantages for the historically sophisticated philosopher, specifically by describing how attention to contingency in the historical process can ground responsible canonicity practices.
Systematizing objections to the use of historical case studies for philosophical ends shows that they fall largely into two categories: methodological objections and metaphysical objections. The former, we argue, fail to be distinctive: they do not identify special challenges from which other forms of philosophical reasoning are immune. Case studies demand responsible handling, but this is unsurprising. History is messy and philosophy is difficult. But the need for care is hardly the mark of a hopeless endeavor. Rather, attention to the ways in which history is messy and in which philosophy is difficult can provide resources for developing better historiographical and philosophical practices.
Metaphysical objections do, however, raise special problems for the use of historical case studies. We show that attention to what makes for a canonical case can address these problems. A case study is canonical with respect to a particular philosophical aim when the philosophically salient features of the historical system provide a reasonably complete causal account of the results of the scientific process under investigation. We show how to establish canonicity by evaluating relevant contingencies using two prominent examples from the history of science: Eddington’s confirmation of Einstein’s theory of general relativity using his data from the 1919 eclipse and Watson and Crick’s determination of the structure of DNA. These examples suggest that the analogy between philosophical inquiry and the natural sciences, although imperfect, has important elements that make it worth retaining. This is not to say that we should think of philosophy as modeled on scientific practice, but rather that both succeed by virtue of something more general: their reliance on shared principles of sound reasoning.
Taking seriously the practices necessary to establish the canonicity of case studies makes clear that some examples of the historical process of science are more representative of its general ethos than others. With historiographical sense, we can pick these examples out. Doing so requires attention to the contingencies of history. Rather than undermining the use of historical cases, philosophical attention to contingency aids the development of case studies as resources by making explicit otherwise tacit assumptions about which features of them are most salient and why.
These considerations help us address the question of the value of history of science for the philosophy of science. It is possible, even easy, to use the rich resources that history provides irresponsibly to make a predetermined point. But that is not a genuine case of history of science informing philosophy of science—in part because it proceeds in the absence of historiographical sense. By outlining the practices that render particular cases canonical for certain philosophical aims, we have offered a route by which such sense can be integrated into standard philosophical practices.
18:30
Max Dresow (University of Minnesota, Department of Philosophy, United States)
CANCELLED: History and Philosophy of Science After the Practice-Turn: From Inherent Tension to Local Integration (19:00-19:30)
ABSTRACT. Over the past several decades, a pernicious myth has taken hold. This is the myth that history and philosophy of science are intrinsically opposed to one another: as if the two fields have timeless essences that tug against each other in equally timeless tension. The myth has a certain fascination for purists on both sides, and even speaks to an important truth about the current configuration of disciplinary standards. But it is a myth nonetheless, and one that stands in the way of a more productive analysis of the value of history of science for philosophy of science.
My goal in this essay is to explode this myth, and in so doing to indicate a more fruitful way of analyzing the value of history of science for philosophy of science. Specifically, I wish to ask: what roles do historical sources and information play in the practice of philosophers of science? The purpose of framing the question in this way is to direct attention away from issues of global, disciplinary integration, and towards issues of local, problem- and method-based integration. This is where the action is, practically speaking—“in the trenches” where real philosophical research is done. To preview my conclusion, historical sources and information play a variety of roles in contemporary philosophical practice, each of them methodologically legitimate and philosophically well-motivated. In particular, I will show that different methodological approaches in philosophy of science (and specifically, in practice-based philosophy of science) use historical information in different ways, guided by different local ends. History of science matters to philosophy of science—but the mode of this mattering is plural, and so is the needed integration.
My argument is set out in three broad sections. In the first, I explore in greater detail the “myth of inherent tension”: the notion that history of science and philosophy of science are intrinsically opposed to one another. The crux of this myth is the supposed opposition between philosophy’s normative and universalist orientation and history’s uncompromising particularism. Because these perspectives are set at cross-purposes, no reconciliation between the two disciplines is possible, or even desirable. I show that this argument is based on a faulty assumption, as well as a descriptively inadequate conception of philosophical practice. Replacing this with a more ecumenical conception is crucial to gaining purchase on the focal question: what is the value of history of science for philosophy of science? But the reigning conception is deeply ingrained, and for that reason difficult to see around. It is the task of Section 3 to say why this is so, and ultimately to point the way towards a more up-to-date conception of philosophical practice. Finally, in Section 4, I take a run at the focal question, first, by articulating three methodological approaches in practice-based philosophy of science, and second, by showing how each approach engages history of science in light of its philosophical goals. The paper concludes with a brief discussion of philosophical normativity, and how conceptualizing philosophy as a practice alleviates some lingering concerns.
Revising Logic: Anti-Exceptionalism and Circularity
ABSTRACT. According to anti-exceptionalism (AE) about logic, (i) logical laws do not have any epistemologically or metaphysically privileged status; in particular, they are neither a priori, nor analytic, nor necessary. Rather, (ii) logical theories should be justified, revised and compared just as scientific ones are – that is, via an abductive methodology.
I will first try to clarify claim (i), by reviewing which properties AEs think logical laws should be deprived of. It will turn out that there is a substantial disagreement on what logic (allegedly) cannot be, the only agreed upon feature being non-apriorism; furthermore, it seems that AEs use ‘a priori/a posteriori’ in an unspecific sense, in that they do not make reference to empirical kinds of evidence, but rather equate non-aprioricity with revisability. I will then move on to (ii), and try to unpack the abductive methodology and its criteria. In order to do this, I will first review the main implementations of the AE model – namely, Priest’s (2016) and Williamson’s (2017); secondly, I will take a closer look at the abductive criteria, and in particular at the most prominent among them, that of adequacy to data.
I will then advance some objections to the AE view, which stem from a well-known argument in the philosophy of logic, i.e., the Centrality Argument (CA; e.g. Putnam 1978). According to CA, logical laws are so central to rational reasoning that any attempt either to revise or to justify them ends up using those very laws and so winds up being illegitimate. I will build some forms of CA that are specifically targeted against the AE account, and show that the latter is at several levels threatened by circularities, both when evaluating theories with respect to given abductive criteria and when performing the general computation.
Hence, though AEs are right in claiming that logical theories have often been revised through an abductive methodology, their account faces some serious (broadly definable) metatheoretical objections. I will conclude by proposing a way of reconciling these two opposing cases, which resorts to Priest’s (2014) distinction between logica docens and logica ens – that is, between what logicians claim about logic, and what is actually valid. I will argue that AEs seem to submit only that logica docens is revisable, while remaining silent on logica ens’ fate; on the other hand, a minimal form of CA shows only that we can neither revise nor justify the laws of the correct logic – i.e., of logica ens – whatever this logic is. Hence, some compatibility can be worked out, at least between modest versions of the two opposing positions.
References
Putnam H., 1978, «There Is At Least One A Priori Truth», Erkenntnis, 13, 1, 153-170.
Priest G., 2014, «Revising Logic», in Rush P. (ed), The Metaphysics of Logic, 211-223, Cambridge, CUP.
Priest G., 2016, «Logical Disputes and the A Priori», Logique et Analyse, 59, 236, 347-366.
Williamson T., 2017, «Semantic Paradoxes and Abductive Methodology», in Armour- Garb B. (ed), Reflections on the Liar, 325-346, Oxford, OUP.
18:30
Zeynep Soysal (University of Rochester, United States)
Independence and Metasemantics
ABSTRACT. One of the key issues in the debate about pluralism in set theory is the status of sentences that are independent of the axioms, such as the Continuum Hypothesis. In this paper, I argue that one can make progress on this issue by appealing to a metasemantic constraint, i.e., a constraint on what determines the meanings of sentences. According to this constraint, one’s intentions and willingness to adjust one’s beliefs are relevant to determining the content of what one expresses. More specifically, a factor, F, is relevant to the meanings of one’s utterance only if one is disposed to adjust one’s relevant beliefs to F, provided one is rational. I call this thesis ‘Compliance’. Similar principles have been held even by prominent proponents of externalist views, such as Saul Kripke. But such principles have never been explicitly argued for, nor have people drawn any of their important consequences.
After having established Compliance, I apply it to the case of set theory. I start from the observation that set theorists are unwilling to adjust certain beliefs about sets that go beyond the standard axioms to any kinds of factors, for example the belief expressed by ‘The universe of sets is maximally high’. Given that set theorists clearly do possess the concept of set, by Compliance, it follows that they do not possess the concept of set in an externalist way. This leads to a kind of partial descriptivism, i.e. the view that such beliefs also contribute to determining the content of set-theoretic expressions like ‘set’, ‘membership’, and ‘universe of sets’. And this means, in turn, that the view that the content of set-theoretic expressions is given only by the axioms of set theory—the so-called ‘algebraic’ or ‘model-theoretic’ conception of set theory—is untenable. This is hugely important, because the algebraic conception is an underlying assumption in the famous model-theoretic arguments for the indeterminacy of set theoretic language, which form the basis for pluralist views of the universe of sets (so-called ‘multiverse’ views). The algebraic conception, coupled with the mathematical fact that there are many radically different models of the ZFC axioms—for instance, models in which ‘there is an inaccessible cardinal’ is true, models in which it is false, models in which the continuum hypothesis is true, models in which it is false—is supposed to show that all these sentences that are independent from the axioms of ZFC have indeterminate truth-values. If what I am saying is right—that set-theorists are unwilling to adjust more than the axioms, and that these resilient beliefs are content-determining—then this whole argumentative structure for indeterminacy and pluralism collapses.
So, more generally, what we get from Compliance is a program for finding out what, besides set-theoretic axioms, has determinate truth-values in set theory: We have to look at what set-theorists are unwilling to adjust. The questions of whether independent sentences have truth-values, and, if so, how we can come to know their truth values, are among the most pressing questions in the philosophy of mathematics since Gödel’s incompleteness results. As my paper demonstrates, we can make progress on these questions by doing metasemantics.
Top-down inhibitory experiments: the neglected option
ABSTRACT. In the new mechanistic literature, three types of inter-level experiments for determining constitutive relevance are frequently discussed. They differ from one another with respect to the direction of influence (top-down versus bottom-up) and the type of intervention (excitatory or inhibitory). Both excitatory and inhibitory bottom-up experiments (stimulation and interference respectively) are recognised, but only excitatory top-down experiments (activation) are usually discussed. While the possibility of top-down inhibitory experiments is acknowledged by Craver (Explaining the Brain, Oxford: Clarendon, 2007) in a footnote, no full-scale treatment exists. This contribution rectifies that absence.
I argue that there are two distinct types of top-down inhibitory experiments: deprivation and cessation experiments. Deprivation experiments are the type of experiment mentioned in Craver’s footnote. They involve large-scale lesions or disruptions to the system exhibiting the explanandum phenomenon, such that the phenomenon can no longer manifest. In Craver’s example, researchers lesioned test animals’ eyes and observed altered development of the brain cortex in the absence of visual stimulation. I argue that these experiments are of limited utility, since such changes to the system will inevitably affect the occurrence of many phenomena exhibited by the system. When changes in the operation of the system’s components are observed, it is then impossible to match the observed changes to the disruption of the phenomenon of interest as opposed to any of the other phenomena disrupted. For instance, the experiment above disrupts the phenomenon of object recognition, as well as the phenomenon of pupil dilation, and many others. Craver claims that these experiments rarely occur. I conclude that even when they do occur, they are not used as inter-level experiments to infer constitutive relevance relations, but for merely exploratory purposes.
A type of top-down inhibitory experiment not discussed in the literature is what I call cessation experiments. These involve manipulating the conditions in which the system operates so that the phenomenon ceases. I argue that this is significantly different from activation experiments, in which a group of systems exhibiting the phenomenon is compared to a group of subjects which do not exhibit it. The difference arises because cessation experiments provide information about what would happen if a phenomenon were made to stop. This can differ from what happens if a phenomenon is not exhibited at all, for example in pathological cases. Furthermore, cessation experiments can be useful as an alternative to activation experiments for phenomena with unknown or highly disjunctive set-up conditions. For instance, cessation experiments can be used to investigate the mechanisms behind involuntary cognitions in healthy subjects. I discuss the case of involuntary musical imagery, which can be triggered in a variety of circumstances, but not reliably, and with significant intersubjective differences. This makes activation experiments on involuntary musical imagery impractical, since the phenomenon cannot be reliably induced. However, once the phenomenon manifests, it tends to persist for some time, and a number of intervention techniques are being developed for stopping the involuntary imagery. If these are combined with an imaging paradigm, a cessation experiment can be performed.
18:30
Rebecca Jackson (Indiana University - Bloomington, United States)
Sending Knowns into the Unknown: Towards an Account of Positive Controls in Experimentation
ABSTRACT. Although experimental inquiry has received due attention in the last 30 years or so, surprisingly little has been written specifically about the history and philosophy of experimental controls. This stands in spite of the obvious methodological importance of controls to scientists themselves, both within their own work and in their role as peer reviewers.[1] Positive controls and negative controls are used in a variety of different ways across scientific disciplines, despite there being no consensus on just what a “positive” or “negative” control is, nor on what epistemic function these controls serve for the experimenter.
This paper offers a preliminary analysis of how positive controls function in experimental practice, using historical and contemporary examples from medical and life sciences. I approach this topic from a philosophy of measurement perspective. My aim is to generate a typology according to the epistemic role played by controls in the measuring process within the experiment or assay. This perspective provides a meaningful distinction between positive and negative controls, from which I further construct a finer-grained typology of two types of positive controls, "extrinsic controls" and "intrinsic controls."
In the first part of the paper, I show that extrinsic positive controls are used in systems of comparison: a trial with a known intervention is conducted in order to compare its results with those of an unknown intervention, often in tandem with a negative control (absence of intervention). This creates a scale of comparison for meaningfully interpreting results from the unknown intervention, which is performed in a separate experimental environment. Extrinsic controls are used for making sense of the experimental data we get at the end of the day.
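As a schematic illustration of this scale of comparison (our sketch, with invented names and numbers, not an example from the paper), raw readings from an unknown intervention can be placed between the negative- and positive-control signals:

```python
# Illustrative sketch (invented numbers): extrinsic controls provide a scale on
# which readings from an unknown intervention are interpreted, by normalizing
# them between a negative control (no intervention) and a positive control
# (known, effective intervention).

negative_control = 0.08   # mean signal with no intervention
positive_control = 1.62   # mean signal with the known intervention

def normalized_response(reading):
    """Express a raw reading as a fraction of the positive-control effect."""
    return (reading - negative_control) / (positive_control - negative_control)

unknown_readings = [0.85, 1.10, 0.12]
print([round(normalized_response(r), 2) for r in unknown_readings])
# -> [0.5, 0.66, 0.03]: the first two unknowns show about half to two thirds of
#    the known effect; the third is indistinguishable from no intervention.
```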
In the second part of the paper, I show that intrinsic positive controls play a very different role: ensuring that the experiment itself is operating as expected. They can be used for calibration of the experimental conditions, validation of instruments or indicators, and debugging during the design of the experiment or assay. As such, intrinsic controls share an experimental environment (or some aspects of the experimental environment) with the independent variable under study. Thus, intrinsic controls offer information within the long, tedious process of experimental design and offer assurance that the final experiment is operating under safe assumptions, particularly to ensure against false negatives.
While extrinsic and intrinsic positive controls are both used to aid our interpretation of experimental results, intrinsic controls play a role in experimental design, whereas the epistemic role of extrinsic controls begins after the experiment ends. In my analysis of intrinsic controls, I employ examples from microbiology, as a field whose subject matter forces special attention to careful experimental design. In the concluding section, I argue that the typology developed in this paper is general enough to apply to cases outside of life sciences, and has the potential to inform discussions on how to improve both internal and external validity for studies in social sciences.
[1] Schickore, Jutta. “The structure and function of experimental control in the life sciences,” Philosophy of Science (Forthcoming).
ABSTRACT. The practice of Computer Science is dominated by various processes or devices of abstraction. Many of these devices are built into specification and programming languages. Indeed, they are the mechanisms of language design, and the process of abstraction may be seen as generating new languages from given ones. Our objective in this paper is to provide a logical analysis of such abstraction.
Contemporary research on mathematical abstraction has been inspired by Frege (1884). However, this work, which generally goes by the name of the way of abstraction, has mostly been aimed at classical logic and mathematics, where the ultimate goal has been to abstract the axioms of Zermelo-Fraenkel set theory. Little work has been aimed at other foundational frameworks such as type theory, the central carrier of computational abstraction. Our intention is to explore how the way of abstraction may provide a foundational framework for the latter.
Frege observes that many of the singular terms that appear to refer to abstract entities are formed by means of functional expressions. While many singular terms formed by means of such expressions denote ordinary concrete objects, the functional terms that pick out abstract entities are distinctive in that they are governed by principles of the following form:

(1) The direction of line A = the direction of line B iff A || B,

where A || B asserts that A is parallel to B. Inspired by examples such as this, an abstraction principle may be formulated as a universally quantified biconditional of the following form:

\[ \forall x\, \forall y\; \big( f(x) = f(y) \leftrightarrow R(x, y) \big) \]

where x and y are variables of some logical language, f is a term-forming operator, and R is an equivalence relation.

In general, the abstractionist views such abstraction principles as introducing a new kind of object. For example, (1) introduces the kind of things that are directions. But notice that these abstractions also require a source domain on which the abstraction acts. For example, in the case of directions the domain of lines is presupposed. Such an approach to the development of type theories views abstraction as a way of creating new languages and type theories from existing ones.

However, the way of abstraction does not just postulate a given class of types and type constructors. Abstraction offers a dynamic for the introduction of new types and type constructors that allows us to explore the rich and varied notions of type that are to be found in modern computer science. The only restriction is that these new types emerge via the way of abstraction, so that it now takes the following shape:

\[ \forall x{:}T\; \forall y{:}T\; \big( f(x) =_{T/R} f(y) \leftrightarrow R(x, y) \big) \]

where T is an existing type, R is a relation determined by some well-formed formulae of the formal language, T/R is the new type formed by elements of the form f(t) where t : T, and =_{T/R} is the equality for the new type. This is exactly the set-up required to describe computational abstraction.

The distinctive feature of successful abstraction is that every question about the abstracts is interpreted as a question about the entities on which we abstracted. This yields a conservative extension result for the new theory that provides an implementation of the new theory in the old one.
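As a small concrete sketch (our illustration, not part of the abstract), the type-forming reading of the schema can be mimicked in an ordinary programming language by letting the abstraction operator f map each element of the source type to a canonical representative, so that f(x) = f(y) holds exactly when R(x, y) does:

```python
# Illustrative sketch of abstraction on a source type (pairs of integers) by an
# equivalence relation R (cross-multiplication), yielding a "new type" of
# rationals whose equality is induced by R.

from dataclasses import dataclass
from math import gcd

def R(x, y):
    """Equivalence on pairs (p, q) of integers with q != 0."""
    (p, q), (r, s) = x, y
    return p * s == r * q

@dataclass(frozen=True)
class Rat:
    """The abstract type T/R: elements are canonical representatives."""
    num: int
    den: int

def f(pair):
    """The abstraction operator: map a pair to its canonical representative."""
    p, q = pair
    g = gcd(p, q) * (-1 if q < 0 else 1)   # force a positive denominator
    return Rat(p // g, q // g)

# The defining biconditional  f(x) = f(y) <-> R(x, y)  holds by construction:
assert (f((1, 2)) == f((2, 4))) == R((1, 2), (2, 4))   # both True
assert (f((1, 2)) == f((2, 3))) == R((1, 2), (2, 3))   # both False
```

Questions about the abstracts (here, equality of rationals) are answered by computations on the entities abstracted over, in line with the conservative extension remark above.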
ABSTRACT. The point of departure of our research is María Manzano's paper “Formalización en Teoría de Tipos del predicado de existencia conceptual de Mario Bunge” (Manzano 1985). We recall the main concepts of this article and propose new perspectives on existence offered by a wide variety of new formal languages.
Firstly, we place Bunge's ideas within the historical debate about existence. It seems to us that Bunge is in favor of combining the traditional view of existence, on which it was considered a first-order predicate, with the Fregean account, on which existence acts as a second-order predicate.
In the second place, as in Manzano (1985), we make use of the language of Type Theory, TT, to formulate Bunge's distinction between the logical concept of existence and the ontological one. Both the quantifier and the ontological existence are predicates in TT, but to formulate the first one we need only logical constants while for the second one we need non-logical constants. In particular, the existential quantifier could be introduced by definition, using the lambda operator and a logical predicate constant.
Thirdly, we explore another possibility and try to incorporate in the formal system the tools needed to define the ontological existence predicate using only logical constants. In Hybrid Partial Type Theory, HPTT, assuming a semantics with various domains, the predicate of existence can be defined by means of the existential quantifier.
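Presumably the definition in question is the familiar one of existence in terms of quantification and identity, along the lines of

\[ E\,t \;:=\; \exists x\,(x = t) . \]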
Since a modal model contains many possible worlds, the previous formula could be true at a world (for instance, the world of physical objects) but false at another world of the same structure (for instance, the world of conceptual objects).
Moreover, thanks to the machinery of hybrid logic we have enhanced our formal system with nominals, such as i, and with satisfaction operators, @. Nominals give us the possibility of naming worlds and satisfaction operators allow us to formalize that a statement is true at a given possible world. In this logic, we have formulae that could be used to express that the individual object named by the term t exists at the world of physical objects named by i.
In HPTT, we could use the existential quantifier, the equality and the satisfaction operator to express that an object has ontological existence, either physical or conceptual. We do not need specific non-logical predicate constants given that the satisfaction operator is forcing the formula to be evaluated at i-world.
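Schematically (again our gloss rather than the authors' exact formula), physical existence of the object denoted by t would then be expressed by something like

\[ @_i\, \exists x\,(x = t) , \]

read: at the world named by i, something is identical to t.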
Lastly, we analyze existence in the language of our Intensional Hybrid Partial Type Theory, IHPTT. This opens a new possibility concerning existence which we have not taken into account so far: considering existence as a predicate of intensions. In IHPTT, existence can also be predicated of intensions, and we should expand our previous definition to include terms of type (a, s).
Our formal languages have tools for dealing with existence as a predicate and also as a quantifier. In fact, it is possible to give a coherent account of both alternatives. Therefore, from the point of view of the logical systems we have presented in this paper, the relevant issue is that we have tools for dealing with Bunge's distinctions in a variety of forms.
We have shown that hybridization and intensionality can serve as unifying tools in the areas involved in this research; namely, Logic, Philosophy of Science and Linguistics.
Mario Bunge and the Enlightenment Project in Science Education
ABSTRACT. The unifying theme of Bunge’s life and research is the constant and vigorous advancement of the eighteenth-century Enlightenment project; and energetic criticism of cultural and academic movements that reject the principles of the project or devalue its historical and contemporary value. Bunge is unashamedly a defender of the Enlightenment, while over the past half-century, many intellectuals, academics, educators, and social critics have either rejected it outright or compromised its core to such an extent that it can barely give direction to the kinds of personal, philosophical, political or educational issues that historically it had so clearly and usefully addressed. In many quarters, including educational ones, the very expression ‘the Enlightenment’ is derogatory and its advancement is thought misguided and discredited.
This paper begins by noting the importance of debates in science education that hinge upon support for or rejection of the Enlightenment project. It then distinguishes the historic eighteenth-century Enlightenment from its articulation and working out in the Enlightenment project; details Mario Bunge’s and others’ summation of the core principles of the Enlightenment; and fleshes out the educational project of the Enlightenment by reference to the works of John Locke, Joseph Priestley, Ernst Mach, Philipp Frank and Herbert Feigl. It indicates commonalities between the Enlightenment education project and that of the liberal education movement, and for both projects it points to the need to appreciate history and philosophy of science.
Modern science is based on Enlightenment-grounded commitments: the importance of evidence; rejection of simple authority, especially non-scientific authority, as the arbiter of knowledge claims; a preparedness to change opinions and theories; a fundamental openness to participation in science regardless of gender, class, race or religion; recognizing the inter-dependence of disciplines; and pursuing knowledge for advancement of personal and social welfare. All of this needs to be manifest in science education, along with a willingness to resist the imposition of political, religious and ideological pressures on curriculum development, textbook choice and pedagogy.
Defense of the Enlightenment tradition requires serious philosophical work. Questions of epistemology concerning the objective knowability of the world; questions of ontology concerning the constitution of the world, specifically regarding methodological and ontological naturalism; questions of methodology concerning theory appraisal and evaluation, and the limits, if any, of scientism; and questions of ethics concerning the role of values in science: all need to be fleshed out, and Enlightenment answers defended against their many critics.
That Enlightenment banner continues to be carried by Mario Bunge. He champions Enlightenment principles, adjusts them, and adds to them. In Latin America of the mid- and late twentieth century, he was one of the outstanding Enlightenment figures, and has been the same in the wider international academic community.
Are transparency and representativeness of values hampering scientific pluralism?
ABSTRACT. It is increasingly accepted among philosophers of science that values, including so-called nonepistemic values, influence the scientific process. Values may, i.a., play a role in data collection (Zahle 2018), measurement procedures (Reiss 2013), evaluating evidence (Douglas 2009), and in choosing among scientific models (Potochnik 2012). The next question is: under what conditions is the influence of these values justifiable? Kevin Elliott (2017) highlights three conditions, namely value influences should be (1) made transparent, (2) representative of our major social and ethical priorities, and (3) scrutinized through engagement between different stakeholders.
In this paper, I scrutinize Elliott’s conditions (1) and (2). The first condition, transparency, is present in many accounts of how to deal with values in science. Requiring transparency brings a range of benefits, but it also has drawbacks, in particular in relation to fostering scientific pluralism, as I will argue by analysing what the aggregate effects of a transparency demand might be. Elliott’s second condition, representativeness, might help us in answering which and/or whose values scientists should (justifiably) use. This condition can be considered exemplary of the view that values used in the scientific process should be democratically legitimate – where “democratically” can obviously be understood in several ways. An alternative view is to consider some values bannable or incorrect, while other values might stand the test of being held accountable to the facts – a view that presupposes that some values used in the scientific process are objectively (in)correct. A third view might give scientists space to choose the values they prefer themselves, choices that could be disclosed following a transparency requirement.
Taking into account the different epistemic interests in science we encounter among different stakeholders, I have argued before that we should endorse scientific pluralism. Different groups of stakeholders might be interested in answering different types of questions, and scientists might try to answer those questions at a level that is not only cognitively accessible but also represents the phenomena in question in ways sensitive to the stakeholders’ interests. How would this scientific pluralism fare if – following Elliott’s condition of representativeness – only “our major social and ethical priorities” are being addressed? Would any space be left for approaches critical of major mainstream approaches? Or would Elliott rather envision a science where representativeness implies an alignment of values between certain scientific communities and certain stakeholders? Would that not eventually result in a number of isolated research approaches (each consisting of a particular scientific community teamed up with like-minded stakeholders) no longer communicating or engaging with other research approaches, as any scientific disagreement might be explained away by a difference in value judgment or in epistemic interest? Questions abound once we interrogate Elliott’s conditions of representativeness and transparency in relation to scientific practice, as I illustrate using case studies from economics and political science (both to show how Elliott’s conditions would play out in these disciplines and to compare them with the conditions and devices these disciplines have developed themselves in order to deal with values).
In short, I aim to provide a thorough analysis of the strengths and weaknesses of Elliott’s first two conditions, how they would affect scientific pluralism, and what this teaches us about the relations between science and democracy.
References.
Douglas, Heather. 2009. Science, Policy, and the Value-Free Ideal. University of Pittsburgh Press.
Elliott, Kevin. 2017. A Tapestry of Values: An Introduction to Values in Science. Oxford University Press.
Potochnik, Angela. 2012. “Feminist Implications of Model-Based Science.” Studies in History and Philosophy of Science 43: 383–89.
Reiss, Julian. 2013. Philosophy of Economics: A Contemporary Introduction. Routledge.
Zahle, Julie. 2018. "Values and Data Collection in Social Research." Philosophy of Science 85: 144-163.
18:30
Karim Bschir (University of St. Gallen, Switzerland)
Corporate Funding of Public Research: A Feyerabendian Perspective
ABSTRACT. Corporate funding of research at public universities raises several issues concerning conflicts of interest, transparency, and bias. Paul Feyerabend’s controversial book "Science in a Free Society" (1978) contains two seemingly incompatible statements that are both relevant to the question of how to organize research funding. On the one hand, Feyerabend famously argued that decisions about the uses of tax money for research must be reached in a democratic and participatory way. Expert opinions may be considered, but the last word about the choice of research topics must stay with the taxpayers. In another passage of the book, Feyerabend pleads for a strict separation between science and the state, in analogy to the separation between the state and the church. I call this the separation claim. The separation claim, by implication, precludes the possibility of using tax money for research.
In this paper, I apply Feyerabend’s ideas on research funding to a public controversy that took place four years ago in Switzerland. After a Swiss newspaper had revealed a secret contract between the University of Zurich and the bank UBS, in which the bank offered the university 100 million Swiss francs for the establishment of an international research center for economics, a group of Swiss academics launched an international petition for the protection of academic independence. The online petition was signed by over 1000 academics within a couple of days. The signatories called on “leaders of the universities and all who bear responsibility for our educational institutions, at home and abroad, to safeguard the precious heritage of free and independent scholarship, and to avoid endangering the academic ethos in controversial collaborations.” The management of the university reacted to the petition by declaring that third-party funding was a necessity for a university that wants to engage in internationally competitive research, and that corporate sponsoring was no threat to academic independence. Eventually, the research center was established, and the university was forced to disclose the contract.
The goal of the paper is to analyze the problem of corporate research funding from a Feyerabendian perspective, and to apply Feyerabend’s arguments to the UBS case. I show how the alleged tension between Feyerabend’s claim for democratically controlled public funding and the separation claim can be resolved by pointing out that the aim of the separation claim is to achieve a pluralism of traditions, not the abolishment of state-funded research. Public as well as private funding schemes are permissible in Feyerabend’s account as long as there is no monopoly of experts who decide on the content of research. I conclude that "Science in a Free Society" provides no arguments against corporate research funding, and that from a Feyerabendian perspective, private funding does not pose a threat to academic freedom and independence. Quite the contrary: a diversity of funding sources can be seen as a beneficial factor for a plurality of methodological and theoretical approaches at public research institutions.
ABSTRACT. The objective of this paper is to analyze the possibility of finding in TV series a form of democratization or, at least, of popularization of scientific knowledge. To develop this idea, we will focus on Philip Kitcher’s proposal for the democratization of science, and we will show the democratizing capacity of TV series through the case of Black Mirror.
Kitcher (2001) proposes that we think of ways in which alternative institutions can allow us to arrive at his ideal of “well-ordered science”. This model aims to establish certain moral and political norms so that scientific choices are the result of an ideal of conversation or deliberation. In this sense, TV series can be understood as part of the information and education system that Kitcher talks about; that is, they can create narratives and transmit the content that would otherwise have to be communicated by the scientific or political community itself.
In recent years, a new television audience has emerged. Series such as Black Mirror, Westworld or Philip K. Dick's Electric Dreams, among others, have fostered a sophisticated viewer profile, interested in generic innovation, current themes, and dystopian and scientific plots. The impact generated by this type of series prompts viewers to reflect on the direction of scientific and technological developments. Through the case of Black Mirror, we will analyze how this kind of media product gives the spectator the opportunity to reflect on topics that are usually unfamiliar, such as technological development linked to transhumanism or the moral problems that arise from scientific development.
In conclusion, this paper aims to show that, through the different plots projected in TV series, the public can come to understand scientific controversies and incorporate the needs of others into reflection on scientific and technological development.
References
Barceló García, M. (2005), “Ciencia y ciencia ficción”, Revista Digital Universitaria, vol. 6, no 7.
Cascajosa, C. (2016), La cultura de las series, Barcelona: Laertes.
Gaillard, M. (2013), “The Governance of ‘Well-ordered Science’ from Ideal Conversation to Public Debate”, Theoria: An International Journal for Theory, History and Foundations of Science, Segunda Época, Vol. 28, no. 2 (77), pp. 245-256.
Kitcher, Ph. (1993), The Advancement of Science, Oxford: Oxford University Press.
Kitcher, Ph. (2001), Science, Truth and Democracy, Oxford: Oxford University Press.
Kitcher, Ph. (2010), "Science in a Democratic Society", in González, W.J. (ed.), Scientific Realism and Democratic Societies: The Philosophy of Philip Kitcher, Poznan Studies in the Philosophy of Science and the Humanities, Rodopi, Amsterdam.
Kitcher, Ph. (2011) Science in a Democratic Society, Amherst, NY: Prometheus Books.
Rethinking the given: Sellars on first principles
ABSTRACT. When Wilfrid Sellars undertook, in his Empiricism and the Philosophy of Mind, the critique of the Myth of the given, he could hardly have imagined that after a few years the terms and concepts he introduced would be as intensely discussed as they are today. Contemporary epistemology not only discusses what the Myth of the given might be, but also refers to the various mythologies that have emerged around the given. It has even come to regard the Myth of the given as a legend.
Although it is evident that a lot of literature has been generated around the given, there are relatively few studies that have addressed what exactly the given is. Right at the beginning of the above-mentioned work, Sellars points out that the given is said of sensory contents, universals, propositions, necessary synthetic connections, material objects, first principles, the given itself and real connections. Notwithstanding, the studies that have been carried out on the given have drastically reduced its extension.
In particular, the given has been thought of fundamentally in two ways: as the ground of empirical knowledge or as the basis of logico-mathematical knowledge. The first line of research is represented by John McDowell, the second by Robert Brandom. Neither of these two schools of interpretation analyzes the concept of the given with the extension that Sellars gave it. The analysis of what is neither empirical nor logico-mathematical but is still part of the given seems to have been somewhat neglected.
In this paper, I will delve into Sellars' analysis of the given in order to introduce the criticism that can be gathered from it in the study of first principles. These can be thought of as heuristic principles or as some type of presupposition. Nevertheless, Sellars’ criticism concerns anything that is thought to be admitted or supposed without being the result of a clear inferential process: anything that "must be admitted" independently of how our concepts are applied (e.g. the falsity of skepticism, the existence of the outside world, a minimal principle of substance, etc.) constitutes a form of the given.
Consequently, methodology, the general epistemology of science and the metaphysics of science should all be concerned. These disciplines usually presuppose that the world works in a determinate way. This determinate way implies or presupposes what the tradition has called ‘first principles’. It is in this presupposition that we find the first principles I will address. Therefore, the critique of the given focused on first principles can easily be considered a criticism of these disciplines.
Bibliography:
Brandom, R. (2015) From Empiricism to Expressivism. Cambridge, MA: Harvard University Press.
McDowell, J. (2009) Having the World in View. Cambridge, MA: Harvard University Press.
Sellars, W. (1990) Science, Perception and Reality. Atascadero, CA: Ridgeview Publishing Company.
The Problem of the Variable in Quine's Lingua Franca of the Sciences
ABSTRACT. This paper discusses the problematic use of the variable as the sole denoting position in Quine’s canonical notation, specifically in singular reference. The notation is the formal theory Quine developed, intended as a perspicuous lingua franca for the sciences. The paper shows how singular objects that are necessarily determined in the notation are nonetheless necessarily denoted indeterminately, creating a formal paradox.
Quine’s notation is an adaptation of the formal logic of Russell’s theory of denotation. The paper begins by explaining how Quine’s wider ontological views led him to make reductions to Russell’s original theory in two ways. Firstly, he held the sparse ontological view that being is strictly existence, and consequently that only objects that exist at a particular point in space-time should be expressible within the lingua franca, i.e., what Quine considered to be the empirical objects of the sciences. He thereby excluded universal objects from his lingua franca.
Secondly, due to the specific form of empirical linguistic relativism that Quine held, he only allowed objects in his lingua franca to be determined through categorisation into sets. This Quine achieved formally by solely allowing objects to be determined through predication, not reference. To do so, Quine disallowed any form of constant in reference-position. The only way to denote an object in Quine’s lingua franca therefore is through a variable quantified over existentially.1 As Quine famously put it, “to be, is purely and simply, to be the value of a variable.”2
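To illustrate (a reconstruction supplied here for the reader, using Quine’s own “Pegasus” example from “On What There Is” rather than any notation from the paper itself): a singular term is first traded for a predicate, and the sentence is then regimented so that only a bound variable occupies denoting position.

\[ \text{``Pegasus flies''} \;\leadsto\; \exists x\,\bigl(\mathrm{Pegasizes}(x) \land \forall y\,(\mathrm{Pegasizes}(y) \rightarrow y = x) \land \mathrm{Flies}(x)\bigr) \]

No constant ever appears in reference position; the object, if it exists, is denoted only as the value of the existentially quantified variable.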
The paper proceeds to explain the problem with said reductions. Briefly put, in the first-order propositional logic of a formal theory of reference such as that which Quine adopts, the variable occupies referential-position to represent indeterminate reference. In the case of singular reference, the variable indeterminately denotes one value, i.e., one object. But the variable is preliminary to determination. That is, once the necessary and sufficient predication has been given, the denoted object has been fully determined. At this point the variable is necessarily replaceable by a constant, as what is denoted is no longer indeterminate. But in Quine’s lingua franca, the variable can never be replaced by a constant. It would seem that in all such cases of singular denotation, the variable, per its very use in formal logic, indeterminately denotes, yet at the same time the object denoted is necessarily determined.
Formally speaking, such a paradox is damaging to the very purpose of the lingua franca: to express objects clearly and consistently across the sciences. Additionally it becomes difficult to reconcile form with content. That is, singular, determinate objects that exist at a precise point in space-time are exclusively expressed by means of a formal paradox. The paper concludes that, due to Quine’s ontological commitments, objects as expressed in his lingua franca cannot successfully be reduced purely and simply to the value of a variable without paradoxical results.
1. See W.V.O. Quine, Word and Object, p. 179.
2. W.V.O. Quine, "On What There Is", The Review of Metaphysics, Vol. 2, No. 5 (Sep. 1948), p. 31.
Ranjan Mukhopadhyay (Associate Professor in Philosophy,Visva-Bharati,Santiniketan, India)
Natural Deduction Rules as Means of Knowing
ABSTRACT. Does deductive reasoning have any capability to provide new knowledge? This doubt is prompted by a perceived tension between the conditions for the validity of a deductive argument and the conditions for the argument’s ability to deliver new knowledge. Validity requires the conditions of truth of the premises of the argument to be included in the conditions of the truth of the conclusion. So, in the case of a valid deductive argument, when the premises are true the conclusion is thereby also true. On the other hand, for the argument to deliver new knowledge the conclusion has to carry some information which is perceivably not carried by the premises – in which case the truth of the premises is not a guarantee for the truth of the conclusion. This condition for delivering novelty seems to go against the condition for validity.
One resolution to this lies in seeing that there is a difference between truth and knowledge of truth. The truth of the premises, in a valid case, also provides the truth of the conclusion; but knowledge of the truth of the premises does not necessarily provide knowledge of the truth of the conclusion. This answer, however, commits one to a realist theory of truth and meaning: the meaning of a sentence is its truth-conditions, and the truth-conditions are constituted by the relevant objects and their properties irrespective of our knowledge of them. That is why the truth-conditions of a premise can also be the truth-conditions of the conclusion – without us being in a position to know the truth of the conclusion while knowing the truth of the premise. This is because the (accredited) means of knowing the truth of the premise may not be the (accredited) means of knowing the truth of the conclusion. New knowledge through the validly available conclusion can now be seen to be forced upon us as a decision to accept the truth of the conclusion.
This paper explores an alternative by looking at the role of rules of inference as formulated, say, in terms of the natural deduction rules (cf. Dag Prawitz, Natural Deduction, Almqvist & Wiksell, Stockholm, 1965, p. 20). The natural deduction rules formulated for the individual logical constants can be seen not only as rules (permissions) for recording validly inferable conclusions from premises, but also as rules (permissions) for proving (knowing) sentences using the constants in question. The rules then are also the (only) accredited means of knowing (proving) (the truth of) sentences using the respective constants. No other means of knowing the truth of sentences using the constants are recognized as available in the language. This understanding of rules accommodates the capability of delivering novelty along with validity, as the schematic rules below illustrate.
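By way of illustration (the standard Gentzen–Prawitz rules for two connectives, given here as background rather than as notation from the paper), each introduction rule can be read as the canonical means of coming to know a sentence of the corresponding form, and each elimination rule as a permission to draw consequences from it:

\[
\frac{A \qquad B}{A \land B}\;(\land\mathrm{I})
\qquad
\frac{A \land B}{A}\;(\land\mathrm{E})
\qquad
\frac{\begin{array}{c}[A]\\ \vdots\\ B\end{array}}{A \rightarrow B}\;(\rightarrow\!\mathrm{I})
\qquad
\frac{A \rightarrow B \qquad A}{B}\;(\rightarrow\!\mathrm{E})
\]

On the reading proposed above, a proof terminating in an application of an introduction rule is not merely a record of a valid inference but the accredited way of coming to know the sentence it establishes.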
But here is a caveat. Natural deduction systems and their cognates like sequent calculus, etc., are studied with proof-theoretic motivations. Questions about the proof-theoretic justification of such rules (conditions of uniqueness, conservative extension/harmony, and stability) lead to the prospect of having intuitionistic logic as the justified logic – which in turn tells us to abandon realism.
The Ontology of Mass and Energy in Special Relativistic Particle Dynamics
ABSTRACT. This paper develops a new account of the relationship between mass and energy in special relativistic particle dynamics. Marc Lange has recently argued that the standard picture of the mass–energy relationship is problematic, and proposed that the alleged conversion between energy and mass is unphysical and merely ‘perspectival’. I reject both the traditional account and Lange’s account in this paper. I consider various explanatory puzzles associated with the interpretive role of Lorentz invariance, and I develop a new ontology of special relativistic particle dynamics derived from the geometry of Minkowski spacetime, according to which genuine cases of mass–energy conversion are fundamentally coordinate-dependent ways of characterizing the interactions between impure 4-forces and massive bodies.
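For background on the pure/impure distinction invoked here (a standard textbook relation in metric signature (+,−,−,−), supplied for orientation and not drawn from the paper itself): writing the four-momentum as $p^{\mu} = m u^{\mu}$ and differentiating with respect to proper time $\tau$,

\[
F^{\mu} = \frac{dp^{\mu}}{d\tau} = \frac{dm}{d\tau}\,u^{\mu} + m\,\frac{du^{\mu}}{d\tau},
\qquad
\frac{dm}{d\tau} = \frac{1}{c^{2}}\,F_{\mu}u^{\mu},
\]

so a pure four-force ($F_{\mu}u^{\mu}=0$) leaves the rest mass unchanged, whereas an impure four-force changes it; interactions of the latter kind are the ones whose coordinate-dependent characterization the abstract concerns.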
ABSTRACT. According to substantivalism spacetime is an autonomous entity. According to the relational view, only material objects really exist, while space and time are nothing else but the spatial and temporal relations among them.
1. The void argument. The following argument, which could be called the “void argument”, stands in support of relationalism:
“Let us imagine that all material structures are taken out of space. What would remain then is an empty space. It is a void deprived of matter, and in effect it turns into an imaginary thing that displays no property of its own. But a thing without any properties is rather nothing than something. So, space does not exist as an entity of its own.”
An objection to the void argument would be the claim that empty space is not deprived of qualities. Let us turn to the figure of a triangle. The sum of the inner angles of every triangle drawn on a plane equals 180 degrees. However, if the triangle is found on the surface of a sphere, this sum is more than 180 degrees. Thus the basic properties of triangles may differ when they reside within different spaces, and this is due to the differences between the spaces themselves.
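A worked version of this familiar fact (Girard's theorem, added here for concreteness): for a triangle with angles $\alpha, \beta, \gamma$ and area $A$ on a sphere of radius $R$,

\[ \alpha + \beta + \gamma = \pi + \frac{A}{R^{2}}, \]

which recovers the Euclidean value of 180 degrees only in the flat limit $R \to \infty$.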
A counter objection may be raised that the qualities of a geometrical space do support a view about some autonomy of geometrical spaces, but not the stronger thesis about the substantive character of the empty physical space (spacetime).
If one presumes that what is taken to be the physical space is correctly represented geometrically by flat three-dimensional Euclidean space, then the void argument might go through. But the property of a physical space of being either flat or curved can affect the motion of material objects. Thus the void argument may well be rejected.
2. The energy of empty spacetime. If spacetime were relational, it could not possess non-relational properties of the kind possessed by material systems. But spacetime really does exhibit such a property, which has obtained the popular name “dark energy”. It governs the expansion of the Universe, and cosmologists attribute it to spacetime itself. Dark energy opposes the effect of mutual attraction among stars and galaxies due to universal gravitation.
If the nature of spacetime were relational, then spacetime could hardly possess such an intrinsic dynamic quality. Energy is a fundamental property of material systems, and they have an existence of their own. So, we must concede that spacetime also has a specific existence of its own.
3. Gravitational waves. Space-time demonstrates yet another feature, which would hardly be conceivable for it to possess, if its nature were relational.
One hundred years after Einstein hypothesized their existence, a large group of scientists announced the first observation of gravitational waves. Relationalism could hardly account for their existence. Relations are relational properties of objects and, being dependent on the specific configuration of those objects, they have no existence of their own. But if so, relations cannot possess non-relational properties of their own, and in particular, spacetime could not initiate gravitational waves.
19:00
Thomas Benda (National Yang Ming University, Taiwan)
Change, temporal anticipation, and relativistic spacetime
ABSTRACT. In science, usually the B-view of time (according to McTaggart's famous classification) is adopted, which speaks of a static, tenseless temporal set-up. Contrasted with it is the A-view, which characterizes time by mere unstructured present, future and past. On the B-view, time is neatly integrated into four-dimensional spacetime and thus obtains a character similar to that of space. The notions of the present and of an intentional directedness towards a future are not part of that picture. If they are to matter, they are hoped to be derivable from the set-up of the spacetime manifold.
But several concepts which are crucial for our view of the world are foreign to the eternalist B-view, and it is hard to see how they can emerge from it. First, temporal directedness cannot be based on physical laws, due to the temporal symmetry of microphysics. So, commonly, the emergence of temporal directedness is postulated and pushed up the time scale towards sensorimotor integrative processes and the development of an internal narrative, which happen within tenths of seconds and several seconds, respectively. Those processes, however, can be interpreted as they are only if the mind that integrates and narrates already possesses a concept of temporal anticipation. Similarly, free will, even if epiphenomenal due to the famous Libet experiments, as a notion still requires a forward-looking mental perspective. Further, immediate perception of succession and of movement, which guides our interpreting of static series of images, requires a pre-existing concept of a present that is more than indexical, as well as of temporal directedness. Finally, our indispensable concept of change remains unaccounted for by representing time as a series of time points that is not well-ordered. Even a well-ordered series mimics progress only for a mind that has an a priori notion of change. The above four concepts mesh nicely with McTaggart's A-view of time. We have them, yet having them provides us with no advantage for survival or procreation.
We postulate:
(P) Concepts which we are strongly inclined to have and which nevertheless do not grant us any biological evolutionary advantage ought to be taken metaphysically seriously.
With (P), the B-view of time turns out to be incomplete in important respects. That suggests adopting a purified version of McTaggart's A-view, according to which time is a parameter of change, an abstraction from the intentional forwardness of our thoughts. The B-view of time is not dismissed. Instead, (relativistic) spacetime is regarded as ontologically secondary to a set of temporal continua. Those continua ("worldlines") have directions ("future", "past") and are able to intersect one another at earlier and later points. Intersection points define spacetime points. An axiomatic formal system characterizes intersections and leads to a construction of (relativistic) spacetime on the basis of continuous time.
References:
Benda, Thomas (2008). “A Formal Construction of the Spacetime Manifold.” Journal of Philosophical Logic, vol. 37, pp. 441-478.
Benda, Thomas (2017). “An abstract ontology of spacetime”. In M. Giovanelli, A. Stefanov (eds.), General Relativity 1916 - 2016, pp. 115-130. Montreal: Minkowski Institute Press.
Scientific ways of worldmaking: considerations on the philosophy of biology from Goodman’s theory of worlds
ABSTRACT. Reflection on worldviews (Weltanschauungen) has been very fruitful in several philosophical trends of the 20th century. In analytical philosophy it has been treated by various authors and from different points of view, such as the sociological (L. Fleck and Th. S. Kuhn), the logical (R. Carnap) or the more metaphysical (W. Sellars), among many others. However, in contrast to the notion of the worldview, the concept of worldmaking is more recent, and N. Goodman can be placed at the centre of the discussion.
One of Goodman's main tasks in Ways of Worldmaking (1978) is the defence of a far-reaching epistemological pluralism. His suggestion is an epistemological constructivism in which we have not only different worlds but also different ways of describing those worlds. In his work, Goodman devotes several chapters to the construction of worlds in painting, music and language (oral and written), but he says much less about the role the sciences can play in the construction of a world.
My contribution in this communication is twofold. First, I will comment on Goodman's general theory of worldmaking to see whether a distinction can be made between concepts such as «world», «description of the world» and «frame of reference» or whether, on the contrary, they are impossible to separate. For this, I will consider Wittgenstein's On Certainty (1969) and Cassirer's notion of symbol as it is taken up by Goodman.
Secondly, I will analyse the possibility of scientifically building a world on Goodman’s approach. I will focus on genetics within the philosophy of biology, considering authors such as E. Mayr and E. Fox Keller. I will show how the processes of worldmaking that Goodman refers to as «composition», «weighting» and «supplementation» can be found in the evolution of the gene concept.
As we know, Goodman's position has been widely criticized (P. Boghossian). However, I believe that it is still relevant to any philosophy of science that embraces a certain perspectivism. Therefore, I am going to defend the relevance of maintaining Goodman's theory for the philosophical analysis of the scientific construction of worlds.
BOGHOSSIAN, P. (2006): Fear of knowledge. Against relativism and constructivism. Oxford University Press, Oxford.
FOX KELLER, E. (1995): Refiguring life: metaphors of twentieth-century biology. Columbia University Press, New York.
—— (2000): The century of the gene. Harvard University Press, Cambridge.
GOODMAN, N. (1978): Ways of worldmaking. Hackett Publishing Company, Cambridge.
MAYR, E. (1982): The growth of biological thought. Diversity, evolution and inheritance. Harvard University Press, Cambridge.
Objectivity as Mind-Independence – Integrating Scientific and Moral Realism
ABSTRACT. In general, philosophy of science and metaethics are second-order disciplines, whose subject is the status of their respective first-order discipline – science or normative ethics (see Losee 2001: 2; Miller 2013: 1). In particular, the debates on scientific and moral realism reflect this interdisciplinary parallel insofar as proponents often conceptualize these two views as applications of the same generic realism to different domains (Boyd 1988: 182; Brink 1984: 111). Given this systematic background, it should not be surprising that, for instance, Richard Boyd vindicates scientific as well as moral realism and particularly attempts to transfer insights from philosophy of science to metaethics, and vice versa.
However, recent developments in both debates challenge this attempt of unification. Whereas Boyd’s unified realism is based on his naturalist assumption that ethics is part of the empirical sciences, today’s proponents tend to advocate a non-naturalist version of moral realism, thus depriving Boyd’s approach of its foundation (see, for instance, Shafer-Landau 2003: 68, 72ff). Consequently, if we seek to preserve the opportunity to exchange insights between the two debates, we must look for an alternative connection between both views.
In my talk, I claim that the common ground of scientific and moral realism is their commitment to the objectivity of scientific statements or moral judgments, which, apart from semantic and epistemic components, unfolds in the metaphysical thesis that scientific as well as moral truths obtain mind-independently. I show that this conceptualization allows us, first, to include non-naturalist versions of moral realism and, second, to notice similarities between the arguments proposed for each realism. Especially, David Enoch proposes an indispensability argument for a robust version of moral realism that is supposed to be similar to the no-miracles argument for scientific realism (see Enoch 2011: 50, 55ff). While we are justified to believe in the existence of electrons, because electrons are explanatorily indispensable for the success of science, we are, according to Enoch, also justified to believe in the existence of moral norms, because they are deliberatively indispensable for our practice of exchanging reasons in ethical discussions.
As a result, philosophers of science should not overstate the epistemic component of scientific realism, namely that we have good reasons to believe that our most successful theories are (at least approximately) true. Rather, they should focus on the metaphysical core thesis about objectivity and mind-independence, which is central to every realist position in philosophy – particularly to both scientific and moral realism (see, for instance, Sankey 2008; Godfrey-Smith 2003: 174, 177).
Literature
Boyd, Richard (1988): “How to Be a Moral Realist”. In Geoffrey Sayre-McCord (Ed.): Essays on Moral Realism. Ithaca 1988, 181-228
Brink, David Owen (1984): “Moral Realism and the Sceptical Arguments from Disagreement and Queerness”. Australasian Journal of Philosophy 62, 111-125
Enoch, David (2011): Taking Morality Seriously. Oxford 2011
Godfrey-Smith, Peter (2003): Theory and Reality – An Introduction to the Philosophy of Science. Chicago 2003
Losee, John (2001): A Historical Introduction to the Philosophy of Science. Oxford 2001
Miller, Alexander (2013): Contemporary Metaethics – An Introduction. Cambridge 2013
Sankey, Howard (2008): Scientific Realism and the Rationality of Science. New York 2008
Shafer-Landau, Russ (2003): Moral Realism – A Defence. Oxford 2003
Free Will and the Ability to Change Laws of Nature
ABSTRACT. Peter van Inwagen argued that we are never able to change laws of nature, i.e., the laws of physics (van Inwagen, 1983, pp. 61–62). According to him, while we have the power to make some true propositions false, we do not have any power to falsify laws of nature. That is, in a case where it seems that we can change the truth-value of a law of nature, that proposition is not a real law of nature. The idea that we can never have the ability to change laws of nature plays an important role in the consequence argument, which aims to show that free will as the ability to do otherwise is incompatible with determinism. One premise of this argument is the sentence NL, which roughly means that the conjunction of all laws of nature holds and that no one has, or ever had, any choice about whether this conjunction holds (cf. van Inwagen, 1983, p. 93). Thus, whether or not we can change the laws of nature has been an important topic in the philosophical discussion of free will.
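For readers who want the formal version in view, here is a standard reconstruction of van Inwagen's modal argument (the display is mine, following An Essay on Free Will, not a quotation from the abstract): let $Np$ abbreviate "p is true and no one has, or ever had, any choice about whether p", let $L$ be the conjunction of the laws of nature, $P_{0}$ a proposition describing the complete state of the world in the remote past, and $P$ any true proposition; rule $\alpha$ licenses the step from $\Box p$ to $Np$, and rule $\beta$ the step from $Np$ and $N(p \rightarrow q)$ to $Nq$.

\[
\begin{array}{lll}
1. & \Box\bigl((P_{0} \land L) \rightarrow P\bigr) & \text{(determinism)}\\
2. & \Box\bigl(P_{0} \rightarrow (L \rightarrow P)\bigr) & \text{(from 1)}\\
3. & N\bigl(P_{0} \rightarrow (L \rightarrow P)\bigr) & \text{(2, rule } \alpha\text{)}\\
4. & N P_{0} & \text{(premise)}\\
5. & N(L \rightarrow P) & \text{(3, 4, rule } \beta\text{)}\\
6. & N L & \text{(premise NL)}\\
7. & N P & \text{(5, 6, rule } \beta\text{)}
\end{array}
\]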
Some leeway compatibilists, who believe that free will as the ability to do otherwise is compatible with determinism, oppose this premise. Bernard Berofsky is one such compatibilist; he claims that since psychological laws cannot be reduced to, and do not supervene upon, physical laws, the premise that we cannot change the laws of nature is false (Berofsky, 2012). We have a power to disobey psychological laws relating to rational decision-making and to follow another rational strategy. We can choose laws of rationality in order to handle imprecise information and the temporal limits of our calculating skills (Berofsky, 2012, pp. 119–121). Therefore, we have free will not only as rational autonomy but also as the ability to do otherwise, even in a deterministic world.
By focusing on supervenience relation between physical properties and psychological properties, I argue that Berofsky’s argument is insufficient to defend the idea that physical determinism is not an enemy of free will as an ability to do otherwise. First, I explain van Inwagen’s argument against the ability to change laws of nature and the basic formation of the consequence argument. Second, I examine Berofsky’s expanded consequence argument and his argument showing that psychological laws do not supervene upon physical laws. Third, I argue for the thesis that psychological properties supervene on physical properties and show that this supervenience relation is still a threat to the compatibility of free will as the ability to do otherwise with determinism. And finally, I address Berofsky’s and other possible counterarguments.
Main References
Berofsky, Bernard. (2012). Nature’s Challenge to Free Will. Oxford University Press.
van Inwagen, Peter. (1983). An Essay on Free Will. Oxford University Press.
Justification of Basic Inferences and Normative Freedom
ABSTRACT. In this paper, I consider some questions in current research on the nature of logical inference, especially the question of what justifies basic rules of deductive reasoning, such as modus ponens. After presenting the problem, I briefly explore Boghossian’s inferentialist solution and expose Wright’s objections. I examine the feasibility of Wright’s non-inferentialist approach by considering how a weaker notion of justification, or entitlement, allows us to speak about basic knowledge. Finally, I consider Broome’s dispositional analysis and the question of what kind of normativity could be suitable in the case of basic logical knowledge.
Broome’s proposal seems a viable solution to the problems found in Wright’s approach, specifically the difficulty facing the ‘Simple Proposal’, namely that basic inferences can follow patterns that are not very solid and that we sometimes make mistakes. However, unlike Broome and Boghossian, we believe that while rule-following implies inferring, inferring is not always reducible to following a rule. For Wright, the Simple Proposal captures the way in which we make an inference, considering it a mental action; but every action has a ‘directivity’: it is directed at something beyond the action itself. If we do not want to fall back into the infinite regress produced by the recourse to intentionality, this directivity can be understood as a guiding disposition. Finally, if we understand inference in that way, we would have a rational warrant in a cognitive project. It is a kind of warrant that is not incompatible with knowledge, but is prior to it. Ultimately, I contend that it is necessary to take the notion of normativity in its broadest sense and to stand for a certain normative freedom. The objectivity of logic is not affected if we accept normativity ‘in a broad sense’. In the case of rules such as MP, we assume MP as a correct rule of reasoning because rationality allows it, not because it requires it. The question to clarify is what special kind of reasons rules are.
References:
Boghossian, P. A. Knowledge of logic. In Boghossian P. & Peacocke C. (Eds.) New Essays on the A Priori; Oxford University Press: Oxford, U.K., 2000; pp. 229–254, ISBN-13: 9780199241279
Boghossian, P. A. Epistemic Analyticity: A Defense. Content and Justification: Philosophical Papers. Oxford University Press: New York, USA, 2008; pp. 212–220, ISBN: 978-0-19-929210-3.
Boghossian, P. A. What Is Inference? Philosophical Studies, 2014, 169, pp. 1–18.
Boghossian, P. A. How are Objective Epistemic Reasons Possible? Philosophical Studies, 2001, 106 (1/2), pp. 1–40.
Broome, J. Rationality through Reasoning. Oxford: Wiley Blackwell. 2013. ISBN: 9781118656051.
Broome, J. Comments on Boghossian, Philosophical Studies, 2014, 169, pp. 19–25.
Burge, T. Content Preservation. The Philosophical Review, 1993, 102, pp. 457–488.
Harman, G. Change in view. Cambridge: MIT Press. 1986. ISBN: 9780262580915.
Railton, P. Normative Force and Normative Freedom. In Dancy, J. (Ed.). Normativity. Ratio Special Issues. Wiley-Blackwell; 1 ed. 2000, pp. 1-33. ISBN 978-0631220411.
Wright, C. Intuition, Entitlement and the Epistemology of Logical Laws. Dialectica, 2004, 58, pp. 155–175.
Wright, C. Warrant for nothing (and foundations for free)?, Aristotelian Society Supplementary, 2004, Volume 78 (1), pp. 167–212.
Wright, C. On Basic Logical Knowledge. Philosophical Studies, 2001, 106, pp. 41–85.
Wright, C. Comments on Paul Boghossian, ‘What Is Inference?’. Philosophical Studies, 2014, 169, pp. 27–37.
ABSTRACT. In this paper we extend the probabilistic STIT logic of [1] in order to develop a logic of obligation which accounts for an agent's beliefs about what others will do and about what the agent him- or herself can achieve. The account is notable in that it incorporates harms into a probabilistic STIT framework, an endeavor which, to our knowledge, has not been undertaken before. One of the main aims of the paper is to develop an account of obligation which is decision-theoretic [2] (i.e. concerned with how an agent goes about trying to act for the best, and not with what really is objectively best), agent-dependent (i.e. based on the agent's limited information, reasoning abilities, and actual alternatives), and action-guiding (i.e. provides a unique conclusion to the question of what the agent ought to do).
We begin by presenting the probabilistic XSTITp logic developed in [1]. In doing so we highlight that the XSTITp logic is concerned with subjective probabilities, with what some particular agent believes is likely to happen. This is crucial for our purposes: since our aim is to model obligations from the decision-theoretic, agent-dependent perspective, the probabilities being employed must be those assigned by the agent whose choice we are examining, and not those of some god-like, omniscient observer. With Broersen's framework for the probabilistic XSTITp in place, we then move to our method of modeling expected harm.
We simplify our discussion by conceiving of "expected harm" as nothing more than the expected number of some concrete and discrete harms caused by some action. (More specifically, we conceive of "expected harm" as the expected number of deaths a certain action will bring about, as we aim to apply our account to the ethics of war and conflict.) This allows us to assign clear values to "harm", while also avoiding the more subtle debates about comparisons between different types of harm or between individuals. Moreover, in many areas of ethical concern (and most especially in the ethics of war and conflict), there is some discrete and concrete harm (usually number of deaths) that is generally taken to be lexically prior to all others, and is often used as a proxy for evaluating the total harm caused by some action. With our discussion limited to concrete and discrete harms, we then present a function for mapping future harms at some particular state to a notion of expected harm which relies on both the choice of the agent in question and the expected choices of all other agents within the model. This function allows one to formally model how an agent's expectations about his and others' choices will impact on his beliefs concerning the expected harms of the actions.
With our method of calculating expected harm in place, we then move onto developing a ranking of actions given some goal G. We begin by presenting Broersen's "chance of success function", where the chance of success that some G is realized is the sum of the chances the agent assigns to next possible static states validating G. Using this, and given some goal G, we can then define a deontic ranking of actions where one action is preferred to another just in case the chances of success for the former are at least as high as those of the latter (with respect to G) and the former has a lower expected harm than the latter. Put simply, one action is preferable to another if it will be at least as likely to be successful in securing our goal and will not cause more harm than the alternatives. From this ranking we then define the notion of optimality, which maintains that an action is optimal just in case there is no other alternative which has an equal or higher chance of success and is less harmful.
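Schematically (a compressed rendering of the definitions just described, using the hypothetical symbols CoS and EH rather than the authors' official notation): for a choice $\alpha$ of the agent, a goal $G$, and next static states $s$,

\[
\mathrm{CoS}(\alpha, G) = \sum_{s \,\models\, G} P(s \mid \alpha),
\qquad
\mathrm{EH}(\alpha) = \sum_{s} P(s \mid \alpha)\cdot \mathrm{harm}(s),
\]

where $P(s \mid \alpha)$ is the agent's subjective probability of reaching $s$ given $\alpha$ and the expected choices of the other agents, and $\mathrm{harm}(s)$ is the number of deaths at $s$. Then $\alpha$ is ranked above $\beta$ (relative to $G$) when $\mathrm{CoS}(\alpha,G) \ge \mathrm{CoS}(\beta,G)$ and $\mathrm{EH}(\alpha) < \mathrm{EH}(\beta)$, and $\alpha$ is optimal when no alternative has an equal or higher chance of success and a strictly lower expected harm.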
We conclude by showing that this notion of optimality tracks core obligations from, most notably, the ethics of war and conflict (via the principle of necessity), but also from other ethical domains. Moreover, the simplification we make regarding how "harm" is to be understood allows one to easily employ this overall framework in other domains of ethical inquiry where there is some discrete outcome that is taken to be morally central.
References
[1] Broersen, J. (2011). Modeling attempt and action failure in probabilistic stit logic. In IJCAI Proceedings-International Joint Conference on Artificial Intelligence, volume 22, pages 792-797.
[2] Humberstone, I. L. (1983). The background of circumstances. Pacific Philosophical Quarterly, 64(1):19-34.
Berta Grimau (Czech Academy of Sciences (Institute of Information Theory and Automation), Czechia)
Fuzzy Semantics for Graded Adjectives
ABSTRACT. After decades of being considered unsuitable, the fuzzy approach to vagueness was reexamined and vindicated in Smith (2008) in the form of what he calls "fuzzy plurivaluationism". Roughly speaking, this approach takes, instead of a single fuzzy model, a set of various fuzzy models for the semantics of a vague predicate. While this proposal appears to have revived the interest in fuzzy logic as a tool for vagueness to some extent, its potential in formal semantics remains virtually unexplored. My aim in this talk is to vindicate and develop some aspects of the application of fuzzy plurivaluationism in the semantics of graded adjectives.
Firstly, by way of a rationale, I briefly argue that the fuzzy approach offers a desirable treatment of the Sorites Paradox.
Secondly, I respond to some general objections typically raised against it – i.e. the artificial precision objection and that of the counterintuitive results of a degree-functional approach to the logical connectives. If successful, my answers show, against common belief, that the prospects of fuzzy accounts of vagueness are not doomed from the very beginning.
In the third (and last) place, I work out some of the details of the fuzzy analysis of graded adjectives. First, I show that we have available a variety of algebraic models which give us the means to analyse various kinds of graded adjectives and the constructions in which they appear. For instance, we can give models not only for vague adjectives, but also for different sorts of absolute graded adjectives (i.e. those which display degrees of applicability, but have sharp boundaries, such as "acute"). Next, I argue that the fuzzy approach would avoid some of the problems of what can be considered the mainstream take on graded adjectives – i.e. the degree-based account stemming from Cresswell (1977) and developed, for example, in Kennedy (2007). One of the advantages of the fuzzy approach is that it simplifies the analysis of the positive unmarked form (i.e. the adjectival construction found, for instance, in "The woman is tall") by simply identifying it with the corresponding adjective and thus taking it to denote a fuzzy property (by contrast, the usual approach to the positive unmarked form requires either a type-shifting mechanism or the positing of a null degree morpheme). Finally, I provide support for the claim that a fuzzy semantics can be used to analyse comparative statements successfully. In particular, I show that two alleged problems for fuzzy semantics in relation to these (i.e. the problem of comparative trichotomy and that of non-borderline comparatives) pose no real threat.
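As a minimal sketch of the kind of analysis at issue (my illustration of the general approach, not necessarily the exact semantics defended in the talk): a graded adjective denotes a fuzzy property, i.e. a function from individuals to truth degrees, and the positive and comparative forms are read directly off that function:

\[
[\![\mathrm{tall}]\!] : D \to [0,1],
\qquad
[\![\text{$x$ is tall}]\!] = [\![\mathrm{tall}]\!](x),
\qquad
[\![\text{$x$ is taller than $y$}]\!] = 1 \;\text{iff}\; [\![\mathrm{tall}]\!](x) > [\![\mathrm{tall}]\!](y).
\]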
References:
Cresswell, M. J. 1977. The semantics of degree. In Montague Grammar, ed. Barbara Partee, 261–292. New York: Academic Press.
Kennedy, C. 2007. Vagueness and grammar: The semantics of relative and absolute gradable adjectives. Linguistics and Philosophy 30. 1–45.
Smith, N. (2008). Vagueness and Degrees of Truth. New York, United States: Oxford University Press.
18:30
Petr Cintula (Institute of Computer Science, Czech Academy of Sciences, Czechia) Carles Noguera (Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, Czechia) Nicholas Smith (Department of Philosophy, The University of Sydney, Australia)
Formalizing the Sorites Paradox in Mathematical Fuzzy Logic
ABSTRACT. The sorites paradox has been intensively discussed in the literature and several competing theories of vagueness have emerged. Given a vague predicate F and a sequence of objects 1, 2, ..., n, such that F(1) is true, F(n) is false, and for each i the objects i and i+1 are extremely similar in all respects relevant to the application of F, the sorites paradox is an argument which, based on two apparently true premises, F(1) and "for each i: F(i) implies F(i+1)", after n−1 applications of modus ponens reaches the clearly false conclusion F(n).
The standard account of this phenomenon using fuzzy logic goes back to Goguen: the second premise is almost true — so for ordinary purposes we accept it — but it is not fully true and so the argument is unsound. Hajek and Novak presented an alternative formalization of the sorites paradox aimed at emancipating it from ad hoc aspects of Goguen's solution while, allegedly, preserving Goguen’s main idea.
To account for the fact that there is a small difference between the truth values of F(i) and F(i+1), they used a new unary connective to reformulate the second premise as "for each i: F(i) implies *F(i+1)" with the intended reading "if F(i) is true, then F(i+1) is almost true". An easy proof in the natural axiomatic system governing the behaviour of * shows that we can soundly derive the conclusion *^n F(n), where *^n is the consecutive application of * n-times — and this conclusion, unlike the original conclusion F(n), can be accepted as being fully true.
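In schematic form (the display is mine, but it only restates the formalization just described), the reformulated argument is

\[
F(1), \qquad \forall i\,\bigl(F(i) \rightarrow {*}F(i+1)\bigr) \;\;\vdash\;\; {*}^{n} F(n),
\]

where ${*}^{n}$ abbreviates $n$ consecutive applications of ${*}$; the starred conclusion, read roughly as "F(n) is almost true to the $n$-th degree", can be accepted as fully true even though the unstarred F(n) cannot.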
Any solution to the sorites paradox should be able to answer the question of why we initially go along with the sorites reasoning, rather than immediately rejecting some instance of the second premise or claiming that there is a fallacy — and yet ultimately, when we see where the argument goes, we do not accept the conclusion, but instead begin to lose the feeling that the premises are true and the reasoning is correct.
In our talk we use a general heuristic known as the Ramsey test to show that Hajek and Novak's formalization yields a natural answer to the above question. We also observe that the formulation of the second premise as "for each i: *(Fe(i) implies Fe(i+1))" better captures Goguen's original idea, that Hajek and Novak's axiomatization of * can be used to derive from this premise a different, but also perfectly acceptable conclusion, *^(2^n) F(n), and that this conclusion can also be reached via a weaker notion of `almost true'. Finally we show that a good answer to the above question can still be given once these changes are made.
REFERENCES
J.A. Goguen. The logic of inexact concepts, Synthese 19:325-373, 1969.
R. Keefe and P. Smith (eds). Vagueness: A Reader, MIT Press, 1997.
F.P. Ramsey. General Propositions and Causality (1929) in his Philosophical Papers, D.H. Mellor (editor), Cambridge University Press, pp. 145--163, 1990.
P. Hajek and V. Novak. The sorites paradox and fuzzy logic. International Journal of General Systems 32:373-383, 2003.
ABSTRACT. When properly arithmetized, Yablo's paradox results in a set of formulas which (with local disquotation in the background) turns out to be consistent, but $\omega$-inconsistent. Adding either uniform disquotation or the $\omega$-rule results in inconsistency. Since the paradox involves an infinite sequence of sentences, one might think that it doesn't arise in finitary contexts. We study whether it does. It turns out that the issue depends on how the finitistic approach is formalized. On one such approach, proposed by M. Mostowski, all paradoxical sentences simply fail to hold. This happens at a price: the underlying finitistic arithmetic itself is $\omega$-inconsistent. Finally, when studied in the context of a finitistic approach which preserves the truth of standard arithmetic (developed by AUTHOR), the paradox strikes back, and with double force: now inconsistency can be obtained without the use of uniform disquotation or the $\omega$-rule.
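For reference, the arithmetized Yablo sentences are standardly obtained by diagonalization as a sequence satisfying (a textbook rendering, not a quotation from the paper)

\[
Y_{n} \;\leftrightarrow\; \forall k > n\;\; \neg\,\mathrm{Tr}\bigl(\ulcorner Y_{k} \urcorner\bigr), \qquad n \in \mathbb{N},
\]

so each sentence in the sequence says that all later sentences are untrue.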