Philosophers have attempted to distinguish the Historical Sciences at least since the Neo-Kantians. The Historical Sciences attempt to infer rigorous descriptions of past events, processes, and their relations from their information-preserving effects. Historical sciences infer token common causes or origins: phylogeny and evolutionary biology infer the origins of species from information-preserving similarities between species, DNA, and fossils; comparative historical linguistics infers the origins of languages from information-preserving aspects of existing languages and theories about the mutation and preservation of languages in time; archaeology infers the common causes of present material remains; Critical Historiography infers the human past from testimonies from the past and material remains; and Cosmology infers the origins of the universe. By contrast, the Theoretical Sciences are not interested in any particular token event, but in types of events: Physics is interested in the atom, not in this or that atom at a particular place and time; Biology is interested in the cell, or in types of cells, not in this or that token cell; Economics is interested in modeling recessions, not in this recession; and Generative Linguistics studies “Language”, not any particular language that existed at a particular time and was spoken by a particular group of people. The distinctions between realms of nature and academic disciplines may be epistemically and methodologically arbitrary. If, from an epistemic and methodological perspective, historiography, the study of the human past, has more in common with Geology than with the Social Sciences, which in turn have more in common with Agronomy than with historiography, then we need to redraw the boundaries of the philosophies of the special disciplines. This is of course highly controversial and runs counter to attempts to distinguish the historical sciences by their use of narrative explanations, reenactment, or empathic understanding.
The Historical Sciences may be distinguished from the Theoretical Sciences according to their objects of study, tokens vs. types; from the Experimental Sciences according to their methodologies, inference from evidence vs. experimenting with it; and from the natural sciences according to the realm of nature they occupy. The criteria philosophers have proposed for these distinctions relate to larger issues in epistemology: Do the Historical Sciences offer different kinds of knowledge? Do the Historical and Theoretical Sciences support each other’s claims to knowledge, and if so, how?; in metaphysics and ontology: Do the types of objects the Historical and Theoretical Sciences attempt to study, represent, describe, or explain differ, and if so, how does this affect their methodologies?; and in the philosophy of science: What is science, and how do the Historical and Theoretical Sciences relate to this ideal?
On the possibility and meaning of truth in the historical sciences
ABSTRACT. The familiar challenges to historiographical knowledge turn on epistemological concerns having to do with the unobservability of historical events, or with the problem of establishing a sufficiently strong inferential connection between evidence and the historiographical claim one wishes to convert from a true belief into knowledge. This paper argues that these challenges miss a deeper problem, viz., the lack of obvious truth-makers for historiographical claims. The metaphysical challenge to historiography is that reality does not appear to co-operate in our cognitive endeavours by providing truth-makers for claims about historical entities and events. Setting out this less familiar, but more fundamental, challenge to the very possibility of historiography is the first aim of this paper. The various ways in which this challenge might be met are then set out, including ontologically inflationary appeals to abstract objects of various kinds, or to “block” theories of time. The paper closes with the articulation of an ontologically parsimonious solution to the metaphysical challenge to historiography. The cost of this approach is a revision to standard theories of truth. The central claim here is that the standard theories of truth have mistaken distinct causes of truth for truth itself. This mistake leads to distorted expectations regarding truth-makers for historiographical claims. The truth-makers of historiographical claims are not so much the historical events themselves (for they do not exist) but atemporal modal facts about the order of things of which those events were a part.
ABSTRACT. The inference of origins distinguishes the historical sciences from the theoretical sciences. Scientific inferences of origins are distinctive in reliably inferring probable pasts. They base their inferences of origins on information transmitted from origins (past events and processes) to present receivers (evidence). They include most obviously the origins of species in Evolutionary Biology, the origins of languages in Comparative Historical Linguistics, the origins of rock formations and the shapes of continents in Geology, the origins of the universe in Cosmology, the origins of texts in Textual Criticism, original historical events in scientific Historiography, and the origins of forms of art and craft like pottery in Archaeology.
This paper analyses the concept of origin, its metaphysics and epistemology, as distinct from those of causes. I argue that origins are tokens of types of information sources. Origins are past events that transmitted information that reached the present. Entities in the present that receive that information are receivers. Information preserved in receivers may be used to infer properties of their origins. Origin is a relational concept. Just as a cause can only be identified in relation to its effects and there are no causes without effects, an origin can only be identified in relation to receivers and there are no origins without receivers. Origins transmit encoded information signals to receivers. There are many different types of information signals, transmission channels, and types of encoding: background radiation travelled from the origin of the universe to scientific instruments today. Species transmit information about their properties and ancestry via DNA through reproduction to descendant species. During transmission, information passes through a period of latency when it is not expressed. Latency can vary in length from the age of the universe in the case of background radiation to the brief moment between sending and receiving an email. Information signals are mixed with varying levels of noise and have different levels of equivocation, that is, loss of signal. Types and tokens of processes of encoding and decoding have varying levels of reliability (fidelity): the degree to which information transmitted from the origin at the beginning of the process is preserved at its end. Reliability reflects the ratio of the information preserved in receivers to the information transmitted. Some information is lost during transmission (equivocation), and noise that does not carry information is mixed with the signal. For example, we are all the descendants of “prehistoric” peoples. But the information they transmitted about themselves orally through traditions to contemporary societies was lost within a few generations due to equivocation. What we can know about them comes from information preserved in material and artistic objects and in our DNA. Societies cannot transmit information reliably over centuries without a written form of language that can preserve information reliably. I clarify what origins are and how we come to know them by analyzing the conceptual and epistemic distinctions between origins and causes. This analysis justifies the introduction of origins as a new concept in epistemology and philosophy of science, to supplement and partly replace philosophical discussions of causation.
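Because the abstract leans on information-theoretic notions (signal, noise, equivocation, and reliability as a ratio of preserved to transmitted information), a small numerical sketch may help fix ideas. The binary symmetric channel below is my own toy stand-in for an origin-to-receiver link, not an example from the paper.

```python
import math

def h2(p):
    """Binary entropy in bits; h2(0) = h2(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def channel_report(flip_prob):
    """Toy origin-to-receiver link modelled as a binary symmetric channel.

    The origin emits 0/1 with equal probability; transmission flips each bit
    with probability flip_prob (noise). Equivocation H(X|Y) is the information
    about the origin lost in transit; what remains, I(X;Y), is what the
    receiver still preserves about the origin.
    """
    source_entropy = 1.0          # H(X) for a fair binary source, in bits
    equivocation = h2(flip_prob)  # H(X|Y) for a BSC with uniform input
    preserved = source_entropy - equivocation   # I(X;Y)
    reliability = preserved / source_entropy    # ratio preserved / transmitted
    return equivocation, preserved, reliability

for p in (0.0, 0.1, 0.5):
    eq, kept, rel = channel_report(p)
    print(f"flip_prob={p:.1f}  equivocation={eq:.3f} bit  "
          f"preserved={kept:.3f} bit  reliability={rel:.2f}")
```

On this toy picture, oral transmission across many generations corresponds to a channel whose flip probability drifts toward 0.5, driving reliability toward zero, while written records or DNA correspond to channels that keep it low.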
Organizers: Valentin Goranko and Frederik Van De Putte
The concept of rational agency is broadly interdisciplinary, bringing together philosophy, social psychology, sociology, and decision and game theory. The scope and impact of the area of rational agency have been steadily expanding in the past decades, also involving technical disciplines such as computer science and AI, where multi-agent systems of different kinds (e.g. robotic teams, computer and social networks, institutions, etc.) have become a focal point for modelling and analysis.
Rational agency relates to a range of key concepts: knowledge, beliefs, communication, norms, action and interaction, strategic ability, cooperation and competition, social choice, etc. The use of formal models and logic-based methods for analysing these and other aspects of rational agency has become an increasingly popular and successful approach to dealing with their complex diversity and interaction.
This symposium will bring together different perspectives and approaches to the study of rational agency and rational interaction in the context of philosophical logic.
The symposium talks are divided into three thematic clusters, each representing a session and consisting of 4-5 presentations, as follows.
I. Logic, Rationality, and Game-theoretic Semantics. Applying logic-based methods and formal logical systems to reasoning in decision and game theory is a major and increasingly popular approach to agency and rationality. Formal logical languages allow us to specify principles of strategic behaviour and interaction between agents, and essential game-theoretic notions, including solution concepts and rationality principles. Formal logical systems provide precise and unambiguous semantics and enable correct and reliable reasoning about these, while involving the concepts of knowledge, beliefs, intentions, ability, etc.
II. Deontic Logic, Agency, and Action. Logics of agency and interaction such as STIT and deontic logics have been very influential and generally appreciated approaches to normative reasoning and theory of actions. Active directions of research in this area include the normative status of actions vs. propositions, causality and responsibility, collective and group oughts and permissions, and further refinements of the STIT framework stemming from the works of Belnap, Horty and others.
III. Logic, Social Epistemology, and Collective Decision-making. Rational agency and interaction also presuppose an epistemological dimension, while intentional group agency is inextricably linked to social choice theory. In this thematic cluster, various logical and formal models are discussed that shed light on these factors and processes.
ABSTRACT. The main problem in deontic logic based on propositional dynamic logic is how to define the normative status of complex actions based on the normative status of atomic actions, transitions and states. There are two main approaches to this problem in the literature: the first defines the normative status of an action in terms of the normative status of the possible outcome states of the action (Broersen, 2004; Meyer, 1988), while the second defines the normative status of an action in terms of the normative status of the transitions occurring in the possible executions of the action (van der Meyden, 1996).
In this work, I focus on interpretations of permission concepts. In particular, I address what I take to be two shortcomings in the two main approaches to permission in dynamic logic.
First, when assessing an agent's behavior from a normative viewpoint, one must often take into account both the results brought about by the agent, and the means by which those results were brought about. Consequently, when deciding whether a complex action is to be permitted or not, one must, in many cases, take into account both the normative status of the possible outcome states of the action, and the normative status of the atomic actions that occur in the complex action: choosing one of the two is not enough.
Second, most existing accounts, with the exception of the work of Kulicki and Trypuz (2015), consider the permissibility of actions only relative to their complete executions, i.e. the possible executions where each step in the complex action is carried out. However, in the presence of non-determinism it may happen that some initial part of a complex sequential action leads to a state where the remaining part of the action cannot be executed. This possibility can lead to counterintuitive consequences when one considers strong forms of permission in combination with non-deterministic choice. Such cases show that partial executions of complex actions are also important from a normative viewpoint.
Taking both permitted states and permitted atomic actions as primitive allows for a wide variety of permission concepts for complex actions to be defined. Moreover, the distinction between complete and partial executions of complex actions offers further options for defining permission concepts. Based on these points, I define a variety of permission concepts and investigate their formal properties.
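To make the design space concrete, here is a minimal Python sketch of one way to combine the two primitives; the transition system, the norm sets, and the particular "strong permission" check are my own illustrative choices, not the formal definitions of the paper.

```python
# Toy non-deterministic transition system (hypothetical data, for illustration only).
TRANS = {
    "s0": {"a": {"s1", "s2"}, "b": {"s3"}},
    "s1": {"b": {"s3"}},
    "s2": {},            # dead end: 'b' cannot be executed here
    "s3": {},
}
PERMITTED_STATES = {"s0", "s1", "s3"}   # state/outcome-based norm
PERMITTED_ATOMS = {"a", "b"}            # transition/action-based norm

def executions(state, seq):
    """All runs of the sequential action seq from state.

    Each run is (steps, complete): steps is the list of (atom, state) pairs
    actually performed; complete is False when non-determinism led to a state
    where the remaining part of the action could not be executed.
    """
    if not seq:
        return [([], True)]
    atom, rest = seq[0], seq[1:]
    succs = TRANS.get(state, {}).get(atom, set())
    if not succs:
        return [([], False)]
    runs = []
    for nxt in sorted(succs):
        for steps, complete in executions(nxt, rest):
            runs.append(([(atom, nxt)] + steps, complete))
    return runs

def step_ok(atom, state):
    return atom in PERMITTED_ATOMS and state in PERMITTED_STATES

def strongly_permitted(state, seq, include_partial=True):
    """Every execution (optionally also every partial one) stays within the norms."""
    runs = executions(state, seq)
    if not include_partial:
        runs = [r for r in runs if r[1]]
    return all(step_ok(a, s) for steps, _ in runs for a, s in steps)

print(strongly_permitted("s0", ["a", "b"]))                         # False
print(strongly_permitted("s0", ["a", "b"], include_partial=False))  # True
print(strongly_permitted("s0", ["b"]))                              # True
```

Note how the sequential action a;b is rejected only because one of its partial executions runs through the non-permitted dead-end state, which is exactly the kind of case the abstract flags; restricting attention to complete executions would permit it.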
References
Broersen, J. (2004). Action negation and alternative reductions for dynamic deontic logic. Journal of Applied Logic 2, 153-168.
Kulicki, P., and Trypuz, R. (2015). Completely and partially executable sequences of actions in deontic context. Synthese 192, 1117-1138.
Meyer, J.-J. Ch. (1988). A different approach to deontic logic: Deontic logic viewed as a variant of dynamic logic. Notre Dame Journal of Formal Logic 29(1), 109-136.
van der Meyden, R. (1996). The dynamic logic of permission. Journal of Logic and Computation 6(3), 465-479.
ABSTRACT. Suppose I believe sincerely and with conviction that today I ought to repay my friend Ann the 10 euro that she lent me. But I do not make any plan for repaying my debt: Instead, I arrange to spend my entire day at the local spa enjoying aromatherapy treatments.
This seems wrong. Enkrasia is the principle of rationality that rules out the above situation. More specifically, by (an interpretation of) the Enkratic principle, rationality requires that if an agent sincerely and with conviction believes she ought to X, then X-ing is a goal in her plan. This principle plays a central role within the domain of practical rationality, and has recently been receiving considerable attention in practical philosophy (see Broome 2013, Horty 2015).
This presentation pursues two aims. Firstly, we want to analyze the logical structure of Enkrasia in light of the interpretation just described. This is, to the best of our knowledge, a largely novel project within the literature. Much existing work in modal logic deals with various aspects of practical rationality starting from Cohen and Levesque's seminal 1990 paper. The framework presented here aims to complement this literature by explicitly addressing Enkrasia. The principle, in fact, bears some non-trivial conceptual and formal implications. This leads to the second aim of the talk. We want to address the repercussions that Enkrasia has for deontic logic. To this end, we elaborate on the distinction between so-called “basic oughts" and “derived oughts", and show how this distinction is especially meaningful in the context of Enkrasia. Moreover, we address issues related to the filtering of inconsistent oughts, the restricted validity of deontic closure, and the stability of oughts and goals under dynamics.
In pursuit of these two aims, we provide a multi-modal neighborhood logic with three characteristic operators: A non-normal operator for basic oughts, a non-normal operator for goals in plans, and a normal operator for derived oughts. Based on these operators we build two modal logical languages with different expressive powers. Both languages are evaluated on tree-like models of future courses of events, enriched with additional structure representing basic oughts, goals and derived oughts. We show that the two modal languages are sound and weakly (resp. strongly) complete with respect to the class of models defined. Moreover, we provide a dynamic extension of the logic by means of product updates.
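For readers who want the principle at a glance, one schematic rendering follows; the notation is mine, not the authors', and it abstracts from the basic/derived-ought distinction.

```latex
% B\varphi : the agent sincerely and with conviction believes \varphi
% O\varphi : ought;  G\varphi : \varphi is a goal in the agent's plan
\[
  \mathrm{B}\,\mathrm{O}\varphi \;\rightarrow\; \mathrm{G}\varphi
  \qquad \text{(Enkrasia: believed oughts must figure as goals in the plan)}
\]
```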
ABSTRACT. In [2], Horty shows that the framework of STIT logic can be used to reason about what agents and groups ought to do in a multi-agent setting. To decide what groups ought to do he relies on a utility function that assigns a unique value to each possible outcome of their group actions. He then makes use of a dominance relation to define the optimal choices of a group. When generalizing the utilitarian models of Horty to cases where each agent has his own utility function, Horty’s approach requires each group to have a utility function as well.
There are several ways to do this. In [4], each group is assigned an independent utility function. This has the disadvantage that there is no connection between the preferences of a group and its members. Another option is to define the utility of a given outcome for a group of agents as the mean of the utilities of that outcome for the group’s members, as is done in [3]. However, this requires that utilities of individual agents be comparable. A third option is pursued in [5], where Turrini proposes to generalize Horty’s notion of dominance such that an action of a group X dominates another action X' just in case, relative to the utility function of each group member, X dominates X'. The optimal actions of a group can then be defined using this modified dominance relation. This approach, however, leads to undesirable outcomes in certain types of strategic interaction (e.g. a prisoner’s dilemma).
Here, we present a new approach towards evaluating group actions in STIT logic by taking considerations of reciprocity into account. By reciprocity we mean that agents can help each other reach their desired outcomes through choosing actions that are in each other’s interest. We draw upon the work of Grossi and Turrini [1] to identify certain group actions as having different types of reciprocal properties. For example, a group action can be such that, for each agent a in the group, there is some other agent a' such that the action of a' is optimal given the utility function of a. We compare these properties and show that by first selecting a certain type of reciprocal action and only then applying dominance reasoning we are left with group actions that have a number of desirable properties. Next, we show in which types of situations agents can expect to benefit by doing their part in these reciprocal group actions.
We then define what groups ought to do in terms of the optimal reciprocal group actions. We call the resulting deontic claims reciprocal oughts, in contradistinction to the utilitarian oughts of [2] and strategic oughts of [3]. We end by comparing each of these deontic operators using some examples of strategic interaction.
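As a purely illustrative sketch of the reciprocity idea (not the talk's formal STIT definitions), consider a one-shot prisoner's dilemma; the payoffs and the particular readings of "dominates" and "helps" below are my own simplifications.

```python
from itertools import product

# One-shot prisoner's dilemma with hypothetical payoffs: C = cooperate, D = defect.
ACTIONS = ("C", "D")
U = {  # U[agent][(action of agent 1, action of agent 2)]
    1: {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1},
    2: {("C", "C"): 3, ("C", "D"): 5, ("D", "C"): 0, ("D", "D"): 1},
}
PROFILES = list(product(ACTIONS, repeat=2))

def member_dominates(p, q):
    """One reading of [5]: p dominates q iff p is strictly better than q
    relative to the utility function of *every* group member."""
    return all(U[a][p] > U[a][q] for a in (1, 2))

def undominated():
    return [q for q in PROFILES if not any(member_dominates(p, q) for p in PROFILES)]

def helps(profile, helper, beneficiary):
    """helper's component of profile maximises beneficiary's utility, holding the
    beneficiary's own component fixed (a crude stand-in for 'optimal given the
    utility function of the other agent')."""
    idx = helper - 1
    def with_action(act):
        alt = list(profile)
        alt[idx] = act
        return tuple(alt)
    return U[beneficiary][profile] == max(U[beneficiary][with_action(a)] for a in ACTIONS)

def reciprocal():
    """Group actions in which every agent is helped by some other agent."""
    return [p for p in PROFILES
            if all(any(helps(p, b, a) for b in (1, 2) if b != a) for a in (1, 2))]

print("Undominated group actions:", undominated())   # includes one-sided defection
print("Reciprocal group actions :", reciprocal())     # only ('C', 'C')
```

Under these stipulations only mutual cooperation is reciprocal, so applying dominance reasoning after the reciprocity filter leaves (C, C), whereas member-wise dominance alone leaves several undominated profiles, including one-sided defection.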
References
[1] Davide Grossi and Paolo Turrini. Dependence in games and dependence games. Autonomous Agents and Multi-Agent Systems, 25(2):284–312, 2012.
[2] John F. Horty. Agency and deontic logic. Oxford University Press, 2001.
[3] Barteld Kooi and Allard Tamminga. Moral conflicts between groups of agents. Journal of Philosophical Logic, 37(1):1–21, 2008.
[4] Allard Tamminga. Deontic logic for strategic games. Erkenntnis, 78(1):183–200, 2013.
[5] Paolo Turrini. Agreements as norms. In International Conference on Deontic Logic in Computer Science, pages 31–45. Springer, 2012.
Of all philosophers of the 20th century, few built more bridges between academic disciplines than did Karl Popper. For most of his life, Karl Popper made contributions to a wide variety of fields in addition to the epistemology and the theory of scientific method for which he is best known. Problems in quantum mechanics, and in the theory of probability, dominate the second half of Popper's Logik der Forschung (1934), and several of the earliest items recorded in §2 ('Speeches and Articles') of Volume 1 of The Writings of Karl Popper, such as item 2-5 on the quantum-mechanical uncertainty relations, item 2-14 on nebular red-shifts, and item 2-43 (and other articles) on the arrow of time, show his enthusiasm for substantive problems in modern physics and cosmology. Interspersed with these were a number of articles in the 1940s on mathematical logic, and in the 1950s on the axiomatization of the theory of probability (and on other technical problems in this area). Later he made significant contributions to discussions in evolutionary biology and on the problem of consciousness. All these interests (except perhaps his interest in formal logic) continued unabated throughout his life.
The aim of this symposium is to illustrate, and to evaluate, some of the interventions, both substantive and methodological, that Karl Popper made in the natural and mathematical sciences. An attempt will be made to pinpoint the connections between these contributions and his more centrally philosophical concerns, especially his scepticism, his realism, his opposition to subjectivism, and his indeterminism.
The fields that have been chosen for the symposium are quantum mechanics, evolutionary biology, cosmology, mathematical logic, statistics, and the brain-mind liaison.
ABSTRACT. It is almost a truism to say that the philosophy of science systematized by Karl Popper was heavily influenced by the intellectual landscape of physics. Indeed, the most conspicuous working example of his methodology of falsifiability as a criterion for demarcating science from other forms of knowledge was the predictions Einstein drew from his general relativity. While familiar with the great physical theories elaborated up to the beginning of the 20th century, Popper kept a lasting fascination with quantum theory. In addition, there was among physicists a controversy over the interpretation and foundations of this theory, which further attracted Popper. However, its highly technical aspects kept him at the margins of the controversy, and some of his early incursions into the subject were the target of criticism. It was only from the 1960s on, with the blossoming of interest in this scientific controversy and the appearance of a younger generation of physicists interested in the subject, that Popper could fulfill his early desire to take part in it. Most of his ideas on the subject are gathered in the volume Quantum Theory and the Schism in Physics. Popper’s position may be encapsulated in the statement that he fully accepted probabilistic descriptions and suggested his propensity interpretation to deal with them, thus without attachment to determinism, while he criticized the introduction of subjectivist approaches into this scientific domain, thus aligning himself with the realist position in the quantum controversy. Less well known is that Popper went further in his engagement with the debates over the meaning of the quanta. He could do this through collaboration with physicists such as Jean-Pierre Vigier and Franco Selleri, who were harsh critics of the standard interpretation of quantum physics. From this collaboration emerged a proposal for an experiment to test the validity of some presumptions of quantum theory. Initially conceived as an idealized experiment, it was eventually brought to the lab bench by Yanhua Shih and spurred a debate which survived Popper himself. We present an overview of Popper’s concerns about quantum mechanics as well as an analysis of the debates about the experiment he suggested.
Freire Junior, O. (2015). The Quantum Dissidents: Rebuilding the Foundations of Quantum Mechanics 1950–1990. Berlin: Springer.
Popper, K. R. (1982). Quantum Theory and the Schism in Physics (W. W. Bartley, Ed.). Totowa, NJ: Rowman and Littlefield.
Del Santo, F. (2018). Genesis of Karl Popper's EPR-like experiment and its resonance amongst the physics community in the 1980s. Studies in History and Philosophy of Modern Physics, 62, 56–70.
ABSTRACT. In this comment, I will discuss in detail the genesis of Popper’s EPR-like experiment, which is at the centre of Prof. Freire’s paper. I will show that Popper devised his experiment already in 1980, namely two years before its publication in his renowned Quantum Theory and the Schism in Physics (1982). Moreover, I will focus on the early resonance that such a Gedankenexperiment had in the revived debate on quantum foundations.
At the same time, I will describe how Popper’s role in the community of physicists concerned with the foundations of quantum physics evolved over time. In fact, when he came back to problems of quantum mechanics in the 1950s, Popper strengthened his acquaintance with some illustrious physicists with philosophical interests (the likes of D. Bohm, H. Bondi, and W. Yourgrau), but was not engaged in the quantum debate within the community of physicists (he did not publish in physics journals or participate in specialised physics conferences). From the mid-1960s, however, with the publication of the quite influential essay “Quantum Mechanics without the Observer” (1967), Popper’s ideas on quantum physics garnered interest and recognition among physicists. At that time, Popper systematised his critique of the Copenhagen interpretation of quantum mechanics, proposing an alternative interpretation based on the concept of ontologically real probabilities (propensities) that met with the favour of several distinguished physicists (among them D. Bohm, B. L. van der Waerden and L. de Broglie). This endeavour led Popper to enter a long-lasting debate within the community of physicists.
ABSTRACT. Popper’s influence on science can be traced in many branches. It ranges from direct contributions, such as suggestions of experiments in quantum mechanics (e.g. the so-called Popper experiment, testing the Copenhagen interpretation), to mere inspiration, waking scientists from their dogmatic slumber. Especially his criticism of instrumentalism and his advocacy of realism have been an eye-opener for many. As an illustration, a case from the field of neuroscience is discussed in the paper. It relates to the development of theories about the mechanisms underlying the nerve impulse. The central question after the pioneering studies by Hodgkin and Huxley was how the critical ions permeate the nerve cell membrane (Hille, 2001). Some experimentalists adopted a realistic view and tried to understand the process by constructing mechanistic models, almost in a Cartesian fashion. Others adopted an instrumentalistic, positivistic and allegedly more scientific view and settled for a mathematical black-box description. When it finally became possible to experimentally determine the molecular details, they were found to fit the realistic, mechanistic attempts well, while the instrumentalistic attempts had not led far, thus supporting the Popperian view.
An important part of Popper’s philosophy concerns the mind–brain problem. The present paper discusses two aspects of his philosophy of mind. One aspect relates to the ontology of mind and the theory of biological evolution. Over the years Popper’s interest in evolution steadily grew, from an almost dismissive, patronizing view to giving it a central role in many of his later studies. In the theory of evolution Popper found support for his interactionistic view on the mind–brain problem. This, as will be discussed, is a view that many philosophers find difficult to accept. Another aspect discussed is Popper’s observation that mind has similarities with forces and fields of forces. As Popper points out, the introduction of forces as such (in the dynamism of Newton) could have been used by Descartes’ adherents to avoid the inconsistencies of Cartesian dualism. But Popper developed this idea further, comparing properties of mind with those of forces and fields of forces (Popper, Lindahl and Århem, 1993). His view has renewed the interest in force fields as a model for consciousness, and the present paper discusses some recent hypotheses that claim to solve problems attaching to the dominant present-day mind–brain theories. Several authors even identify consciousness with an electromagnetic field (Jones, 2013). In contrast, Popper proposes that consciousness works via electromagnetic forces (Lindahl and Århem, 1994). This has been criticized as violating thermodynamic conservation laws. The present paper discusses Popper’s defence against this argument. The paper also discusses a related hypothesis that consciousness acts on fields of probability amplitudes rather than on electromagnetic fields. Such an idea has been proposed by Friedrich Beck in response to Popper’s hypothesis (Beck, 1996). The present paper argues that such models, based on quantum mechanical ideas, are often in conflict with Popper’s propensity interpretation of quantum mechanics (Popper, 1982).
Hille, B. (2001). Ion channels of excitable membranes (3rd ed.). Sunderland, MA: Sinauer Associates.
Jones, M. W. (2013). Electromagnetic–field theories of mind. Journal of Consciousness Studies, 20(11-12), 124-149.
Lindahl, B. I. B., & Århem, P. (1994). Mind as a force field: Comments on a new interactionistic hypothesis. Journal of Theoretical Biology, 171, 111-122.
Popper, K. R. (1982). Quantum Theory and the Schism in Physics. London: Hutchinson (from 1992 published by Routledge).
Popper, K. R., Lindahl, B. I. B., & Århem, P. (1993). A discussion of the mind–brain problem. Theoretical Medicine, 14, 167-180.
Beck, F. (1996). Mind-brain interaction: Comments on an article by B. I. B. Lindahl & P. Århem. Journal of Theoretical Biology, 180, 87-89.
Caroline E. Murr (Universidade Federal de Santa Catarina - UFSC, Brazil)
Defamiliarization in science fiction: new perspectives on scientific concepts
ABSTRACT. The notion of defamiliarization, developed by Victor Shklovsky in his book “Theory of Prose” (1917), refers to a technique, used in literature, that has the effect of estranging things that are so familiar that we don’t even notice them anymore. They become automatized, due to familiarity and saturated contact. For Shklovsky, however, literature and art in general are able to disturb our common world views. John Dewey, in Art as Experience (1934), doesn’t mention the Russian author, but presents a similar standpoint. Dewey expounds the idea of “Aesthetic Experience”, which appears to have many similarities to Shklovsky’s approach, asserting that art awakens us from the familiar and allows more meaningful and complete experiences.
This paper aims to analyze the use of scientific conceptions in science fiction, which leads to a new way of looking at them. This new glance modifies the trivial connections of current paradigms in science and also in everyday life. The shift to science fiction’s context would contribute to their defamiliarization, giving way to new possibilities of understanding. According to the examined authors, defamiliarization and aesthetic experience are responsible for bringing to consciousness things that were automatized, putting a new light on them. That appears also to be the case in science fiction, in which the break of expectations may have consequences not only for the paradigms of the sciences, but also for reflection on the role of science in ordinary life. In many cases, scientific notions are already unconsciously accepted in quotidian life, just like everyday assumptions. Besides, science fiction, by exaggerating or pushing scientific theories as far as can be imagined, brings about important and profound considerations regarding philosophical questions as well.
H. G. Wells’ works from around the turn of the 20th century seem to illustrate this process well. For instance, in Wells’ novel The Invisible Man (1897), scientific objects such as matter and light, as well as the prevalent scientific rules they obey, are displaced to another context, breaking usual expectations about them. In fiction, it is possible for matter and light to behave in ways the established paradigm in physics at the end of the 19th century did not permit. It is also interesting to notice that the book was published in a time of crisis in physics, and it seems that Wells absorbed ideas that would change the paradigm within a few years. To claim that science fiction influenced some scientists to initiate scientific revolutions is perhaps too large a step to take in this paper. Nevertheless, it is possible to say that the process of defamiliarization in the reading of science fiction can lead to a new understanding of scientific concepts, inducing reflections that would not be made if the regular context and laws of science were maintained.
ABSTRACT. Students in professional programs or graduate research programs tend to use an excess of vague pronouns in their writing. Reducing vagueness in writing could improve written communication skills, which is a goal of many professional programs. Moreover, instructor effectiveness in reducing vagueness in students’ writing could improve teaching for learning. Bertrand Russell (1923) argued that all knowledge is vague. This research provides evidence that vagueness in writing is mitigated with instructor feedback.
An empirical study tested the hypothesis that providing feedback on vague pronouns would increase clarity in students’ writing over an academic semester. Vague terms such as “this”, “it”, “there”, “those”, “what”, and “these” followed by a verb were highlighted, and a written comment drew students’ attention to vague terms: “Rewrite all sentences with highlighted vague terms throughout your paper for greater clarity in professional writing.”
Writing with “what”, “it”, and other vague pronouns allows students to apply course concepts or describe contexts without understanding either. A collaboration between instructor and student could improve clarity of information communicated by helping students explain their understanding of ideas or principles taught (Faust & Puncochar, 2016).
Eighty-six pre-service teachers and 36 education master’s candidates participated in this research. All participants wrote at a proficient level, as determined by passing scores on a Professional Readiness Examination Test in writing. The instructor and a trained assistant highlighted and counted vague pronouns over six drafts of each participant’s document. Inter-rater reliability using Cohen’s kappa was 0.923 (p < .001). The frequency of vague pronouns decreased noticeably with each successive draft. A repeated-measures ANOVA comparing the use of vague pronouns in a final free-write essay to their use in an initial free-write essay yielded F(1, 40) = 3.963 (p = 0.055).
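For readers unfamiliar with the agreement statistic reported above, the following is a minimal sketch of Cohen's kappa for two raters; the per-sentence codes are invented for illustration and are not the study's data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and
    p_e is the agreement expected by chance from each rater's marginal rates.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    marg_a, marg_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_e = sum((marg_a[lab] / n) * (marg_b[lab] / n) for lab in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical per-sentence codes: 'V' = contains a vague pronoun, 'C' = clear.
instructor = ["V", "V", "C", "C", "V", "C", "C", "V", "C", "C"]
assistant  = ["V", "V", "C", "C", "V", "C", "C", "C", "C", "C"]
print(round(cohens_kappa(instructor, assistant), 3))  # about 0.783 on this toy data
```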
Ninety percent of participants identified an increased awareness of the importance of eliminating vague pronouns to improve writing clarity on end-of-semester self-evaluations. As an example, “While I write now, I find myself using a vague term, but I stop and ask myself, ‘How can I eliminate this vague term to make my paper sound better?’ This type of self-reflection I have never done before and I see a big improvement in the tone of my writing.”
This research provided information on effects of instructor feedback to reduce vague pronouns and thereby improve clarity of students’ writing. As Russell (1923) said, “I shall be as little vague as I know how to be ...” (p. 84).
References
Faust, D., & Puncochar, J. (2016, March). How does “collaboration” occur at all? Remarks on epistemological issues related to understanding / working with ‘the other’. Dialogue and Universalism: Journal of the International Society for Universal Dialogue, 26, 137-144. http://dialogueanduniversalism.eu/index.php/12016-human-identity/
Russell, B. (1923). Vagueness. Australasian Journal of Psychology and Philosophy, 1(2), 84-92. https://doi.org/10.1080/00048402308540623
Zhengshan Jiao (Institute for the History of Natural Science, Chinese Academy of Sciences, China)
The History of Science-related Museums: A Comparative and Cultural Study
ABSTRACT. Science-related museums are special kinds of museums that are concerned with science, technology, the natural world, and other related issues. Today, there are many science-related museums worldwide, operating in different styles and playing different social roles such as collecting, conserving and exhibiting objects, researching relevant issues, and educating the public. The different development processes of science-related museums in the Western world and in China show that such museums are outcomes of social and cultural conditions such as the economy, local culture, policy, people's views on science, and so on. The Western world is considered to be the birthplace of science-related museums, where they passed through different stages of development that include the natural history museum, the traditional science and technology museum, and the science centre. In China, by contrast, museums were imports: foreigners and Western culture shaped their emergence, though they are developing rapidly today.
ABSTRACT. We describe a notion of robustness for configurational causal models (CCMs, e.g. Baumgartner & Thiem (2015)), present simulation results to validate this notion, and compare it to notions of robustness in regression analytic methods (RAMs).
Where RAMs relate variables to each other and quantify net effects across varying background conditions, CCMs search for dependencies between values of variables, and return models that satisfy the conditions of an INUS-theory of causation. As such, CCMs are tools for case-study research: a unit of observation is a single case that exhibits some configuration of values of measured variables. CCMs automate the process of recovering causally interpretable dependencies from data via cross-case comparisons. The basic idea is that causes make a difference to their effects, and causal structures can be uncovered by comparing otherwise homogeneous cases where some putative cause- and effect-factors vary suitably.
CCMs impose strong demands on the analysed data that are often not met in real-life data. The most important of these is causal homogeneity: unlike RAMs, CCMs require the causal background of the observed cases to be homogeneous, as a sufficient condition for the validity of the results. This assumption is often violated. In addition, data may include random noise, and lack sufficient variation in measured variables. These deficiencies may prevent CCMs from finding any models at all. Thus, CCM methodologists have developed model-fit parameters that measure how well a model accounts for the observed data and that can be adjusted to find models that explain the data less than perfectly.
Lowering model-fit requirements increases the underdetermination of models by data, making model choice harder. We performed simulations to investigate the effects that lowering model-fit requirements has on the reliability of the results. These reveal that, given noisy data, the models with the best fit frequently include irrelevant components: a type of overfitting. In RAMs, overfitting is remedied by robustness testing: roughly, a robust model is insensitive to the influence of particular observations. This idea cannot be transported to the CCM context, which assumes a case-study setting: one’s conclusions ought to be sensitive to cross-case variation. But this also makes CCMs sensitive to noise. However, a notion of robustness as the concordance of results derived from different models (e.g. Wimsatt 2007) can be implemented in CCMs.
We implement the notion of a robust model as one which, in the causal ascriptions it makes, agrees with many other models of the same data and does not disagree with many of them. Simulation results demonstrate that this notion can be used as a reliable criterion of model choice given massive underdetermination of models by data. Lastly, we summarize the results with respect to what they reveal about the differences between CCMs and RAMs, and how they help to improve the reliability of CCMs.
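The following is a minimal sketch of the robustness-as-concordance idea under a deliberately simplified representation, in which a CCM model is reduced to a set of claimed cause-effect pairs; the four candidate models and the scoring rule are illustrative assumptions, not the authors' measure.

```python
# Each candidate model is a set of claimed (cause, effect) pairs recovered from
# the same data under relaxed model-fit thresholds (hypothetical output).
MODELS = {
    "m1": {("A", "E"), ("B", "E")},
    "m2": {("A", "E"), ("B", "E"), ("C", "E")},   # extra, possibly irrelevant component
    "m3": {("A", "E"), ("B", "E")},
    "m4": {("A", "E")},
}

def robustness(name, models):
    """Concordance score: ascriptions shared with other models count in favour,
    ascriptions on which the models differ (symmetric difference) count against.
    Purely illustrative."""
    target = models[name]
    agree = disagree = 0
    for other, ascriptions in models.items():
        if other == name:
            continue
        agree += len(target & ascriptions)
        disagree += len(target ^ ascriptions)
    return agree - disagree

for m in MODELS:
    print(m, robustness(m, MODELS))
```

On this toy scoring, the model positing the extra component scores lower than the models that stick to the ascriptions shared across the candidate set, which is the intended use of robustness as a criterion of model choice.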
References:
Baumgartner, M. & Thiem, A. 2015. Identifying complex causal dependencies in configurational data with coincidence analysis. R Journal, 7, 1.
Wimsatt, W. 2007. Re-engineering philosophy for limited beings. Cambridge, MA: Harvard University Press.
Discontinuity and Robustness as Hallmarks of Emergence
ABSTRACT. In the last decades, the interest in the notion of emergence has steadily grown in philosophy and science, but no uncontroversial definitions have yet been articulated. Classical formulations generally focus on two features: irreducibility, and novelty. In the first case, an entity is emergent from another one if the properties of the former cannot be reduced to the properties of the latter. In the second case, a phenomenon is emergent if it exhibits novel properties not had by its component parts.
Despite describing significant aspects of emergent processes, both these definitions raise several problems. On the one hand, the widespread habit of identifying emergent entities with entities that resist reduction is nothing more than explaining an ambiguous concept through an equally puzzling notion. Just like emergence, in fact, reduction is not at all a clear, uncontroversial technical term. On the other hand, a feature such as qualitative novelty can easily appear to be an observer-relative property, rather than an indicator of the ontological structure of the world.
In view of the above, to provide a good model of emergence other features should be taken into consideration too, and the ones which I will focus on are discontinuity and robustness.
The declared incompatibility between emergence and reduction reflects the difference between the models of reality underlying them. While reductionism assigns to the structure of reality a mereological and nomological continuity, emergentism leaves room for discontinuity instead. The reductionist universe is composed of a small number of fundamental (micro)physical entities and of a huge quantity of combinations of them. In this universe, the nature of the macroscopic entities depends upon that of the microscopic ones, and no physically independent property is admitted. Accepting the existence of genuine emergence, conversely, implies the claim that the structure of the world is discontinuous both metaphysically and nomologically. Matter is organized in different ways at different scales, and there are phenomena which are consequently scale-relative and have to be studied by different disciplines.
In this framework, emergence represents the specific trait of macroscopic entities showing scale-relative properties which depend upon the organizational constraints of their components’ relationships rather than upon their individual properties. While the laws of physics remain true and valid across many scales, other laws and regularities emerge with the development of new organizational structures whose behavior is often insensitive to microscopic constraints. And that is where the notion of robustness comes into the picture. Robustness is the ability of a system to preserve its features despite fluctuations and perturbations in its microscopic components and environmental conditions. Emergent phenomena, therefore, rather than novel, are robust in their insensitivity to the lower level from which they emerge.
Emergence, therefore, does not describe atypical processes in nature, nor the way in which we (cannot) explain reality. It suggests, by contrast, that the structure of the world is intrinsically differentiated, and that to each scale and organizational layer there correspond distinctive emergent and robust phenomena exhibiting features absent at lower or higher scales.
ABSTRACT. Levels of Being: An Egalitarian Ontology
This paper articulates and defends an egalitarian ontology of levels of being that solves a number of philosophical puzzles and suits the needs of the philosophy of science. I argue that neither wholes nor their parts are ontologically prior to one another. Neither higher-level properties nor lower-level properties are prior to one another. Neither is more fundamental; neither grounds the other.
Instead, whole objects are portions of reality considered in one of two ways. If they are considered with all of their structure at a given time, they are identical to their parts, and their higher-level properties are identical to their lower-level properties. For most purposes, we consider wholes in abstraction from most of their parts and most of their parts’ properties. When we do this, whole objects are subtractions of being from their parts—they are invariants under some part addition and subtraction.
The limits to what lower level changes are acceptable are established by the preservation of properties that individuate a given whole. When a change in parts preserves the properties that individuate a whole, the whole survives; when individuative properties are lost by a change in parts, the whole is destroyed. By the same token, higher-level properties are subtractions of being from lower-level properties—they are part of their realizers and are also invariant under some changes in their lower level realizers.
This account solves the puzzle of causal exclusion without making any property redundant. Higher-level properties produce effects, though not as many as their realizers. Lower-level properties also produce effects, though more than the properties they realize. For higher-level properties are parts of their realizers. There is no conflict and no redundancy between them causing the same effect.
As long as we focus on the right sorts of effects—effects for which higher-level properties are sufficient causes—to explain effects in terms of higher-level causes is more informative than in terms of lower level ones. For adding the lower-level details adds nonexplanatory information. In addition, tracking myriad lower level parts and their properties is often practically unfeasible. In many cases, we may not even know what the relevant parts are. That’s why special sciences are both necessary and useful: to find the sorts of abstractions that provide the best explanation of higher-level phenomena, whereas tracking the lower level details may be unfeasible, less informative, or both.
Given this egalitarian ontology, traditional reductionism fails because, for most scientific and everyday purposes, there is no identity between higher levels and lower levels. Traditional antireductionism also fails because higher levels are not wholly distinct from lower levels. Ontological hierarchy is rejected wholesale. Yet each scientific discipline and subdiscipline has a job to do—finding the explanations of phenomena at any given level—and no explanatory job is more important than any other because they are all getting at some objective aspect of reality.
There has been a tremendous development of computerized systems for artificial intelligence in the last thirty years.
In some domains, machines now get better results than humans:
--playing chess or even Go, beating the best champions,
--medical diagnosis (for example in oncology),
--automatic translation,
--vision: recognizing faces in one second from millions of photos...
These successes rely on:
--progress in hardware technology, in computational speed, and in the capacity to handle Big Data,
--new ideas in the architecture of computers, with neural networks originally inspired by the structure of visual processing in the brain, and progress in mathematical algorithms for exploiting statistical data, extending Markovian methods.
These developments have led the main actors to talk about a new science, or rather a new techno-science: machine learning, defined by the fact that it is able to improve its own capacities by itself (see [L]).
Some opponents give various reasons for their scepticism: some follow an old tradition of identifying “Numbers” with modern industrial civilization [W]; some offer theoretical arguments coming from the foundations of information and complexity theory [Ma]; some doubt the Bayesian inferential approach to science, refusing prediction without understanding [Th], which might lead to a radical attack on classical science [H]. In particular, the technique of neural networks has created a new type of knowledge with the very particular mystery of the “Black Box”. We will describe the new kind of “truth without verifiability” that issues from this practice.
We will discuss these various topics carefully, in particular:
Is it a new science or a new techno-science?
Where is the limit between the science of machine learning and the various conjectural visions leading to the science-fiction ideas of transhumanism?
What are the possible consequences of the recent successes of AI for our approach to language, to intelligence, and to human cognitive functioning in general? And finally, what are the limits of this numerical invasion of the world?
[H] Horgan, J. The End of Science. Broadway Books.
[L] LeCun, Y. Le Deep Learning: une révolution en intelligence artificielle. Collège de France, February 2016.
[Ma] Manin, Y. Kolmogorov complexity as a hidden factor of scientific discourse: from Newton’s law to data mining.
[Th] Thom, R. Prédire n’est pas expliquer. Eschel, 1991.
ABSTRACT. In the recent literature, there has been much discussion about the explainability of ML algorithms. This property of explainability, or lack thereof, is critical not only for scientific contexts, but also for the potential use of those algorithms in public affairs. In this presentation, we focus on the explainability of bureaucratic procedures to the general public. The use of unexplainable black boxes in administrative decisions would raise fundamental legal and political issues, as the public needs to understand bureaucratic decisions in order to adapt to them, and possibly to exercise its right to contest them. In order to better understand the impact of ML algorithms on this question, we need a finer diagnosis of the problem and of what should make such algorithms particularly hard to explain. In order to tackle this issue, we turn the tables around and ask: what makes ordinary bureaucratic procedures explainable? A major part of such procedures are decision trees or scoring systems. We make the conjecture, which we test on several case studies, that those procedures typically enjoy two remarkable properties. The first is compositionality: the decision is made of a composition of subdecisions. The second is elementarity: the analysis of the decision ends in easily understandable elementary decisions. The combination of these properties has a key consequence for explainability, which we call “explainability by extracts”: it becomes possible to explain the output of a given procedure through a contextual selection of subdecisions, without the need to explain the entire procedure. This allows bureaucratic procedures to grow in size without compromising their explainability to the general public.
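As a concrete, purely hypothetical illustration of compositionality, elementarity, and explainability by extracts, consider the toy eligibility tree below: explaining one applicant's outcome only requires reporting the sub-decisions on the path actually taken, not the whole tree. The variables, thresholds, and outcomes are invented.

```python
# A toy bureaucratic decision tree (hypothetical rules, for illustration only).
# Compositionality: the decision is a composition of sub-decisions (tree nodes).
# Elementarity: each sub-decision tests a single, easily understood variable.
RULES = ("income", 20000,
         ("household_size", 3,
          "rejected",                    # low income, small household
          "approved"),                   # low income, large household
         "rejected")                     # income at or above the threshold

def decide(applicant, node, trace=None):
    """Evaluate the tree and record the sub-decisions actually used (the 'extract')."""
    trace = [] if trace is None else trace
    if isinstance(node, str):            # leaf = elementary outcome
        return node, trace
    variable, threshold, low_branch, high_branch = node
    value = applicant[variable]
    branch = low_branch if value < threshold else high_branch
    trace.append(f"{variable} = {value} ({'<' if value < threshold else '>='} {threshold})")
    return decide(applicant, branch, trace)

outcome, extract = decide({"income": 15000, "household_size": 4}, RULES)
print(outcome)                 # approved
print(" AND ".join(extract))   # the contextual extract that explains this decision
```

The printed extract ("income = 15000 (< 20000) AND household_size = 4 (>= 3)") explains this particular decision without exposing, or requiring the citizen to understand, the rest of the procedure.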
In the case of ML procedures, we show that the properties of compositionality and elementarity correspond to properties of the segmentation of the data space by the execution of the algorithm. Compositionality corresponds to the existence of well-defined segmentations, and elementarity corresponds to the definition of those segmentations by explicit, simple variables. But ML algorithms can lose either of those properties. Such is the case of opaque ML, as illustrated by deep-learning neural networks, where both properties are actually lost. This entails an enhanced dependence of a given decision on the procedure as a whole, compromising explainability by extracts. If ML algorithms are to be used in bureaucratic decisions, it becomes necessary to find out whether the properties of compositionality and elementarity can be recovered, or whether the current opacity of some ML procedures is due to a fundamental scientific limitation.
The Historical Basis for Algorithmic Transparency as Central to AI Ethics
ABSTRACT. This paper embeds the concern for algorithmic transparency in artificial intelligence within the history of technology and ethics. The value of transparency in AI, according to this history, is not unique to AI. Rather, black box AI is just the latest development in the 200-year history of industrial and post-industrial technology that narrows the scope of practical reason. Studying these historical precedents provides guidance as to the possible directions of AI technology, towards either the narrowing or the expansion of practical reason, and the social consequences to be expected from each.
The paper first establishes the connection between technology and practical reason, and the concern among philosophers of ethics and politics about the impact of technology in the ethical and political realms. The first generation of such philosophers, influenced by Weber and Heidegger, traced the connection between changes in means of production and the use of practical reason for ethical and political reasoning, and advocated in turn a protection of practical reasoning – of phronesis – from the instrumental and technical rationality valued most by modern production. More recently, philosophers within the postphenomenological tradition have identified techne within phronesis as its initial step of formation, and thus call for a more empirical investigation of particular technologies and their enablement or hindering of phronetic reasoning.
This sets the stage for a subsequent empirical look at the history of industrial technology from the perspective of technology as an enabler or hindrance to the use of practical reasoning and judgment. This critical approach to the history of technology reveals numerous precedents of significant relevance to AI which, from a conventional approach to the history of technology focused on technical description, appear to be very different from AI, such as the division of labor, assembly lines, power machine tools and computer-aided machinery. What is revealed is the effect of most industrial technology, often quite intentional, in deskilling workers by narrowing the scope of their judgment, whereas other innovations have the potential to expand the scope of workers’ judgment. In particular, this section looks at the use of statistics in industrial production, as it is the site of a nearly century-long tension between approaches explicitly designed to narrow or expand the judgment of workers.
Finally, the paper extends this history to contemporary AI – where statistics is the product, rather than a control on the means of production – and presents the debate on explainable AI as an extension of this history. This final section explores the arguments for and against transparency in AI. Equipped with the guidance of 200 years of precedents, the possible paths forward for AI are much clearer, as are the effects of each path for ethics and political reasoning more broadly.
Alfonso García Lapeña (UB University of Barcelona, LOGOS Research Group in Analytic Philosophy, Spain)
Scientific Laws and Closeness to the Truth
ABSTRACT. Truthlikeness is a property of a theory or a proposition that represents its closeness to the truth of some matter. In the similarity approach, roughly, the truthlikeness of a theory or a proposition is defined according to its distance from the truth measured by an appropriate similarity metric. In science, quantitative deterministic laws typically have a real function representation F(x_1,…,x_n) in an n-dimensional mathematical space (sometimes called the state-space). Suppose law A is represented by F_A(x) and the truth in question (the true connexion between the magnitudes) is represented by F_T(x). Then, according to Niiniluoto (1982, 1985, 1987, 1998), Kieseppä (1996, 1996) and Kuipers (2000), among others, we can define the degree of truthlikeness of a law A with the Minkowski metric for functions:
Tr(A) = d(A,T) = (∫ |F_A(x) − F_T(x)|^k dx)^(1/k)
We will discuss a counterexample to this definition presented by Thom (1975), Weston (1992) and Liu (1999), together with a modification of it that we think is much clearer and more intuitive. We will then argue that the problem lies in the fact that the proposal takes Tr to be just a function of accuracy, but an accurate law can be completely wrong about the actual “causal structure” of the world. For example, if y=x, then y’=x+sin(x) is a highly accurate law for many purposes, but totally wrong about the true relation between x and y. We will present a modification of d into a new metric d_an that defines the truthlikeness of quantitative deterministic laws according to two parameters: accuracy and nomicity. The first parameter is correctly measured by the Minkowski metric. The second parameter can be measured by the difference of the derivatives. Therefore (for some interval n, m):
Where Tr(A) = d_an(A,T) and Tr(A) > Tr(B) if and only if d_an(A,T) < d_an(B,T).
Once d_an is defined in this way, we can represent all possible laws regarding some phenomenon in a two-dimensional space and extract some interesting insights. The point (0, 0) corresponds to the truth in question, and each point corresponds to a possible law with a different degree of accuracy and nomicity. We can define level lines (sets of theories that are equally truthlike) and represent scientific progress as the move from one level line to another closer to (0, 0), where scientific progress may be achieved by gains in accuracy and nomicity in different degrees. We can also define values "a" of accuracy and "n" of nomicity under which laws can be considered truthlike in an absolute sense. We will see how these values can be rationally estimated according to scientific practice.
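As a minimal numerical sketch of the two parameters for the y = x example above (the exact way accuracy and nomicity are combined into d_an is not reproduced here; the interval [0, 10] and the exponent k = 2 are assumptions):

```python
# Sketch of the accuracy and nomicity components for the true law y = x and the
# rival law y = x + sin(x). Interval [n, m] = [0, 10] and k = 2 are assumptions.
import numpy as np

n, m, k = 0.0, 10.0, 2
x = np.linspace(n, m, 10_001)
dx = x[1] - x[0]

f_true = x                 # the true connection between the magnitudes
f_rival = x + np.sin(x)    # accurate but nomically wrong rival law

def minkowski_distance(f, g):
    # (∫ |f - g|^k dx)^(1/k), approximated by a Riemann sum
    return (np.sum(np.abs(f - g) ** k) * dx) ** (1.0 / k)

accuracy_component = minkowski_distance(f_rival, f_true)
nomicity_component = minkowski_distance(np.gradient(f_rival, x), np.gradient(f_true, x))

# The pointwise error |sin x| never exceeds 1 while y grows to 10 (high accuracy),
# whereas the slope error |cos x| is of the same order as the true slope 1 (poor nomicity).
print(f"accuracy component (function distance)   = {accuracy_component:.3f}")
print(f"nomicity component (derivative distance) = {nomicity_component:.3f}")
```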
Finally, we will apply our proposal d_an to a real case study. We will estimate the degrees of truthlikeness of four laws (Ideal gas model, Van der Waals model, Beattie–Bridgeman model and Benedict–Webb–Rubin model) regarding Nitrogen in its gas state. We will argue that
References
Kieseppä, I. A. (1996). Truthlikeness for Multidimensional, Quantitative Cognitive Problems.
Kieseppä, I. (1996). Truthlikeness for Hypotheses Expressed in Terms of n Quantitative Variables. Journal of Philosophical Logic, 25(2).
Kuipers, T. A. F. (2000). From Instrumentalism to Constructive Realism: On Some Relations Between Confirmation, Empirical Progress, and Truth Approximation. Springer.
Liu, C. (1999). “Approximation, idealization, and laws of nature”. Synthese 118.
Niiniluoto, I. (1982). “Truthlikeness for Quantitative Statements”, PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association.
Niiniluoto, I. (1985) “Theories, Approximations, and Idealizations”, Logic, Methodology, and Philosophy of Science VII, North-Holland, Amsterdam.
Niiniluoto, I. (1987) Truthlikeness, Dordrecht: Reidel.
Niiniluoto, I. (1998). “Verisimilitude: The Third Period”, The British Journal for the Philosophy of Science.
Thom, R. (1975), “Structural Stability and Morphogenesis: An Outline of a General Theory”. Reading, MA: Addison-Wesley.
Weston, T. (1992) “Approximate Truth and Scientific Realism”, Philosophy of Science, 59(1): 53–74.
ABSTRACT. Some recent literature (Hicks and Elswyk 2015; Bhogal 2017) has argued that non-Humean conceptions of laws of nature share a weakness with the Humean conceptions. Specifically, both conceptions face a problem of explanatory circularity: Humean and non-Humean conceptions of laws of nature agree that law statements are universal generalisations; thus, both conceptions are vulnerable to an explanatory circularity problem between the laws of nature and their instances.
In the literature, the terminology “explanatory circularity problem” has been used to designate two slightly different circularities. The first is a full explanatory circularity, hereafter the problem of circularity C. Synthetically, a law of nature is inferred from an observed phenomenon and is thereafter used to explain that same observed phenomenon. Thus, an observed phenomenon explains itself. The other circularity is a problem of self-explanation, hereafter the problem of circularity SE. The problem of circularity SE is a sub-problem of the problem of circularity C. A law of nature explains an observed phenomenon, but the law includes that same phenomenon in its content.
Hicks and Elswyk (2015) propose the following argument for the problem of circularity C:
(P1) The natural laws are generalizations. (HUMEANISM)
(P2) The truth of generalizations is (partially) explained by their positive instances. (GENERALIZATION)
(P3) The natural laws explain their instances. (LAWS)
(P4) If A (partially) explains B and B (partially) explains C, then A (partially) explains C. (TRANSITIVITY)
(C1) The natural laws are (partially) explained by their positive instances. (P1 & P2)
(C2) The instances of laws explain themselves. (P3, P4, & C1) (Hicks and Elswyk 2015, 435)
They claim that this argument also applies to the non-Humean conceptions of laws of nature: “Humeans and anti-Humeans should agree that law statements are universal generalizations (…) If we’re right about this much, anti-Humeans are vulnerable to a tu quoque.” (Hicks and Elswyk 2015, 435)
The argument above can be reframed to underpin the problem of circularity SE.
(P1) The natural laws are generalizations (HUMEANISM)
(P2)* If the natural laws are generalizations, then the natural laws are (partially) constituted by their instances.
(P3) The natural laws explain their instances. (LAWS)
(C2) The instances of the law statements explain themselves.
In this presentation, I will discuss the premises of the above arguments. I will try to show that Armstrong’s necessitarian view of laws of nature (Armstrong 1983) – a non-Humean conception – is invulnerable to these explanatory circularity problems. At the end, I will analyse a semantic circular condition for unsuccessful explanations, recently proposed by Shumener (2018), regarding this discussion.
References
Armstrong, David. 1983. What Is a Law of Nature? Cambridge: Cambridge University Press.
Bhogal, Harjit. 2017. ‘Minimal Anti-Humeanism’. Australasian Journal of Philosophy 95 (3): 447–460.
Hicks, Michael Townsen, and Peter van Elswyk. 2015. ‘Humean Laws and Circular Explanation’. Philosophical Studies 172 (2): 433–443.
Shumener, Erica. 2018. ‘Laws of Nature, Explanation, and Semantic Circularity’. The British Journal for the Philosophy of Science. doi:10.1093/bjps/axx020.
Natalia Kozlova (MOSCOW STATE PEDAGOGICAL UNIVERSITY, Russia)
The problem of figurativeness in science: From communication to the articulation of scientific knowledge
ABSTRACT. In understanding the essentially rhetorical character of science, special attention should be paid to the place of figurativeness in research discourse. The central question of my presentation is whether it is possible, in the absence of figurativeness, to produce radically different meanings that will transform the conceptual space of science. In most cases, the role of figurativeness is reduced to the optimisation of knowledge transmission. The expressive power of language is meant to make the transmitted knowledge more accessible to another and, in the best case, to help to ‘transplant it [knowledge] into another, as it grew in [one’s] own mind’ (Bacon). One of the rhetorical elements most widely used in research discourse, the metaphor, often becomes an irreplaceable vehicle, for it makes it possible to create an idea of the object, i.e. to generate a certain way to think about it. The use of figurative elements in scientific language thus results both in communicative optimisation and, owing to the uniqueness of the interpretation process, in the discovery of new ways to understand the object. However, the role of figurativeness in research discourse is not limited to knowledge transmission. Despite the significance of communication (i.e. either explicit or implicit inclusion of another into the creative process of the self) for the development of a scientific ontology, the major source of intention is the cognising subject. Thus, in considering the role of figurativeness in scientific discourse, the focus should be shifted from the concept of communication to that of articulation, in other words, from another to the self. The function of figurativeness as a tool for the ‘articulation’ of knowledge is determined by the features of scientific ‘articulation’ per se. The central question of the presentation can be supplemented with a further one: whether it is possible to ‘capture’, to register, the meaning-assigning synthesis of the elusive ‘actualities of consciousness’ (Husserl) beyond figurativeness. This concerns the ‘act of thought’ that unlocks the boundaries of meaning conventions and thus transforms scientific ontology. One can assume that the answer to this question is ‘no’. In this presentation, I will put forward arguments in support of this answer. To build and develop a theoretical model, the mere abstraction of the object is not sufficient. There is a need for a constant ‘live’ interest in the object, i.e. the persistent imparting of significance to it, accompanied by the segregation of the object from the existing ontology. Although not realized by the author, the mechanisms of figurativeness may assume this function and make it possible to isolate and ‘alienate’ the object, thus granting it an ‘intellectualised’ status and making it ‘inconvenient’. Always beyond the realm of convenience, figurativeness by default transcends the existing conceptual terrain. Sometimes, it refutes any objective, rationalised convenience. It even seems to be aimed against conveniences. It means an upsurge in subjectivity, which, in the best case, destroys the common sense of things that is embedded in the forms of communicative rationality. From this point of view, figurativeness is an essential feature of the ‘articulation’ of scientific knowledge.
This study is supported by the Russian Foundation for Basic Research within the project ‘The Rhetoric of Science: Modern Approaches’ No. 17-33-00066
Bibliography
1. Bacon, F. The Advancement of Learning. Clarendon Press, 1873.
2. Husserl, E. Cartesian Meditations: An Introduction to Phenomenology. Translated by Dorion Cairns, Springer Science+Business Media, 1960.
Media memory as the object of historical epistemology
ABSTRACT. Introduction. Under modern conditions the influence of electronic media on the social construction of historical memory is huge. Historical information is transferred to a digital format, and knowledge of the Past is accumulated not only by archives and libraries but also by electronic databases. Written memory gives way to electronic memory, and the development of Internet technologies provides a massive number of users with access to it.
Today the idea of the Past is formed not only by the efforts of professional historians, but also by Internet users. The set of individual images of history creates collective memory. Modern society is going through a memory boom, which is connected with the ability of users to produce knowledge of the Past and to transmit it through new media. Thus, memory moves from personal and cultural space to the sphere of public media. This process marks the emergence of media memory.
Methods. The research on the influence of media on individual and collective memory is based on M. McLuhan's works. The study of social memory is carried out within M. Halbwachs's theory of the «social framework of memory», J. Assmann's theory of cultural memory and P. Nora's theory of «places of memory». The analysis of ideas of the Past is based on the methods of historical epistemology presented in the works of H. White and A. Megill.
Discussion. A small number of studies is devoted to the influence of media on social memory. One such work is the collective monograph "Silence, Screen, and Spectacle: Rethinking Social Memory in the Age of Information and New Media", edited by L. Freeman, B. Nienass and R. Daniell (2014). The authors note that new social media change the nature of the perception of the Present and the Past, revealing the Past through the metaphors of «silence», «screen», and «spectacle».
The mediatization of society has produced a special mechanism of storage, conversion and transmission of information, which has changed the nature of the production of historical knowledge and the practice of oblivion. The periods of storage of social information have also changed.
In line with the above, the author defines media memory as the digital system of storage, transformation, production and dissemination of information about the Past. The historical memory of individuals and communities is formed on the basis of media memory. Media memory can be considered as a virtual social mechanism of storing and oblivion; it can provide various forms of representation of history in everyday space, expand the practices of representation of the Past and of commemoration, and increase the number of those creating and consuming memorial content.
Standing on the position of historical epistemology, we can observe the emergence of new ways of cognition of the Past. Media memory selects historical knowledge, including relevant information about the Past in the agenda and consigning to oblivion the Past for which there is no social need. There is also a segmentation of historical knowledge between various elements of the media sphere. It is embodied in a variety of historical Internet resources available to users belonging to different target audiences.
Media memory is democratic. It is created on the basis of free expression of thoughts and feelings by available language means. Photos and documentary evidence play equally important roles in the formation of ideas of the Past, alongside subjective perception of reality and evaluative statements. Attempts to hide any historical information or withdraw it from public access lead to its greater distribution.
Conclusion. Media memory as a form of collective memory is set within the concept of post-truth, in which personal history and personal experience of reality replace objective data for a particular person. Knowledge of history gains new meanings, methods and forms, and this in turn makes researchers look for new approaches within historical epistemology.
10:00
Sophia Tikhonova (Saratov State University N G Chernyshevsky, Russia)
Knowledge production in social networks as the problem of communicative epistemology
ABSTRACT. Introduction. The communicative dimension of epistemological discourse is connected with research on how forms of communication influence the production of knowledge. The modern communication revolution is determined by the new social role of Internet technologies, which mediate social communication at different levels and open mass access to all kinds of communication. The development of the Internet services of social networks gives users ever more refined instruments of communication management. These tools give individuals the possibility to develop their own networks of any configuration despite minimal information about partners and to distribute knowledge outside the traditional institutional schemes of modernity. The spread of social networks has a cognitive effect because it ensures the inclusion of mass users in the production of informal knowledge. The author believes that Internet content is a specific form of ordinary knowledge, including special discursive rules for the production of knowledge, as well as a system of its verification and legitimation.
Methods. The research on media influence on the cognitive structures of communication is based on M. McLuhan's ideas; the analysis of network modes of knowledge production is based on M. Granovetter and M. Castells's network approach; the cognitive status of Internet content is substantiated by means of the concept of ordinary knowledge of M.L. Bianca and P. Piccari. The author's arguments are based on the communication approach, which brings the categories of social action, the communicative act and the act of cognition closer together.
Discussion. Ordinary knowledge is quite a marginal problem in epistemology, and a rather small amount of research is devoted to it. One of the key works in this sphere is the collective monograph "Epistemology of Ordinary Knowledge", edited by M.L. Bianca and P. Piccari (2015). In this work M.L. Bianca substantiates the concept according to which ordinary knowledge is a form of knowledge that not only allows one to gain epistemic access to the world, but also includes the development of models of the world which possess different degrees of reliability. The feature of this form is that ordinary knowledge can be reliable and relevant even though it lacks the reliability of scientific knowledge.
The question of how the media sphere changes the formation of ordinary knowledge remains poorly studied. At the outset, the technical principles of operating with content determine the epistemic processes connected with the growing complexity of the structure of the message. The native environment of ordinary knowledge formation is thinking and oral speech. The use of text splits the initial syncretism of ordinary knowledge, increasing the degree of its reflexivity and subordinating it to genre norms (literary, documentary, journalistic), i.e. an initial formalization. The use of the basic elements of a media text (graphic, audio and visual inserts) strengthens genre eclecticism and expands the possibilities for user self-expression, subject-centricity and the subjectivity of the message.
The dominance of subjective elements in the advancement of media content is captured by the neologism "post-truth". The author defines post-truth as an independent concept of media discourse, possessing negative connotations and emphasizing the influence of interpretation over factography. The communicative essence of post-truth comes down to the effect of belief as the personal and emotional relation to the subject of the message. Post-truth combines the global with the private, personalizes macro-events and facilitates the formation of their assessment for the recipient.
The post-truth as transmission of subjectivity is based on representation of personal subjective experience of world cognition, i.e. its core is ordinary knowledge, on the platform of which personal history, personal experience and personal truth are formed, replacing objective data.
The post-truth does not mean direct oblivion and depreciation of the truth. The emotionally charged attitude acts as the filter for the streams of diverse content in conditions of the information overload. Through the post-truth people also cognize, and, at the same time, express themselves, create identities and enter collective actions.
Conclusion. Communicative epistemology as a methodological project offers new prospects for research on the production of knowledge in social networks. According to the author, social networks as a special channel transform ordinary knowledge into informal knowledge.
ABSTRACT. The study of medical guidelines disagreement in the context of the epistemology of disagreement (Goldman, 2011; Christensen & Lackey, 2013) may strongly contribute to the clarification of epistemic peer disagreement problems encoded in scientific (medical) guidelines. Nevertheless, the clarification of peer disagreement under multiple guidelines may require further methodological development to improve cognitive grasp, given the great magnitude of data and information in them, as in the case of multi-expert decision-making (Garbayo, 2014; Garbayo et al., 2018). In order to fill this methodological gap, we propose an innovative computational epistemology of disagreement platform for the study of epistemic peer evaluations of medical guidelines. The main epistemic goal of this platform is to analyze and refine models of epistemic peer disagreement with the computational power of natural language processing, to improve the modeling and understanding of peer disagreement under encoded guidelines regarding causal propositions and action commands (Hamalatian & Zadrozny, 2016). To that effect, we propose to measure the conceptual distances between guideline terms in their scientific domains with natural language processing tools and topological analysis, to add modeling precision to the characterization of epistemic peer disagreement in its specificity, while contrasting multiple guidelines simultaneously.
To develop said platform, we study the breast cancer screening medical guidelines disagreement (CDC) as a test case. We provide a model-theoretic treatment of propositions of conflicting breast cancer guidelines, map terms/predicates in reference to the medical domains in breast cancer screening, and investigate the conceptual distances between them. The main epistemic hypothesis in this study is that medical guidelines disagreement on breast cancer screening, when translated into conflicting epistemic peer positions, may represent a Galilean-idealization type of model of disagreement that discounts relevant aspects of peer characterization, which a semantic treatment of contradictions and disagreement may further help to clarify (Zadrozny, Hamatialam, Garbayo, 2017). A new near-peer epistemic agency classification in reference to the medical sub-areas involved may be required as a result, to better explain some disagreements across fields such as oncology, gynecology, mastology, and family medicine. We also generate a topological analysis of contradictions and disagreement in breast cancer screening guidelines with sheaves, taking into consideration conceptual distance measures, to further explore in geometrical representation the continuities and discontinuities in such disagreements and contradictions (Zadrozny & Garbayo, 2018).
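As a toy sketch of one ingredient of such a platform, conceptual distance between guideline terms could be approximated by cosine distance between vector representations of the terms; the terms and vectors below are invented placeholders, not the authors' actual NLP pipeline or data:

```python
# Toy illustration only: cosine distance between term vectors as a stand-in for
# "conceptual distance" between guideline terms. Terms and vectors are invented
# placeholders, not output of the actual natural language processing pipeline.
import numpy as np

term_vectors = {
    "mammography screening": np.array([0.9, 0.1, 0.3]),
    "breast self-examination": np.array([0.7, 0.4, 0.2]),
    "biennial screening interval": np.array([0.8, 0.0, 0.5]),
}

def cosine_distance(u, v):
    return 1.0 - float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

terms = list(term_vectors)
for i, a in enumerate(terms):
    for b in terms[i + 1:]:
        print(f"d({a!r}, {b!r}) = {cosine_distance(term_vectors[a], term_vectors[b]):.3f}")
```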
Bibliography:
CDC, “Breast Cancer Screening Guidelines for Women”, accessed 2017 at http://www.cdc.gov/cancer/breast/pdf/BreastCancerScreeningGuidelines.pdf
Christensen, D., Lackey, J. (eds.) The Epistemology of Disagreement: New Essays. Oxford University Press, 2013.
Garbayo, L. “Epistemic considerations on expert disagreement, normative justification, and inconsistency regarding multi-criteria decision making. In Ceberio, M & Kreinovich, W. (eds.) Constraint programming and decision making, 35-45, Springer, 2014.
Garbayo, L., Ceberio, M., Bistarelli, S. Henderson, J. “On modeling Multi-Experts Multi-Criteria Decision-Making Argumentation and Disagreement: Philosophical and Computational Approaches Reconsidered. In Ceberio, M & Kreinovich, W. (eds.) Constraint Programming and Decision-Making: Theory and Applications, Springer, 2018.
Goldman, A & Blanchard, T. “Social Epistemology”. In Oxford Bibliographies Online, OUP, 2011.
Hamalatian, H., Zadrozny, W. “Text mining of Medical Guidelines. In Proc. of the Twenty-Ninth Intern. Florida Artificial Intelligence Res. Soc. Cons.: FLAIRS-29. Poster Abstracts. AAAI.
Zadrozny, W.; Garbayo, L. “A Sheaf Model of Contradictions and Disagreements. Preliminary Report and Discussion”. arXiv:1801.09036, ISAIM 2018, International Symposium on Artificial Intelligence and Mathematics, 2018.
Zadrozny, W.; Hamatialam, H.; Garbayo, L. “Towards Semantic Modeling of Contradictions and Disagreements: A Case Study of Medical Guidelines”. ACL Anthology: A Digital Archive of Research Papers in Computational Linguistics, 2017.
On the Infinite Gods paradox via representation in Classical Mechanics
ABSTRACT. The Infinite Gods paradox was introduced by Benardete (1964) in the context of his metaphysical problems of the infinite. Priest (1999) opened the discussion with the publication of a logical analysis, followed by the argument of Yablo (2000), in which he defends the view that the paradox contains a logical impossibility. This conclusion achieved broad acceptance in the scientific community, but the reasonings introduced by Hawthorne (2000), Uzquiano (2012) and Pérez Laraudogoitia (2016) call that idea into question.
Contextualised in this discussion, my communication is based on a proposal for a representation of the Infinite Gods paradox in the strict context of Classical Mechanics. The objective of following such a methodology is to deepen the understanding of the paradox and to clarify the type of problem that underlies it, using the analytical power of Classical Mechanics. The methodology of analysing a metaphysical paradox in the context of a specific theory is in line with what Grünbaum (1967) defended concerning the analysis of supertasks, and it has later been followed by other philosophers of science who have introduced proposals of representation for different paradoxes of the infinite. Nevertheless, no strictly mechanical representation of the Infinite Gods paradox has been published yet.
The results of my mechanical analysis are in agreement with the violation of the “Change Principle” introduced by Hawthorne (2000). But in clear contrast to his contention, this is not a big metaphysical surprise but a simple and direct consequence of causal postulates implicit in Classical Mechanics. Furthermore, the analysis via my mechanical representation shows in a very simple way that the necessary condition that Uzquiano (2012) proposes for the existence of a “before-effect” is refutable. Finally, it also leads to the conclusion that the problem underlying the paradox is not logical but causal, in clear opposition to the reasoning defended by Yablo (2000). Consequently, the next objective is to explain the diagnosis of what I consider to be erroneous in this last argument.
In addition to achieving the main objective of deepening the understanding of the paradox and clarifying the type of problem that underlies it, the analysis of the problem of evolution via my mechanical representation makes it possible to clarify the type of interaction involved in it. This is in itself a conceptually interesting result in the theoretical context of Classical Mechanics.
References
1. Benardete, J. (1964). Infinity: An essay in metaphysics. Oxford: Clarendon Press.
2. Grünbaum, A. (1967). Modern Science and Zeno's Paradoxes. Middletown: Wesleyan University Press.
3. Hawthorne, J. (2000). Before-effect and Zeno causality. Noûs, 34 (4), 622-633.
4. Pérez Laraudogoitia, J. (2016). Tasks, subtasks and the modern Eleatics. In F. Pataut (ed.), Truth, objects, infinity. Cham, Switzerland: Springer.
5. Priest, G. (1999). On a version of one of Zeno's paradoxes. Analysis, 59 (1), 1-2.
6. Uzquiano, G. (2012). Before-effect without Zeno causality. Noûs, 46 (2), 259-264.
7. Yablo, S. (2000). A reply to new Zeno. Analysis 60 (2), 148 -151.
ABSTRACT. According to the Encyclopedia of Animal Rights and Animal Welfare, anthropocentrism relates to any idea which suggests the central importance, superiority and supremacy of man in relation to the rest of the world. Anthropocentrism also denotes that the purpose of Nature is to serve human needs and desires, based on the idea that man has the highest value in the world (Fox, 2010). Even if anthropocentrism can be seen as a concept fitting mainly in the field of Environmental Ethics, it can also be considered a concept connected with Science, as part of the scientific outlook on the world. Even if we claim that the scientific outlook is objective and not subjective, provided that this parameter is controllable, are we at the same time in a position to assert that our view of the world is free of anthropocentrism?
The branches of science most vulnerable to such a viewpoint, as their name may indicate, are the Humanities, as they focus on man and the achievements of human culture. Such an approach is not expected from the so-called positive sciences. Nevertheless, the anthropocentric outlook is not entirely avoided there either. An example of this in Cosmology is the noted Anthropic Principle.
The main idea of the Anthropic Principle, as we know it, is that the Universe seems to be "fine-tuned" in such a way as to allow the existence of intelligent life that can observe it. How can a philosophical idea of such old vintage as the "intelligent design of the Universe" intrude into the modern scientific outlook, and why is it so resilient?
In my presentation, I will briefly present the Anthropic Principle and answer the questions mentioned above. In addition, I will try to show how anthropocentrism conflicts with the human effort to discover the world. I will also refer to the consequences of anthropocentrism for Ethics and for Science itself.
Indicative Bibliography
Bostrom, N. (2002). Anthropic Bias: Observation Selection Effects in Science and Philosophy. New York: Routledge.
Carter, B., & McCrea, W. H. (1983). The Anthropic Principle and its Implications for Biological Evolution. Philosophical Transactions of the Royal Society of London, 310 (1512), pp. 347-363.
Fox, M. A. (2010). Anthropocentrism. In M. Bekoff, Encyclopedia of Animal Rights and Animal Welfare (pp. 66-68). Santa Barbara, California: Greenwood Press.
There are several camps in the recent debates on the nature of scientific understanding. There are factivists and quasi-factivists who argue that scientific representations provide understanding insofar as they capture some important aspects of the objects they represent. Representations, the (quasi-)factivists say, yield understanding only if they are at least partially or approximately true. The factivist position has been opposed by the non-factivists who insist that greatly inaccurate representations can provide understanding given that these representations are effective or exemplify the features of interest. Both camps face some serious challenges. The factivists need to say more about how exactly partially or approximately true representations, as well as nonpropositional representations, provide understanding. The non-factivists are expected to put more effort into the demonstration of the alleged independence of effectiveness and exemplification from the factivity condition. The aim of the proposed symposium is to discuss in detail some of these challenges and to ultimately defend the factivist camp.
One of the biggest challenges to factivism, the existence of non-explanatory representations which do not possess propositional content but nevertheless provide understanding, is addressed in ‘Considering the Factivity of Non-explanatory Understanding’. This paper argues against the opposition between effectiveness and veridicality. Building on some cases of non-explanatory understanding, the author shows that effectiveness and veridicality are compatible and that we need both.
A different argument for the factivity of scientific understanding provided by models containing idealizations is presented in ‘Understanding Metabolic Regulation: A Case for the Factivists’. The central claim of this paper is that such models bring understanding if they capture correctly the causal relationships between the entities, which these models represent.
‘Effectiveness, Exemplification, and Factivity’ further explores the relation between the factivity condition and its suggested alternatives – effectiveness and exemplification. The author’s main claim is that the latter are not alternatives to factivity, strictly speaking, insofar as they could not be construed without any reference to truth conditions.
‘Scientific Explanation and Partial Understanding’ focuses on cases where the explanations consist of propositions, which are only partially true (in the sense of da Costa’s notion of partial truth). The author argues that such explanations bring partial understanding insofar as they allow for an inferential transfer of information from the explanans to the explanandum.
What happens, however, when understanding is provided by explanations which do not refer to any causal facts? This question is addressed in ‘Factivity of Understanding in Non-causal Explanations’. The author argues that the factivity of understanding could be analyzed and evaluated by using some modal concepts that capture “vertical” and “horizontal” counterfactual dependency relations which the explanation describes.
Richard David-Rus (Institute of Anthropology, Romanian Academy, Romania)
Considering the factivity of non-explanatory understanding
ABSTRACT. One of the characteristics of the debate around the factivity of understanding is its focus on the explanatory sort of understanding. The non-explanatory kind has barely been considered. The proposed contribution tries to take some steps in this direction and thereby to suggest some possible points of an investigation.
The inquiry will look at the routes by which factivity is realized in situations that have been marked in the literature as instantiating non-explanatory understanding. Without committing to a specific account, the investigation will take as reference suggestions offered by authors such as Lipton, Gijsbers or Kvanvig, though Lipton's view involving explanatory benefits as the bearers of understanding will take centre stage.
The main quest will look at the differences between the issues raised by factivity in explanatory cases and in non-explanatory ones. I will look at the variation and specificity of these routes in the different ways of instantiating this sort of understanding. One focus will be on how the historical arguments and the arguments from idealization raised in support of the non-factivity claim get contextualized in non-explanatory cases of understanding. As some of the non-explanatory means do not involve propositional content, the factivity issue has to be reassessed. I will therefore reject the purely reductivist view that non-explanatory forms are just preliminary, incomplete forms of explanatory understanding, i.e. proto-understanding (Khalifa 2017), and thus to be considered only under the received view on factivity.
In the last part I will turn to a second point, by reference to the previous discussion. The effectiveness condition was advanced by de Regt as an alternative to the veridicality condition. I will support a mixed view which states the need to include reference to both conditions. The cases of non-explanatory understanding might better illuminate the way the two components are needed in combination. Moreover, in some non-explanatory cases one of the above conditions might take precedence over the other, for example along the separation between the means with propositional content (possible explanations, thought experiments) and those of a non-propositional nature (e.g. manipulations, visualizations).
Understanding metabolic regulation: A case for the factivists
ABSTRACT. Factive scientific understanding is the thesis that scientific theories and models provide understanding insofar as they are based on facts. Because science heavily relies on various simplifications, it has been argued that the facticity condition is too strong and should be abandoned (Elgin 2007, Potochnik 2015). In this paper I present a general model of metabolic pathway regulation by feedback inhibition to argue that even highly simplified models that contain various distortions can provide factive understanding. However, there are a number of issues that need to be addressed first. For instance, the core of the disagreement over the facticity condition for understanding revolves around the notion of idealization. Here, I show that the widely used distinction between idealizations and abstractions faces difficulties when applied to the model of metabolic pathway regulation. Some of the key assumptions involved in the model concern the type of inhibition and the role of concentrations. Contra Love and Nathan (2015), I suggest viewing these assumptions as a special sort of abstraction, as vertical abstraction (see also Mäki 1992). Usually, it is idealizations that are considered problematic for the factivist position, because idealizations are thought to introduce distortions into the model, something abstractions do not do. However, I show that here abstractions distort key difference-makers (i.e. the type of inhibition and the role of concentration), much like idealizations do elsewhere. This seemingly lends further support to the non-factivist view, since if abstractions may involve distortions, then not only idealized models but abstract models as well cannot provide factive understanding. I argue that this is not the case here. The diagrammatic model of metabolic pathway regulation does provide factive understanding insofar as it captures the causal organization of an actual pathway, notwithstanding the distortions. I further motivate my view by drawing an analogy with the way in which Bokulich (2014) presents an alternative view of the notions of how-possibly and how-actually models. The conclusion is that, at least in some instances, highly simplified models which contain key distortions can nevertheless provide factive understanding, provided we correctly specify the locus of truth.
Bokulich, A. [2014]: ‘How the Tiger Bush Got Its Stripes: “How Possibly” vs. “How Actually” Model Explanations’, Monist, 97, pp. 321–38.
Elgin, C. [2007]: ‘Understanding and the Facts’, Philosophical Studies, 132, pp. 33–42.
Love, A. C. and Nathan, M. J. [2015]: ‘The Idealization of Causation in Mechanistic Explanation’, Philosophy of Science, 82, pp. 761–74.
Mäki, U. [1992]: ‘On the Method of Isolation in Economics’, in C. Dilworth (ed.), Idealization IV: Intelligibility in Science, Amsterdam: Rodopi, pp. 319–54.
Potochnik, A. [2015]: ‘The Diverse Aims of Science’, Studies in History and Philosophy of Science Part A, 53, pp. 71–80.
ABSTRACT. The view that scientific representations bear understanding insofar as they capture certain aspects of the objects being represented has been recently attacked by authors claiming that factivity (veridicality) is neither necessary nor sufficient for understanding. Instead of being true, partially true, or true enough, these authors say, the representations that provide understanding should be effective, i.e. they should lead to “useful scientific outcomes of certain kind” (de Regt & Gijsbers, 2017) or should “exemplify features they share with the facts” (Elgin, 2009). In this paper I’ll try to show that effectiveness and exemplification are neither alternatives nor independent complements to factivity insofar as an important aspect of these conditions cannot be construed without referring to a certain kind of truthfulness.
Although Elgin’s and de Regt and Gijsbers’ non-factive accounts of understanding differ in the details, they share an important common feature: they both stress the link between understanding and inference. Thus, according to De Regt and Gijsbers, the understanding-providing representations allow the understander to draw “correct predictions”, and according to Elgin, such representations enable “non-trivial inference” which is “responsive to evidence”. If we take this inferential aspect of understanding seriously, we should be ready to address the question what makes the conclusions of the alleged inferences correct. It seems as if there is no alternative to the view that any kind of inference could reliably lead to correct, i.e. true (or true enough) conclusions only if it is based on true (or true enough) premises. Indeed, it can be shown that the examples, which the critics of the factivity of understanding have chosen as demonstrations of non-factive understanding could be successfully analyzed in terms of true enough premises endorsing correct conclusions. Thus the ideal gas model, although based on a fiction (ideal gases do not exist), “exemplifies features that exist”, as Elgin herself has noticed. Similarly, the fluid model of electricity, discussed by de Regt and Gijsbers, gets right the directed motion of the electrical current, which is essential for the derivation of Ohm’s law and for its practical applications.
To sum up, the non-factivists have done a good job by stressing the inferential aspects of understanding. However, it should be recognized that there is no way to make reliably correct predictions and non-trivial inferences, if the latter are not based on true, partially true, or true enough premises. The understanding-providing scientific representations either contain such premises or serve as “inference tickets” bridging certain true or true enough premises to true or true enough conclusions.
References
De Regt, H. W., Gijsbers, V. (2017). How false theories can yield genuine understanding. In: Grimm, S. R., Baumberger, G., Ammon, S. (Eds.) Explaining Understanding. New Perspectives from Epistemology and Philosophy of Science. New York: Routledge, 50–75.
Elgin, C. Z. (2009). Is understanding factive? In: Pritchard, D., Miller, A., Haddock, A. (Eds.) Epistemic Value. Oxford: Oxford University Press, 322–30.
Siebel's argument against Fitelson's measure of coherence reconsidered
ABSTRACT. This talk aims at showing that Mark Siebel's (2004) counterexample to Branden Fitelson's (2003) probabilistic measure of coherence can be strengthened and thereby extended to an argument against a large number of other proposals, including the measures by Shogenji (1999), Douven and Meijs (2007), Schupbach (2011), Schippers (2014), Koscholke (2016) and also William Roche's (2013) average mutual firmness account, which has not been challenged up to now. The example runs as follows:
There are 10 equally likely suspects for a murder and the murderer is certainly among them. 6 have committed a robbery and a pickpocketing, 2 have committed a robbery but no pickpocketing and 2 have committed no robbery but a pickpocketing.
Intuitively speaking, the proposition that the murderer is a robber and the proposition that the murderer is a pickpocket are quite coherent in this example. After all, there is a large overlap of pickpocketing robbers. However, as Siebel has pointed out, Fitelson's measure indicates that they are not.
Siebel's example is compelling. But it shows us much more. First, for any two propositions φ and ψ under a probability function P such that P(¬φ∧¬ψ) = 0, which is the case in Siebel's example, any measure satisfying Fitelson's (2003) dependency desiderata is unable to judge the set {φ,ψ} coherent, even in cases where it should. As already mentioned, this includes the measures proposed by Fitelson, Shogenji, Douven and Meijs, Schupbach, Schippers, Koscholke and many potential ones. Second, it can be shown that under a slightly stronger constraint, i.e. for any two propositions φ and ψ under a probability function P such that P(¬φ∧¬ψ) = P(φ∧¬ψ) = 0, Roche's average mutual firmness account is unable to judge the set {φ,ψ} incoherent, even in cases where it should; this can be motivated by slightly modifying Siebel's example. These two results suggest that the aforementioned proposals do not generally capture coherence adequately.
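For concreteness, the probabilities fixed by the ten-suspect example can be checked against one of the measures discussed, Shogenji's (1999) ratio measure (read as indicating coherence only when it exceeds 1); a minimal sketch:

```python
# Numerical check of the example using Shogenji's (1999) ratio measure
# S(φ, ψ) = P(φ ∧ ψ) / (P(φ) · P(ψ)), with S > 1 read as coherence.
p_both = 6 / 10        # suspects who committed both a robbery and a pickpocketing
p_robber = 8 / 10      # 6 + 2 suspects committed a robbery
p_pickpocket = 8 / 10  # 6 + 2 suspects committed a pickpocketing
p_neither = 0 / 10     # every suspect committed at least one of the two crimes

shogenji = p_both / (p_robber * p_pickpocket)
print(f"P(¬robber ∧ ¬pickpocket) = {p_neither}")
print(f"Shogenji measure = {shogenji:.4f}  (below 1: the pair is not judged coherent)")
```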
References
Douven, I. and Meijs, W. (2007). Measuring coherence. Synthese, 156:405–425.
Fitelson, B. (2003). A probabilistic theory of coherence. Analysis, 63:194–199.
Koscholke, J. (2016). Carnap’s relevance measure as a probabilistic measure of coherence. Erkenntnis, 82(2):339–350.
Roche, W. (2013). Coherence and probability: a probabilistic account of coherence. In Araszkiewicz, M. and Savelka, J., editors, Coherence: Insights from Philosophy, Jurisprudence and Artificial Intelligence, pages 59–91. Springer, Dordrecht.
Schippers, M. (2014). Probabilistic measures of coherence: from adequacy constraints towards pluralism. Synthese, 191(16):3821–3845.
Schupbach, J. N. (2011). New hope for Shogenji's coherence measure. British Journal for the Philosophy of Science, 62(1):125–142.
Shogenji, T. (1999). Is coherence truth conducive? Analysis, 59:338–345.
Siebel, M. (2004). On Fitelson’s measure of coherence. Analysis, 64:189–190.
09:30
Vladimir Reznikov (Institute of philosophy and law of SB RAS, Russia)
Frequency interpretation of conditions for the application of probability theory according to Kolmogorov
ABSTRACT. In his well-known book devoted to the axiomatic theory of probability, Kolmogorov formulated the requirements for probabilities in the context of their applications [1]. Why did A.N. Kolmogorov turn to the problem of applying mathematics in that publication? The answer was given in a later work by Kolmogorov; he noted that successes in the foundations of mathematics had overshadowed an independent problem: “Why is mathematics applicable for describing reality” [2].
Kolmogorov formulated the following requirements for probabilities in the context of applications:
«A. One can practically be sure that if the set of conditions S is repeated a large number of times n and if m denotes the number of cases in which the event A occurred, then the ratio m/n will differ little from P(A).
B. If P(A) is very small, then one can practically be sure that, under a single realization of conditions S, event A will not take place» [1, P. 4].
The first requirement is an informal version of von Mises' asymptotic definition of probability. The second condition describes Cournot's principle in a strong form. These requirements are bridges that connect probability theory and mathematical statistics with reality. However, in the contemporary literature the question of the compatibility of Kolmogorov's requirements was not studied until the works of Shafer and Vovk [3]. As Shafer and Vovk noted, Borel, Levi and Frechet criticized condition A as redundant, since they believed that its formal description is just the conclusion of Bernoulli's theorem. The report considers the frequency interpretation of condition A, since Kolmogorov noted that in the context of applications he follows von Mises, the founder of the frequency interpretation.
The main thesis of the report is that, in the frequency interpretation, condition A is not the conclusion of Bernoulli's theorem. As is known, the conclusion of the theorem asserts that the frequency of a certain event A and the probability of event A (a constant) are close in probability. In the report I prove that, in the frequency interpretation, condition A is interpreted geometrically. I present the following arguments in defense of this thesis.
First, in the frequency interpretation, the probabilities of events do not exist a priori, they are representatives of the frequencies of these events. It is natural to consider that a constant probability of an event exists if the frequency characteristics of this event turn out to be stable, for example, they occupy a small interval. Secondly, the geometrical explication of condition A is quite consistent with the definition of probability in von Mises’ frequency interpretation, since Mises defines probability on the basis of convergence of frequencies defined in mathematical analysis. Thirdly, our thesis gets support on the basis of the principle of measure concentration proposed by V. Milman. According to this principle, the functions of very many variables, for example, on a multidimensional sphere and other objects turn out to be almost constant. In accordance with this principle, functions of a large number of observations that calculate frequencies turn out to be almost constant. Thus, condition A does not depend on Bernoulli's theorem, but, on the contrary, turns out to be a precondition for the application of this theorem.
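A simple simulation illustrates condition A itself (not the geometrical reading defended in the report); the value P(A) = 0.3 and the sample sizes are arbitrary choices:

```python
# Illustration of condition A: when the conditions S are repeated a large number
# of times n, the relative frequency m/n of event A differs little from P(A).
# P(A) = 0.3 and the sample sizes are arbitrary choices for the demonstration.
import random

random.seed(0)
p_A = 0.3

for n in (10, 100, 10_000, 1_000_000):
    m = sum(random.random() < p_A for _ in range(n))
    print(f"n = {n:>9,}   m/n = {m / n:.4f}   |m/n - P(A)| = {abs(m / n - p_A):.4f}")
```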
Hugo Tannous Jorge (Birkbeck, University of London (United Kingdom) and Federal University of Juiz de Fora (Brazil), UK)
The problem of causal inference in clinical psychoanalysis: a response to the charges of Adolf Grünbaum based on the inductive principles of the historical sciences.
ABSTRACT. Aside from being a therapy, clinical psychoanalysis can also be a method to produce knowledge on prominent human dimensions, such as mental suffering, sociability and sexuality. The basic premise that justifies clinical psychoanalysis as a research device is that the psychoanalyst's neutral and abstinent questioning would promote the patient's report of uncontaminated mental phenomena, that is, mental phenomena unregulated by immediate social demands. The method should draw out evidence for the inference of particular causal relations between the patient's mental representations, mainly memories and fantasies, and between these and the patient's actions and emotions. In his epistemological critique of psychoanalysis, Adolf Grünbaum claims that the formalization of this method by Sigmund Freud does not present the conditions to cogently test causal hypotheses. Two of Grünbaum's arguments specifically against the logic of causal inference in clinical psychoanalysis are analysed: the argument concerning inference based on thematic kinship and the one concerning the post hoc ergo propter hoc fallacy. It is argued that both arguments are valid, but also that their premises are artificial. These premises are confined to the Freudian text and disregard the potential that the Freudian method has to become cogent without losing its basic features. Starting from these arguments, this work discusses the epistemological potential of the method of causal inference in clinical psychoanalysis by describing some of its inductive principles and by exploring the justification of these principles. This work reaches the conclusion that the inductive principles of clinical psychoanalysis and those of the historical sciences are similar in the sense that they all infer retrospectively to the best explanation with the support of "bootstrapping" auxiliary hypotheses and they all make general inferences through meta-analysis of case reports. In the end, this work discusses some responses to the justificatory burden of these inductive principles in the context of clinical psychoanalysis.
References:
Cleland, C. E. (2011). Prediction and Explanation in Historical Natural Science. British Journal for the Philosophy of Science, 62, 551–582.
Dalbiez, R. (1941). Psychoanalytical method and the doctrine of Freud: Volume 2: Discussion. London: Longmans Green.
Glymour, C. (1980). Theory and Evidence. Princeton, N.J.: Princeton University Press.
Glymour, C. (1982). Freud, Kepler, and the clinical evidence. In R. Wollheim & J. Hopkins (Eds.), Philosophical Essays on Freud (pp. 12-31). Cambridge: Cambridge University Press.
Grünbaum, A. (1984). The Foundations of Psychoanalysis: A Philosophical Critique. Berkeley, CA: University of California Press.
Grünbaum, A. (1989). Why Thematic Kinships Between Events Do Not Attest Their Causal Linkage. In R. S. Cohen (Ed.), An Intimate Relation: Studies in the History and Philosophy of Science (pp. 477–494). Dordrecht, The Netherlands: Kluwer Academic Publishers.
Hopkins, J. (1996). Psychoanalytic and scientific reasoning. British Journal of Psychotherapy, 13: 86–105.
Lipton, P. (2004). Inference to the Best Explanation. London and New York: Routledge.
Lynch, K. (2014). The Vagaries of Psychoanalytic Interpretation: An Investigation into the Causes of the Consensus Problem in Psychoanalysis. Philosophia (United States), 42(3), 779–799.
Wallace IV, E. R. (1985). Historiography and Causation in Psychoanalysis. Hillsdale, N.J.: Analytic Press.
09:30
Vladimir Medvedev (St. Petersburg State maritime Technical University, Russia)
Explanation in Humanities
ABSTRACT. There are two approaches to the treatment of the humanities. The naturalistic one denies any substantial differences between the human and natural sciences. This line was realized most consistently in positivist philosophy, where the classic scheme of scientific explanation (the Popper–Hempel scheme) was formulated. According to it, the explanation of any event may be deductively inferred from two classes of statements: universal laws and sentences fixing the initial conditions of the event.
Anti-naturalistic thinkers argued that understanding is a specific means of cognition in the humanities. Dilthey treated it in a radically irrationalistic mode – as empathy. Such understanding was regarded by naturalists not as a genuine method, but as a heuristic prelude to explanation. Discussions on the interrelations of understanding and explanation and on the applicability of the classical scheme of explanation in the humanities are still going on. The usual arguments against naturalism are the following.
What is explained in the humanities is not an outward object, external to us, like that of the natural sciences. What is studied here (society, cultural tradition) is a part of ourselves, something that has formed us as subjects of knowledge. Social reality cannot become an ordinary object of knowledge because we belong to it. History and social life are not a performance for a subject who does not take part in it. Knowledge about people and society has a transcendental character in Kant's sense: it refers to the general conditions of our experience. The specific nature of subject-object relations in the human and social sciences manifests itself also in the fact that our knowledge and conception of social reality is an important part of this reality.
The universal scheme of scientific explanation is connected to the technological model of knowledge: the goal of explanation is the practical use of phenomena, manipulation. To realize this model in the human sciences, we would have to divide society into subjects and objects of knowledge and manipulation, and the latter would have to be deprived of access to knowledge about themselves. After all, in contrast to other objects of knowledge, people are able to assimilate knowledge about themselves and to change their behavior.
Explanation should be used in the human and social sciences. But manipulation cannot be its purpose. The model of critical social science of Apel and Habermas presumes the use of explanatory methods in a hermeneutic context. The goal here is not to explain others, but to help us to understand ourselves better. For example, Marx's and Mannheim's critique of ideology gives a causal explanation of the formation of ideological illusions. This explanation has the same character as in the natural sciences. But the main principle of the sociology of knowledge denies the possibility of objective and neutral social knowledge. A subject of such knowledge cannot occupy an ideologically undetermined position in order to expose others' ideological illusions. Such a subject should be attentive to the possible social determination of his own ideas, to their possibly ideological nature.
Such is the goal of the human and social sciences. Explanatory methods there serve a general hermeneutic task – their function is to deepen human self-understanding.
Experiments in History and Archaeology: Building a Bridge to the Natural Sciences?
ABSTRACT. The epistemic challenges to the historical sciences include the direct inaccessibility of their subject matters and limited empirical data whose scope and variety cannot be easily augmented. The output of historiographic research is rarely in the form of universal or general theory. Nonetheless, these properties do not distinguish the historical sciences from other disciplines. The historical sciences have been successful in generating knowledge of the past.
One of the methods common to the natural sciences that historians and archaeologists pursue in order to bridge different academic cultures is the experimental method, most clearly manifest in experimental archaeology. This paper examines the use of experiments in historical and archaeological research and situates them in relation to contemporary philosophies of historiography. Experiments in historiography can take many forms – they can be designed on the basis of textual, pictorial, or other non-textual evidence, including fragments of artefacts; they can take place inside laboratories or in the field. Designers of experiments can aim to describe an exact occurrence in the past (e.g. a specific event) or types of production techniques, to interpret technical texts, or to inquire into the daily life of our ancestors. However, can the results of such experiments cohere with other scientific historical methods? Can experiment in archaeology truly verify or falsify historiographic hypotheses? Is the experimental method suitable for historical research, and to what extent? How do we represent the results of experimental archaeology? These questions, accompanied by individual examples of experimental archaeology, are discussed in relation to the constructivist approach to historiography and in relation to historical anti-realism. It is argued that despite the fruitfulness of some experiments, their results generally suffer from the same underdetermination as other historiographic methods and theories.
ABSTRACT. The narrative turn in the philosophy of historiography relies on a constructivist epistemology motivated by the rejection of the view that there is any such thing as immediate knowledge of the past. Just as there is no such thing as knowledge of things as they are in themselves generally speaking, so there is no knowledge of the past in itself. Some narrativists characterise the temporal distance between the agents and the historian in positive terms and present it as an enabling condition of historical knowledge, because, so they argue, the significance of an historical event is better grasped retrospectively, in the light of the chain of events it set in motion. Others see the retrospective nature of historical knowing as a sort of distorting mirror which reflects the historian's own zeitgeist. Historical knowledge, so the argument goes, requires conceptual mediation, but since the mediating concepts are those of the historian, each generation of historians necessarily re-writes the past from their own perspective, and there can never be any such thing as “the past as it always was” (Dray). To use a rather old analogy, one might say that as the form of the cookie cutter changes, so does the shape of the cookie cut out of the dough.
This paper argues that there is a better way of preserving the central narrativist claim that the past cannot be known in itself, one which does not require biting the bullet that the past needs to be continuously re-written from the standpoint of the present. To do so one needs to rethink the notion of mediacy in historical knowledge. We present this alternative conception of mediacy through an explication and reconstruction of Collingwood's philosophy of history. According to Collingwood, the past is known historically when it is known through the eyes of historical agents, as mediated by their own zeitgeist. The past is therefore not an ever-changing projection from different future “nows”. While human self-understanding changes over time (the norms which govern how a medieval serf should relate to his lord are not the same as those which govern the relation between landlord and tenant in contemporary London), the norms which governed the Greek, Roman, Egyptian or Mesopotamian civilizations remain what they always were. It is the task of the historian to understand events as they would have been perceived by the historical agents, not in the light of legal, epistemic or moral norms that are alien to them. For example, understanding Caesar's crossing of the Rubicon as challenging the authority of the senate (rather than, say, simply taking a walk with horses and men) involves understanding the Roman legal system and what Republican law entailed. This is a kind of conceptual knowledge that is not altered either by the future course of events or by changes in human self-understanding. Although Collingwood's account of the nature of mediacy in historical knowledge would disallow that later historians should or could retrospectively change the self-understanding of the Romans (or the Egyptians, or the Greeks), the claim that historians can know the past as the Egyptians, the Romans or the Mesopotamians did is not tantamount to claiming that the past can be known in itself. It is rather the assertion that the past is known historically when it is known through the eyes of the historical agent, not those of the historian. This conception of mediacy takes the past to be always-already mediated (by the conceptual framework of the agent) and, unlike the cookie-cutter conception of knowledge, does not lead to the sceptical implications which go hand in hand with the narrativist conception of mediacy.
Stit heuristics and the construction of justification stit logic
ABSTRACT. From its early days, stit logic was built around a set of heuristic principles that were typically phrased as recommendations to formalize certain ideas in a certain fashion. We have in mind the set of six stit theses advanced in [1, Ch. 1]. These theses mainly sought to guide the formalization of agentive sentences. However, one is often interested in extending stit logic with new notions that are not necessarily confined to agentive phenomena; even in such cases one has to place the new notions in some relation to the existing stit conceptual machinery, which often involves non-trivial formalization decisions that fall completely outside the scope of the Belnapian stit theses.
The other issue is that the preferred stit operator of [1] is the achievement stit, whereas in the more recent literature the focus is on different variants of either the Chellas stit or the deliberative stit operator.
In our talk we try to close these two gaps by (1) reformulating some of the Belnapian theses for the Chellas/deliberative stit operator, (2) developing heuristics for representing non-agentive sentences in stit logic, and (3) compensating for the absence of the achievement stit operator by introducing the so-called `fulfillment perspective' on modalities in stit logic.
In doing so, we introduce a new set of heuristics, which, we argue, is still in harmony with the philosophy expressed in [1]. We then apply the new heuristic principles to analyze the ideas behind the family of justification stit logics recently introduced in [2] and [3].
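As background for readers less familiar with the operators mentioned above (our own gloss, not the abstract's): on a branching-time frame with agent choice partitions Choice_i^m, the Chellas and deliberative stit operators are standardly evaluated as follows.

```latex
% Standard truth conditions at a moment/history pair m/h (background gloss):
\begin{align*}
m/h \models [i\ \mathrm{cstit}]\varphi &\iff \forall h' \in \mathrm{Choice}_i^m(h):\ m/h' \models \varphi\\
m/h \models [i\ \mathrm{dstit}]\varphi &\iff \forall h' \in \mathrm{Choice}_i^m(h):\ m/h' \models \varphi
  \ \text{ and }\ \exists h'' \ni m:\ m/h'' \not\models \varphi
\end{align*}
```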
References
[1] N. Belnap, M. Perloff, and M. Xu. Facing the Future: Agents and Choices in Our Indeterminist World. Oxford University Press, 2001.
[2] G. Olkhovikov and H. Wansing. Inference as doxastic agency. Part I: The basics of justification stit logic. Studia Logica. Online first: January 27, 2018, https://doi.org/10.1007/s11225-017-9779z.
[3] G. Olkhovikov and H. Wansing. Inference as doxastic agency. Part II: Ramifications and refinements. Australasian Journal of Logic, 14:408-438, 2017.
ABSTRACT. Imagine that I place all the cards from a deck face down on a table and ask you to turn over the Queen of Hearts. Are you able to do that? In a certain sense, yes – this is referred to as causal ability. Since you are able to pick any of the face-down cards, there are 52 actions available to you, and one of these guarantees that you turn over the Queen of Hearts. However, you do not know which of those 52 actions actually guarantees the result. Therefore, you are not able to turn over the Queen of Hearts in the epistemic sense. I explore this epistemic qualification of ability and three ways of modelling it.
I show that both the analyses of knowing how in epistemic transition systems (Naumov and Tao, 2018) and of epistemic ability in labelled STIT models (Horty and Pacuit, 2017) can be simulated using a combination of impersonal possibility, knowledge and agency in standard epistemic STIT models. Moreover, the standard analysis of the epistemic qualification of ability relies on action types – as opposed to action tokens – and states that an agent has the epistemic ability to do something if and only if there is an action type available to her that she knows guarantees it. I argue, however, that these action types are dispensable. This is supported by the fact that both epistemic transition systems and labelled STIT models rely on action types, yet their associated standard epistemic STIT models do not. Thus, no action types, no labels, and no new modalities are needed.
Epistemic transition systems as well as labelled STIT models have been noticeably influenced by the semantics of ATL. In line with the ATL tradition, they model imperfect information using an epistemic indistinguishability relation on static states or moments, respectively. In the STIT framework this implies that agents cannot know more about the current moment/history pair than what is historically settled. In particular, they cannot know anything about the action they perform at that moment/history pair. This is at odds with the standard epistemic extension of STIT theory which models epistemic indistinguishability on moment/history pairs instead.
The main benefit of using the standard epistemic STIT models instead of epistemic transition systems or labelled STIT models is that they are more general and therefore provide a more general analysis of knowing how and of epistemic ability in terms of the notion of knowingly doing.
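One natural way to render the reduction sketched above in standard epistemic STIT notation (offered here as a hedged reading, not as the authors' official definitions) is:

```latex
% Hedged reading: ability notions via impersonal possibility, knowledge, agency.
\begin{align*}
\text{causal ability to do } \varphi    &:\quad \Diamond\,[i\ \mathrm{stit}]\varphi\\
\text{knowingly doing } \varphi         &:\quad K_i\,[i\ \mathrm{stit}]\varphi\\
\text{epistemic ability to do } \varphi &:\quad \Diamond\,K_i\,[i\ \mathrm{stit}]\varphi
\end{align*}
```

On this reading the card example comes out as expected: some available action guarantees turning over the Queen of Hearts, so causal ability holds, but at no history does the agent know that her current action guarantees it, so epistemic ability fails.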
References
Horty, J. F. and E. Pacuit (2017). Action types in stit semantics. The Review of Symbolic Logic 10(4), 617–637.
Naumov, P. and J. Tao (2018). Together we know how to achieve: An epistemic logic of know-how. Artificial Intelligence 262(September), 279–300.
ABSTRACT. In stit logic, every agent is endowed at every moment with a set of available choices. Agency is then characterized by two fundamental features. That is, (i) independence of agency: agents can select any available choice and something will happen, no matter what the other agents choose; (ii) dependence of outcomes: the outcomes of agents' choices depend on the choices of the other agents. In this framework, an agent sees to it that F when her choice ensures that F, no matter what the other agents do.
This characterization (or variants thereof) is taken to capture the fact that an agent brings about F. However, the notion of bringing about thus modelled is too demanding to represent situations in which an individual, interacting with others, brings about a certain fact: in most cases in which someone brings something about, what the other agents do matters.
In light of this, we aim at refining stit semantics in order to make it suitable for representing the causal connection between actions and their consequences. The key idea is, first, to supplement stit semantics with action types (following Broersen, 2011; Herzig & Lorini, 2010; Horty & Pacuit, 2017; Ming Xu, 2010); then, to introduce a new relation of opposition between action types. We proceed as follows.
Step 1. Let (Mom, <) be a tree-like and discrete ordered set of moments and call “transition” any pair (m,m') such that m' is a successor of m. Given a finite set Ag of agents, we have, for each i in Ag, a set A_i of action types available to i and a labelling function Act_i assigning to each transition an action type available to i, so that Act_i((m,m')) is the action that i performs along transition (m,m'). The joint action performed by a group I of agents along (m,m') is then the conjunction of the actions performed by the agents in I along (m,m'). The joint actions performed by Ag are called global actions, or strategy profiles. In this framework, the next-stit operator [i xstit]F can be given a straightforward interpretation.
Step 2. Intuitively, an individual or joint action B opposes another individual or joint action when B blocks or hinders it (e.g. my action of running to catch a train is opposed by the crowd's standing in the way). In order to represent this relation, we introduce a function O associating to every action B the set O(B) of actions opposing B. We then say that B is unopposed in a global action G just in case B occurs in G and no action constituting G opposes B. The global actions in which B is unopposed represent counterfactual scenarios allowing us to determine the expected causal consequences of B. Specifically, we can say that F is an expected effect of B only if B leads to an F-state whenever it is done unopposed.
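Restating the two definitions of Step 2 compactly (notation slightly regimented by us):

```latex
% B is unopposed in a global action G; F is an expected effect of B.
\begin{align*}
\mathrm{Unopp}(B,G) &\iff B \text{ occurs in } G \ \text{ and no action constituting } G \text{ belongs to } O(B)\\
F \text{ is an expected effect of } B &\ \Rightarrow\
  \text{for every transition } (m,m') \text{ realizing a global action } G
  \text{ with } \mathrm{Unopp}(B,G):\ m' \models F
\end{align*}
```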
Besides presenting an axiomatization of the logic induced by the semantics just sketched, we show that the next-stit operator [i xstit]F is a special case of a novel operator [i pxstit]F, defined in terms of expected effects, and that, by using this operator, we are able to fruitfully analyse interesting case studies. We then assess various refinements of the [i pxstit] operator already available in this basic setting. Finally, we indicate how this setting can be further elaborated by including the goals with which actions are performed.
ABSTRACT. In a wide-ranging interview (Popper, Lindahl, & Århem 1993) published near the end of his life, Popper drew attention to several similarities between the unconscious mind and forces or fields of forces: minds, like forces, have intensity; they are located in space and time; they are unextended in space but extended in time; they are incorporeal but exist only in the presence of bodies; and they are capable of acting on and being acted on by bodies (what he elsewhere called ‘kicking and being kicked back’, and proposed as a criterion of reality). Granted these similarities, Popper proposed that the unconscious mind should be understood literally as a field of forces. A related idea, extending also to the conscious part of the mind, was also proposed by Libet (1996), who elucidated in his (1997) the connections between what he described as ‘Popper's valuable hypothesis’ and his own.
In this comment on the lead paper ‘Popper on the Mind-Brain Problem’, I hope to explore some similarities between these theories of minds as force fields and the proposal that the propensities that are fundamental to Popper's propensity interpretation of probability should be likened to forces. This latter proposal was made indirectly in one of Popper's earliest publications on the propensity interpretation, but never (as far as I am aware) very decisively pursued. Instead, in A World of Propensities (1990), Popper adopted the idea that propensities (which are measured by probabilities) be likened to partial or indeterministic causes. It will be maintained that this was a wrong turn, and that propensities are better seen as indeterministic forces. There is nothing necessitarian, and there is nothing intrinsically unobservable either, about forces.
One of Popper's abiding concerns was the problem of how to account for human creativity, especially intellectual and artistic creativity. The speaker rightly notes the centrality of ‘the Popperian thesis that present-day physics is fundamentally incomplete, i.e. the universe is open’. But this is hardly enough. It is not hard to understand how propensities may be extinguished (that is, reduced to zero) with the passage of time, but harder to understand their initiation and generation. It may be that the identification of propensities with forces, which disappear when equilibrium is achieved and are at once revived when equilibrium is upset, may help to shed some light on this problem.
References
Libet, B. (1996). ‘Conscious Mind as a Field’. Journal of Theoretical Biology 178, pp.223f.
------------(1997). ‘Conscious Mind as a Force Field: A Reply to Lindahl & Århem’. Journal of Theoretical Biology 185, pp.137f.
Popper, K.R. (1990). A World of Propensities. Bristol: Thoemmes.
Popper, K.R., Lindahl, B.I.B. & Århem, P. (1993). ‘A Discussion of the Mind-Brain Problem’. Theoretical Medicine 14, pp.167‒180.
The rehabilitation of Karl Popper’s views of evolutionary biology and the agency of organisms
ABSTRACT. In 1986 Karl Popper gave the Medawar Lecture at The Royal Society in London. He deeply shocked his audience, and subsequently entered into prolonged correspondence with Max Perutz over the question whether biology could be reduced to chemistry. Perutz was insistent that it could be, Popper was equally insistent that it could not be. The lecture was never published by The Royal Society but it has now been made public with the publication of Hans-Joachim Niemann’s (2014) book. Popper contrasted what he called “passive Darwinism” (essentially the neo-Darwinist Modern Synthesis) with “active Darwinism” (based on the active agency of organisms). This was a classic clash between reductionist views of biology that exclude teleology and intentionality and those that see these features of the behaviour of organisms as central in what Patrick Bateson (2017) calls “the adaptability driver”. In the process of investigating how organisms can harness stochasticity in generating functional responses to environmental challenges, we developed a theory of choice that reconciles the unpredictability of a free choice with its subsequent rational explanation (Noble & D. Noble, 2018). Popper could not have known the full extent of the way in which organisms harness stochasticity nor how deeply this affects the theory of evolution (Noble & D. Noble, 2017), but in almost all other respects he arrived at essentially the same conclusions. Our paper will call for the rehabilitation of Popper’s view of biology. Neo-Darwinists see genetic stochasticity as just the source of variation. We see it as the clay from which the active behaviour of organisms develops and therefore influences the direction of evolution.
Bateson, Patrick. 2017 Behaviour, Development and Evolution; Open Book Publishers: Cambridge, UK.
Niemann, Hans-Joachim. 2014 Karl Popper and The Two New Secrets of Life. Mohr Siebeck, Tübingen.
Noble, Raymond & Noble, Denis. 2017 Was the watchmaker blind? Or was she one-eyed? Biology 6(4), 47.
Noble, Raymond & Noble, Denis. 2018 Harnessing stochasticity: How do organisms make choices? Chaos, 28, 106309.
12:00
Philip Madgwick (Milner Centre for Evolution, University of Bath, UK)
Agency in Evolutionary Biology
ABSTRACT. In response to Karl Popper, Denis Noble, Raymond Noble and others who have criticised evolutionary biology’s treatment of the agency of organisms, I analyse and defend what is sometimes called ‘Neo-Darwinism’ or ‘the Modern Synthesis’ from my own perspective – as an active researcher of evolutionary theory. Since the Enlightenment, the natural sciences have made progress by removing agency from nature and understanding the world in terms of materialistic chains of cause and effect. With influence from William Paley, this mechanistic way of thinking became the bedrock of Charles Darwin’s theory of evolution by natural selection. Evolutionary biology has tended to understand the ‘choices’ underlying the form and behaviour of organisms as deterministic links in the chain between genotypic causes and phenotypic effects, albeit permitting the genotype to exhibit a range of predetermined responses dependent upon the environmental context (as a generalised form of Richard Woltereck’s reaction norm). As selection acts on phenotypes, there is little room for concepts like ‘free will’ or ‘meaningful choice’ within this form of mechanistic explanation. Instead, agency becomes a useful ‘thinking tool’ rather than a ‘fact of nature’ – a metaphor that can be helpfully applied to biological entities beyond organisms, like genes, which can be thought of as ‘selfish’. Whilst there are reasonable grounds to find this world-view aesthetically objectionable, critics like Karl Popper have suggested that evolutionary theory has gone further in (unscientifically) denying the existence of what it cannot explain (namely, agency). Here, I evaluate this line of criticism, highlighting four different aspects of arguments against the concept of agency within modern evolutionary theory: i) issues of language that reflect phrasing rather than semantics, ii) misunderstandings of the significance of biological facts, iii) areas of acknowledged conflict between world views, and iv) unresolved criticisms. To the last point, I present a personal response to demonstrate how I use a working concept of agency to guide my own research.
Mihai Rusu (University of Agricultural Sciences and Veterinary Medicine Cluj-Napoca and Babeş-Bolyai University, Romania) Mihaela Mihai (University of Agricultural Sciences and Veterinary Medicine Cluj-Napoca, Romania)
Modal notions and the counterfactual epistemology of modality
ABSTRACT. The paper discusses a conceptual tension that arises in Williamson's counterfactual epistemology of modality: that between accepting minimal requirements for understanding, on the one hand, and providing a substantial account of modal notions, on the other. While Williamson's theory may have the resources to respond to this criticism, at least prima facie or according to a charitable interpretation, we submit that this difficulty is an instance of a deeper problem that should be addressed by various types of realist theories of metaphysical modality. That is, how much of the content of metaphysical modal notions can be informed through everyday/naturalistic cognitive and linguistic practices? If there is a gap between these practices and the content of our metaphysical modal assertions, as we believe there is, it appears that the (counterfactual) account needs to be supplemented by various principles, rules, tenets, etc. This reflects on the nature and content of philosophical notions, as it seems that one may not be able to endorse an extreme externalist account of philosophical expressions and concepts, of the kind Williamson favours, and at the same time draw out a substantial epistemology of these notions, as a robust interpretation of metaphysical modal truth seems to require.
ABSTRACT. The epistemological status of modalities is one of the central issues of contemporary philosophy of science: by observing the actual world, how can scientists obtain knowledge about what is possible, necessary, contingent, or impossible? It is often thought that a satisfactory answer to this puzzle requires making non-trivial metaphysical commitments, such as grounding modal knowledge on essences or being committed to forms of modal realism. But this seems to put the cart before the horse, for it assumes that in order to know such ordinary modal facts as “it is possible to break a teacup” or such scientific modal facts as “superluminal signaling is impossible”, we should first have a clear metaphysical account of the relevant aspects of the world. It seems clear to us that we do have such everyday and scientific knowledge, but less clear that we have any kind of metaphysical knowledge. So, rather than starting with metaphysical questions, we offer a metaphysically neutral account of how modal knowledge is gained that nevertheless gives a satisfactory description of the way modal beliefs are formulated in science and everyday life.
We begin by explicating two metaphysically neutral means for achieving modal knowledge. The first, a priori way is founded on the idea of relative modality. In relative modality, modal claims are defined and evaluated relative to a system. Claims contradicting what is accepted, fixed or implied in a system are impossible within that system. Correspondingly, claims that can be accepted within the system without contradiction are possible. Necessary claims in a system are such that their negation would cause a contradiction, and so on. The second, a posteriori way is based on the virtually universally accepted Actuality-to-Possibility Principle. Here, what is observed to be or not to be the case in actuality or under manipulations gives us modal knowledge. Often this also requires making ampliative inferences. The knowledge thus gained is fallible, but the same holds for practically all empirical knowledge.
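A minimal formal gloss of the first, relative-modality route (our regimentation; S is the set of claims fixed by the system and ⊢ its background consequence relation):

```latex
% Possibility and necessity relative to a system S:
\begin{align*}
\Diamond_S\,\varphi &\iff S \cup \{\varphi\} \nvdash \bot
  && \text{($\varphi$ can be accepted within $S$ without contradiction)}\\
\Box_S\,\varphi &\iff S \cup \{\neg\varphi\} \vdash \bot
  && \text{(denying $\varphi$ contradicts what $S$ fixes)}
\end{align*}
```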
Based on prevalent scientific practice, we then show that there is an important bridge between these two routes to modal knowledge: Usually, what is kept fixed in a given system, especially in scientific investigation, is informed by what is discovered earlier through manipulations. Embedded in scientific modelling, relative modalities in turn suggest places for future manipulations in the world, leading to an iterative process of modal reasoning and the refinement of modal knowledge.
Finally, as a conclusion, we propose that everything there is to know about modalities in science and in everyday life can be accessed through these two ways (or their combination). No additional metaphysical story is needed for the epistemology of modalities – or if such a story is required, then the onus of proof lies on the metaphysician. Ultimately, relative modality can accommodate even metaphysical modal claims. However, they will be seen as claims simply about systems and thus not inevitably about reality. While some metaphysicians might bite the bullet, few have been ready to do so explicitly in the existing literature.
ABSTRACT. Philosophical questions arise for all teachers. Some of these arise at an individual teacher/student level (what is and is not appropriate discipline?); some at a classroom level (what should be the aim of maths instruction?); some at a school level (should classes be organised on mixed-ability or graded-ability lines?); and some at a system level (should governments fund private schooling, and if so on what basis?).
These philosophical, normative, non-empirical questions impinge equally on all teachers, whether they are teaching mathematics, music, economics, history, literature, theology or anything else in an institutional setting.
The foregoing questions and engagements belong to what can be called general philosophy of education; a subject with a long and distinguished past, contributed to by a roll-call of well-known philosophers and educators such as: Plato, Aristotle, Aquinas, Locke, Mill, Whitehead, Russell, Dewey, Peters, Hirst and Scheffler (to name just a Western First XI).
But as well as general philosophy of education, there is a need for disciplinary philosophy of education; and for science education such philosophy is dependent upon the history and philosophy of science. Some of the disciplinary questions are internal to teaching the subject, and might be called ‘philosophy for science teaching’. This covers the following kinds of questions: Is there a singular scientific method? What is the scope of science? What is a scientific explanation? Can observational statements be separated from theoretical statements? Do experimental results bear inductively, deductively or abductively upon hypotheses being tested? What are legitimate and illegitimate ways to rescue theories from contrary evidence?
Other disciplinary questions are external to the subject, and might be called ‘philosophy of science teaching’. Here questions might be: Can science be justified as a compulsory school subject? What characterises scientific ‘habits of mind’ or ‘scientific temper’? How might competing claims of science and religion be reconciled? Should local or indigenous knowledge be taught in place of orthodox science or alongside it, or not taught at all? Doubtless the same kinds of questions arise for teachers of other subjects – mathematics, economics, music, art, religion.
There are many reasons why study of history and philosophy of science should be part of preservice and in-service science teacher education programs. Increasingly school science courses address historical, philosophical, ethical and cultural issues occasioned by science. Teachers of such curricula obviously need knowledge of HPS. Without such knowledge they either present truncated and partial versions of the curricula, or they repeat shallow academic hearsay about the topics mentioned. Either way their students are done a disservice. But even where curricula do not include such ‘nature of science’ sections, HPS can contribute to more interesting and critical teaching of the curricular content.
Beyond these ‘practical’ arguments for HPS in teacher education, there are compelling ‘professional’ arguments. A teacher ought to know more than just what he or she teaches. As an educator, they need to know something about the body of knowledge they are teaching, something about how this knowledge has come about, how its claims are justified, what its limitations are and, importantly, what the strengths and contributions of science have been to the betterment of human understanding and life. Teachers should have an appreciation of, and value, the tradition of inquiry into which they are initiating students. HPS fosters this.
Non-Causal Explanations of Natural Phenomena and Naturalism
ABSTRACT. The aim of this paper is to assess whether a counterfactual account of mathematical explanations of natural phenomena (MENP) (Baker 2009) is compatible with a naturalist stance. Indeed, nowadays many philosophers claim that non-causal explanations of natural phenomena are ubiquitous in science and try to provide a unified account of both causal and non-causal scientific explanations (Reutlinger, Saatsi 2018). Among the different kinds of non-causal explanations of natural phenomena, MENP are regarded as paradigmatic examples of non-causal scientific explanations (Lange 2013). According to many philosophers, among the unified accounts of scientific explanation that have been proposed so far, the most promising are those that try to extend the counterfactual theory of scientific explanation to cover non-causal scientific explanations (Reutlinger 2018). We thus focus on Baron, Colyvan and Ripley (2017) (BCR), since it is one of the most well-developed attempts to provide an account of MENP based on a counterfactual theory of scientific explanation. More precisely, we examine the BCR counterfactual account of why the shape of honeycomb cells is hexagonal. This account rests on the idea that, through a counterfactual about mathematics, one can illuminate why the shape of the cells cannot but meet an optimality requirement. We first analyse whether the BCR account is an adequate explanation of the cells’ shape, and then we assess whether such an account would be acceptable to those who wish to adopt a naturalist stance. To do so, we specify what minimal requirements a stance has to meet in order to count as naturalist. We show that the BCR account of the shape of honeycomb cells is unsatisfactory, because it is focused on the bidimensional shape of the cells, while actual cells are tridimensional, and the tridimensional shape of the cells does not meet any optimality requirement (Räz 2013). We also show that it might in any case be very difficult to make the BCR account compatible with a naturalist stance, because of its metaphysical assumptions about how mathematics might constrain the physical domain. We claim that this kind of “explanation by constraint” (Bertrand 2018; Lange 2013) is incompatible with a naturalist stance, because there is no naturalist account of how such a constraint might obtain.
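For reference, the two-dimensional optimality claim at issue is the Honeycomb Conjecture, proved by Hales; stated roughly (our gloss):

```latex
% Honeycomb theorem (rough statement): for any partition of the plane into
% regions of unit area, the perimeter-to-area ratio satisfies
\[
  \frac{\text{perimeter}}{\text{area}} \;\geq\; \sqrt[4]{12} \;\approx\; 1.86 ,
\]
% with the bound attained by the regular hexagonal tiling.
```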
References
Baker A. 2009. Mathematical Explanation in Science. British Journal for the Philosophy of Science, 60: 611–633.
Baron S., Colyvan M., Ripley D. 2017. How Mathematics Can Make a Difference. Philosophers’ Imprint, 17: 1–19.
Bertrand M. 2018. Metaphysical Explanation by Constraint. Erkenntnis, DOI: 10.1007/s10670-018-0009-5.
Lange M. 2013. What Makes a Scientific Explanation Distinctively Mathematical? British Journal for the Philosophy of Science, 64: 485–511.
Räz T. (2013). On the Application of the Honeycomb Conjecture to the Bee’s Honeycomb. Philosophia Mathematica, 21: 351–360.
Reutlinger A. 2018, Extending the Counterfactual Theory of Explanation, in A. Reutlinger, J. Saatsi (eds.), Explanation beyond Causation. Oxford: Oxford University Press: 74–95.
Reutlinger A., Saatsi J. (eds.) 2018. Explanation beyond Causation. Oxford: Oxford University Press.
ABSTRACT. Elgin has argued that scientific understanding is, in general, non-factive because it often partly consists in idealizations (“felicitous falsehoods”). In contrast, Strevens argues that idealizations can be eliminated from models by which we understand phenomena of interest, and hence that understanding is “correct,” or quasi-factive. In contrast to both, I argue that the factivity debate cannot be settled, as a matter of principle.
The factivity debate concerns whether felicitous falsehoods can ever constitute our understanding. Elgin (2004, pp. 113-114) cites “the laws, models, idealizations, and approximations which are... constitutive of the understanding that science delivers.” Yet, as Strevens notes, the evidence Elgin adduces for non-factivity is consistent with idealizations falling short of constituting understanding.
In contrast, for Strevens (2013, p. 505), to understand why something is the case is to “grasp a correct explanation” of it. For Strevens, explanation is model-based, hence so is the understanding that explanation provides. The role of idealizations is heuristic: to provide simplified models that preserve factors that causally and counterfactually make a difference to the phenomena theorized. Strevens (2013, p. 512) distinguishes the explanatory and literal contents of idealized models. The literal content of the model includes idealizations and their consequences. We obtain its explanatory content by devising a translation manual that eliminates idealizing assumptions and replaces them by conditional statements that are actually true. Understanding is correct (quasi-factive) to the extent that the explanatory content of the model by which we understand is accurate.
I now move to my own contribution. In appraising the debate about whether understanding is factive, we should differentiate between our conceptions – the stuff of thought – and the cultural artifacts we use as props for thinking: our models and theories. When many alternative models of the same phenomena are available, some models are more “cognitively salient” than others (Ylikoski and Kuorikoski 2010). Subjectively, they come to mind more easily; objectively, their easier access is due to their greater explanatory power.
With the theory/mind difference in view, I distinguish two questions: (i) whether idealizations are constitutive of the models scientists use; and (ii) whether idealizations are constitutive of the cognitive representations by which scientists understand. If there is nothing more we can say about the cognitive aspects of understanding, then we lack a procedure for finding out which parts of a model are internalized as cognitive representations.
This matters for the factivity of understanding: we have no way of telling whether idealizations (be they in-principle eliminable or not) are in fact cognitively represented by scientists conceiving of the phenomena thus idealized. That is, we have no basis to settle the issue of whether understanding is quasi-factive or non-factive.
References:
Elgin, C.Z. (2004) True Enough. Philosophical Issues 14, pp. 113-131.
Strevens, M. (2013) No understanding without explanation. Studies in History and Philosophy of Science 44, pp. 510-515.
Ylikoski, P., & Kuorikoski, J. (2010) Dissecting explanatory power. Philosophical Studies 148, pp. 201-219.
Idealizations and the decomposability of models in science
ABSTRACT. Idealizations are a central part of many scientific models. Even if a model represents its target system in an accurate way, the model will not replicate the whole target system but will only represent relevant features and ignore features that are not significant in a specific explanatory context. Conversely, in some cases not all features of a model will have a representative function. One common strategy to account for these forms of idealization is to argue that idealizations can have a positive function if they do not distort the difference-makers of the target system (Strevens 2008, 2017).
This view about the role of idealized models has recently been challenged by Collin Rice (Rice 2017, 2018). He claims that the strategy of accounting for idealizations in terms of a division between the representative parts of a model and the parts that can be ignored fails for several reasons. According to him, idealizations are essential to the mathematical framework in which models can be constructed. This idealized mathematical framework is, in turn, a necessary precondition for creating and understanding the model, and it undermines our ability to separate distorted from representative features of a model. His second reason for doubting the adequacy of the strategy of dividing between relevant and irrelevant model parts is the fact that many models distort difference-making features of the target system. Alternatively, he suggests a position he calls the holistic distortion view of idealized models. This position includes the commitment that highly idealized models allow the scientist to discover counterfactual dependencies without representing the entities, processes or difference-makers of their target systems. In my presentation I am going to argue against this position and claim that explaining, and exhibiting causal and non-causal counterfactual dependence relations, with the help of a model is only possible if the model accurately represents the difference-makers within the target system. I will do this by explicating the notion of the mathematical framework of a model, on which Rice’s argument heavily depends, and by re-evaluating some of his examples of idealized models used in scientific practice, such as the Hardy–Weinberg equilibrium model in biology and the use of the thermodynamic limit in physics.
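For reference, the Hardy–Weinberg model mentioned above rests on familiar idealizing assumptions (an infinite, randomly mating population with no selection, mutation, or migration), under which allele frequencies p and q = 1 − p yield genotype frequencies:

```latex
\[
  f(AA) = p^{2}, \qquad f(Aa) = 2pq, \qquad f(aa) = q^{2},
  \qquad p^{2} + 2pq + q^{2} = 1 .
\]
```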
Literature
Rice, Collin. “Idealized Models, Holistic Distortions, and Universality.” Synthese 195, no. 6 (June 2018): 2795–2819. https://doi.org/10.1007/s11229-017-1357-4.
Rice, Collin. “Models Don’t Decompose That Way: A Holistic View of Idealized Models.” The British Journal for the Philosophy of Science, August 30, 2017, axx045–axx045. https://doi.org/10.1093/bjps/axx045.
Strevens, Michael. “How Idealizations Provide Understanding.” In Explaining Understanding: New Essays in Epistemology and the Philosophy of Science, edited by Stephen R. Grimm, Christoph Baumberger, and Sabine Ammon, 37–49. New York: Routledge, 2017.
Strevens, Michael. Depth: An Account of Scientific Explanation. Cambridge, MA: Harvard University Press, 2008.
ABSTRACT. In this paper, I diagnose that evolutionary game theory models are used in multiple diverse ways and for different purposes, either directly or indirectly contributing toward the generation of acceptable scientific explanations. The philosophical literature on modelling, rather than recognizing this diversity, attempts to fit all of these into a single narrow account of modelling, often focusing only on the analysis of a particular model. Recently, Cailin O’Connor and James Owen Weatherall (2016) argued that a lack of family resemblance between modelling practices makes an understanding of the term ‘model’ impossible, suggesting that “any successful analysis [of models] must focus on sets of models and modelling practice that hang together in ways relevant for the analysis at hand” (p. 11). Rather than providing an essentialist account of what scientific modelling practice is or should be, covering all the different ways scientists use the word ‘model’, I settle for something far less ambitious: a philosophical analysis of how models can explain real-world phenomena that is narrow in that it focuses on Evolutionary Game Theory (EGT) and broad in its analysis of the pluralistic ways EGT models can contribute to explanations. Overly ambitious accounts have attempted to provide a philosophical account of scientific modelling that tends to be too narrow in its analysis of a singular model or small set of models and too broad in its goal of generalizing its conclusions over the whole set of scientific models and modelling practices – a feat that may, in fact, be impossible to achieve, resembling Icarus, who flew too close to the sun. Nevertheless, many of the conclusions of my analysis will be extendable to other sets of models, especially in biology and economics, but doubt must be cast on whether any essence of models can be discovered.
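As a background illustration of the kind of model at stake (not an example taken from the paper itself), the replicator dynamics is perhaps the canonical EGT model: for a population state x = (x_1, …, x_n) over strategies and payoff matrix A,

```latex
\[
  \dot{x}_i \;=\; x_i \big( (Ax)_i \;-\; x^{\top} A x \big),
\]
```

so a strategy grows in frequency exactly when its payoff exceeds the population average.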
References
O’Connor, C., and J. O. Weatherall. 2016. ‘Black Holes, Black-Scholes, and Prairie Voles: An Essay Review of Simulation and Similarity, by Michael Weisberg’, Philosophy of Science, 83, pp. 613–26.
ABSTRACT. The fact that some objects interact well together – say, a function with an argument in its domain of definition, whose interaction produces a result – defines a notion of duality that has been central in last-century mathematics. Not only does it provide a general framework for considering at the same time objects of interest and tests (or measures) on them, but it also provides a way to both enrich and restrict the objects considered, by studying a relaxed or strengthened notion of interaction.
A reconstruction of logic around the notion of interaction has been underway since the pioneering works of Krivine and Girard, where (para-)proofs are seen as interacting by exchanging logical arguments, the interaction stopping successfully only if one of the two gives up as it recognises that it lacks arguments. All the proofs interacting in a certain way – for instance, interacting correctly with the same proof – can then be seen as embodying a certain formula; and the possible operations on proofs translate into operations on formulæ.
In this work, we intend to show that, somewhat surprisingly, the same approach in terms of duality and interaction succeeds in grasping structural aspects of natural language as purely emergent properties. Starting from the unsupervised segmentation of an unannotated linguistic corpus, we observe that co-occurrence of linguistic segments at any level (character, word, phrase) can be considered as a successful interaction, defining a notion of duality between terms. We then proceed to represent those terms by the distribution of their duals within the corpus and define the type of the former through a relation of bi-duality with respect to all the other terms of the corpus. The notion of type can then be refined by considering the interaction of a type with other types, thus creating the starting point of a variant of Lambek calculus.
This approach has several precursors, for instance Hjelmslev’s glossematic algebra, and more generally, the structuralist theory of natural language (Saussure, Harris). The formal version we propose in this work reveals an original relation between those perspectives and one of the most promising trends in contemporary logic. We also include an implementation of the described algorithm for the analysis of natural language. Accordingly, our approach appears as a way of analyzing many efficient mechanized natural language processing methods. More generally, this approach opens new perspectives to reassess the relation between logic and natural language.
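A minimal sketch, in Python, of the kind of corpus procedure described above; whitespace tokenization stands in for the unsupervised segmentation step, and all names are illustrative rather than taken from the authors' implementation:

```python
from collections import defaultdict

def dual_profiles(corpus, window=1):
    """Represent each term by the distribution of its 'duals': the terms it
    co-occurs with (a successful interaction) within a fixed window."""
    profiles = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        tokens = sentence.split()  # stand-in for unsupervised segmentation
        for i, t in enumerate(tokens):
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if j != i:
                    profiles[t][tokens[j]] += 1
    return profiles

def same_type(t1, t2, profiles):
    """Crude bi-duality test: two terms receive the same (distributional)
    type if they share exactly the same set of duals."""
    return set(profiles[t1]) == set(profiles[t2])

if __name__ == "__main__":
    corpus = ["the cat sleeps", "the dog sleeps", "a cat runs"]
    profiles = dual_profiles(corpus)
    print(dict(profiles["cat"]))           # distribution of 'cat''s duals
    print(same_type("cat", "dog", profiles))
```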
Bibliography.
Gastaldi, Juan-Luis. “Why can computers understand natural language?” Under review in Philosophy and Technology.
Girard, Jean-Yves. “Locus solum: from the rules of logic to the logic of rules”. In: Mathematical Structures in Computer Science 11.3 (2001), pp. 301–506.
Hjelmslev, Louis and Hans Jørgen Uldall. Outline of Glossematics. English. Nordisk Sprog- og Kulturforlag. Copenhague, 1957.
Krivine, Jean-Louis. “Realizability in classical logic”. In: Panoramas et synthèses 27 (2009), pp. 197–229.
Lambek, Joachim. “The Mathematics of Sentence Structure”. In: The American Mathematical Monthly 65.3 (Mar. 1958), pp. 154–170.
11:30
Valeria Giardino (Archives Henri-Poincaré - Philosophie et Recherches sur les Sciences et les Technologies, France)
The practice of proving a theorem: from conversations to demonstrations
ABSTRACT. In this talk, I will focus on mathematical proofs “in practice” and introduce as an illustration a proof of the equivalence of two presentations of the Poincaré homology sphere, which is taken from a popular graduate textbook (Rolfsen, 1976) and discussed in De Toffoli and Giardino (2015). This proof is interesting because it is given by showing a sequence of pictures and explaining in the text the actions that ought to be performed on them to move from one picture to the other and reach the conclusion. By relying on this example, I will propose to take into account Stone and Stojnic (2015)’s view of demonstrations as practical actions to communicate precise ideas; my objective is to evaluate whether such a suggestion can help define what the mathematical “practice” of giving a proof is. Stone and Stojnic consider as a case study an “origami proof” of the Pythagorean theorem and base their analysis on certain aspects of the philosophy of language of David Lewis. According to Lewis (1979), communication naturally involves coordination; in principle, any action could be a signal of any meaning, as long as the agent and her audience expect the signal to be used that way; a conversation happens only when a coordination problem is solved. Formal reasoning is a particular form of coordination that happens on a conversational scoreboard, that is, an abstract record of the symbolic information that interlocutors need to track in conversation. Stone and Stojnic conclude that the role of practical action in a conversation is explained in terms of coherence relations: meaning depends on a special sort of knowledge—convention—that serves to associate practical actions with precise contributions to conversation; interpretive reasoning requires us to integrate this conventional knowledge—across modalities—to come up with an overarching consistent pattern of contributions to conversation. On this basis, I will discuss the pros of considering proofs as conversations: if this is the case, then non-linguistic representations like diagrams have content, and mathematics is a distributed cognitive activity, since transformations in the world can be meaningful. However, some general problems arise in Lewis’ framework when applied to mathematical proof: (i) does a convention of truthfulness and trust really exist?; (ii) how can we coordinate and update our conversational scoreboard when we read a written demonstration? The interest of the talk will be to investigate the possibility of a link between the philosophy of mathematical practice and the philosophy of language.
References
De Toffoli, S. and Giardino, V. (2015). An Inquiry into the Practice of Proving in Low-Dimensional Topology. Boston Studies in the Philosophy and History of Science, 308, 315-336.
Lewis, D. K. (1979). Scorekeeping in a language game. Journal of Philosophical Logic, 8, 339–359.
Rolfsen, D. 1976. Knots and links. Berkeley: Publish or Perish.
Stone, M and Stojnic, U. (2015). Meaning and demonstration. Review of Philosophy and Psychology (special issue on pictorial and diagrammatic representation), 6(1), 69-97.
An Attempt at Extending the Scope of Meaningfulness in Dummett's Theory of Meaning.
ABSTRACT. Michael Dummett proposed a radically new approach to the problem of how the philosophical foundations of a meaning theory for a natural language are to be established. His central point is threefold. First, a theory of meaning should give an account of the knowledge (i.e., understanding) that competent speakers of the language have of it. Second, this knowledge consists in certain practical abilities: if someone counts as a competent speaker, it is because, by using the language, she/he is able to do anything that all and only those who understand the language can do. Therefore, what a theory of meaning should account for is those practical abilities that a competent speaker possesses. What, then, do those practical abilities consist in? This question leads us to Dummett’s third point. Ordinarily, one is entitled to claim possession of an ability by exhibiting (making manifest the possession of) that ability: i.e., by having done, often doing, or being likely to do something that can be done by virtue of the ability. Admittedly, there is an intricate problem about what one should do to be so entitled; let us set that problem aside. Dummett tackled another (related but more profound) problem: in almost all natural languages and formalized languages, there are various sentences that, while well-formed and hence associated with certain precise conditions for them to be true, are definitely beyond the scope of any possible exhibition of those abilities that (if there were any at all) the understanding of the sentences would consist in.
He objected to the common opinion that the meaning of a sentence could be equated with its truth-conditions and instead claimed that meaning should be accounted for as consisting in a (constructive) provability condition; that is, according to Dummett, someone knows the meaning of a sentence just in case he knows what has to be done (what construction has to be realized) to justify the sentence (i.e. to establish constructively that the sentence holds).
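The intuitionistic (BHK-style) provability conditions alluded to here can be summarised as follows; this is standard background rather than the author's own formulation:

```latex
% BHK-style meaning conditions for the connectives:
\begin{align*}
\text{a proof of } A \wedge B &:\ \text{a pair of a proof of } A \text{ and a proof of } B;\\
\text{a proof of } A \vee B   &:\ \text{a proof of } A \text{ or a proof of } B,
  \text{ together with an indication of which};\\
\text{a proof of } A \rightarrow B &:\ \text{a construction turning any proof of } A
  \text{ into a proof of } B;\\
\text{a proof of } \neg A &:\ \text{a construction turning any proof of } A
  \text{ into a proof of } \bot .
\end{align*}
```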
I basically agree with these lines of Dummett’s thought, although I should point out that his view on the scope of meaningfulness (intelligibility) of sentences is too restrictive. Dummett proposes that in giving the provability conditions of a sentence we should adopt the intuitionistic meaning conditions of the logical connectives. The reason is that the intuitionistic connectives are conservative with respect to constructivity: if a sentence is derived intuitionistically from some assumptions, then it is constructively justifiable provided those assumptions are. However, I think we can point out that there are some sentences that, while lying beyond this criterion, can be established by virtue of an agent’s behavior that conclusively justifies them. In that case the agent’s behavior could be said to make her understanding of the sentence manifest. One typical example of such sentences is a certain kind of infinitary disjunction treated prominently by proponents of geometric logic such as S. Vickers. I will investigate the matter more closely in the talk.
Two types of unrealistic models: programmatic and prospective
ABSTRACT. The purpose of this paper is to introduce and assess a distinction among unrealistic models based on the kind of idealizations they resort to. On the one hand, programmatic models resort to idealizations that align with the core commitments of a research program. On the other hand, prospective models resort to idealizations that challenge those core commitments. Importantly, unrealistic models are not intrinsically programmatic or prospective. Rather, their status depends on an interpretation of the idealizations.
Idealizations are features of models that make them different from - typically simpler than - the target phenomena they represent. However, these features become idealizations only after two stages of interpretation performed by the user of the model. First, there is a non-referential interpretation of the model’s vehicle. In this stage, the user decides which features instantiated by the vehicle are those that the model is going to exemplify. These features are conceptualised in accordance with the contingent commitments of the user. These features are the bearers of the idealizations-to-be. Second, there is a referential interpretation of the features exemplified by the model. In this stage, the user assigns features exemplified by the model to features of the target phenomenon. In the assignment, exemplified features of the model are evaluated as more or less idealized representations of their denotata. Such evaluation is decided by standards and other epistemic commitments held by the user of the model.
Idealizations, as the product of a user’s interpretation, can align with or challenge core commitments of research programs in both stages of interpretation. First, non-referential interpretations can conflict with accepted selections of features that a model exemplifies or with the accepted conceptualizations of such features. Particularly salient are explanatory commitments, which can decide which conceptualizations are legitimate within a research program. Second, referential interpretations can conflict with accepted standards for assignment. Explanatory commitments are also relevant in deciding these standards.
I continue to argue that programmatic and prospective models typically aim for distinct epistemic achievements. On the one hand, programmatic models aim for how-plausibly explanations, while prospective models aim for how-possibly explanations. However, I contend that how-plausibly and how-possibly explanations should not be regarded as explanations in the traditional sense, but rather as distinct forms of understanding. Thus, programmatic and prospective models share a common, although nuanced, aim: the advancement of understanding.
I test this account in a model case study (Olami, Feder, Christensen, 1992). This model is a cellular automaton computer model that simulates aspects of the behaviour of earthquakes. I show how different explanatory commitments, namely mechanistic and mathematical explanatory commitments, align with and challenge core commitments of distinct research programs. I also explore how these commitments lead to distinct understandings of the target phenomenon.
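A minimal sketch of the kind of update rule the Olami–Feder–Christensen cellular automaton uses, with illustrative parameter values (a redistribution parameter alpha below 0.25 is what makes the dynamics non-conservative); this is our reconstruction for orientation, not code from the case study:

```python
import numpy as np

def ofc_avalanche(z, alpha=0.2, z_c=1.0):
    """Relax a stress grid: every site at or above threshold z_c topples,
    passing a fraction alpha of its stress to each of its four neighbours
    (open boundaries). Returns the avalanche size (number of topplings)."""
    size = 0
    while True:
        over = np.argwhere(z >= z_c)
        if len(over) == 0:
            return size
        for i, j in over:
            s = z[i, j]
            z[i, j] = 0.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < z.shape[0] and 0 <= nj < z.shape[1]:
                    z[ni, nj] += alpha * s   # non-conservative if 4*alpha < 1
            size += 1

def run(steps=1000, n=20, alpha=0.2, z_c=1.0, seed=0):
    """Drive the lattice uniformly up to threshold, relax, record avalanche sizes."""
    rng = np.random.default_rng(seed)
    z = rng.uniform(0.0, z_c, (n, n))
    sizes = []
    for _ in range(steps):
        z += z_c - z.max()                      # uniform drive to threshold
        i, j = np.unravel_index(np.argmax(z), z.shape)
        z[i, j] = z_c                           # guard against rounding
        sizes.append(ofc_avalanche(z, alpha, z_c))
    return sizes

if __name__ == "__main__":
    sizes = run()
    print(max(sizes), sum(sizes) / len(sizes))  # largest and mean simulated 'earthquake'
```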
References
Olami, Z., Feder, H.J.S. & Christensen, K. 1992. Self-Organized Criticality in a Continuous, Nonconservative Cellular Automaton Modeling Earthquakes. Physical Review Letters 68, 1244–1247.
A Dynamic Neo-Realism as an Active Epistemology for Science
ABSTRACT. In this talk I defend a dynamic epistemic neo-realism (DEN). One important difference between DEN and the traditional no-miracles account of realism is that the determining factors for epistemic commitment to science (i.e. belief in empirical and theoretical knowledge) lie in the active nature of the processes of science as opposed to NMR’s focus on the logical properties of science’s products.
I will focus on two factors of DEN in my discussion. First, I propose an explosion of the dichotomy between realism and anti-realism ruling the current realist debate. (See critiques of this dichotomy in various forms in e.g. McMullin (1984), Stein (1989), and Kukla (1994).) The explosion I suggest results in a continuum of (neo-)realist stances towards the epistemic content of theories which rests on two motivations: (1) Depicting epistemic commitment to science in terms of a dichotomy between anti-realism and realism is inadequate, as, given the trial-and-error nature of science, most of science happens on a continuum between these stances. (2) Epistemic commitment to science need not (primarily) depend on the (metaphysical) truth of science’s ontological claims, but is better determined on pluralist, functional, and pragmatic grounds. This position is not the same as Arthur Fine’s (1984) natural ontological attitude. I advocate a continuum of epistemic (‘neo-realist’) stances as opposed to one ‘core’ one. Rather than imploding the realist/anti-realist dichotomy, I differentiate and refine it into a continuum of epistemic commitment.
Secondly – and this is the main focus of this talk – I offer a critical reconsideration of the three traditional tenets – metaphysical, semantic and epistemic – of scientific realism from the perspective of DEN. Specifically, the traditional versions of the semantic and epistemic tenets have to be re-interpreted in the light of the suggested continuum of neo-realist stances. Reference has to be re-conceptualised as an epistemic tracking device and not only (or at all, perhaps) as an indicator of ontological existence, while the concept of truth has to be ‘functionalised’ in Peirce’s (1955) sense of truth-as-method, with some adjustments to a traditional convergent view of science.
In conclusion, the account of neo-realism defended here is a fallibilist epistemology that is pragmatist in its deployment of truth and reference and pluralist in its method of evaluation. In its naturalised tracing of science, it explains the progress of science as the result of intensive time-and-context-indexed science-world interaction.
Bibliography
Fine, Arthur. 1984. The Natural Ontological Attitude. In Scientific Realism, J. Leplin (ed.), 261–277. Berkeley: University of California Press.
Kukla, André. 1994. Scientific Realism, Scientific Practice, and the Natural Ontological Attitude. British Journal for the Philosophy of Science 45: 955–975.
McMullin, Ernan. 1984. A Case for Scientific Realism. In Scientific Realism, J. Leplin (ed.), 8–40. Berkeley: University of California Press.
Peirce, Charles, S. 1955. The Scientific Attitude and Fallibilism. In Philosophical Writings of Peirce, J. Buchler (ed.), 42–59. New York: Dover Publications.
Stein, Howard. 1989. Yes, but … - Some Skeptical Remarks on Realism and Anti-Realism. Dialectica 43(1/2): 47–65.
Expected Utility, Inductive Risk, and the Consequences of P-Hacking
ABSTRACT. P-hacking is the manipulation of research methods and data to acquire statistically significant results. It includes the direct manipulation of data and/or opportunistic analytic tactics. Direct manipulation involves experimental strategies such as dropping participants whose responses to drugs would weaken associations; redefining trial parameters to strengthen associations; or selectively reporting on experimental results to obtain strong correlations. Opportunistic analytic tactics include performing multiple analyses on a set of data or performing multiple subgroup analyses and combining results until statistical significance is achieved.
P-hacking is typically held to be epistemically questionable, and thus practically harmful. This view, which I refer to as the prevalent position, typically stresses that since p-hacking increases false-positive report rates, its regular practice, particularly in psychology and medicine, could lead to policies and recommendations based on false findings. My first goal in this paper is to formulate the prevalent position using expected utility theory. I express a hypothetical case of p-hacking in medical research as a decision problem, and appeal to existing philosophical work on false-positive report rates as well as general intuitions regarding the value of true-positive results versus false-positive ones, to illustrate the precise conditions under which p-hacking is considered practically harmful. In doing so, I show that the prevalent position is plausible if and only if (a) p-hacking increases the chance that an acquired positive result is false and (b) a true-positive result is more practically valuable than a false-positive one.
In contrast to the prevalent position, some claim that experimental methods which constitute p-hacking do play a legitimate role in medical research methodology. For example, analytic methods which amount to p-hacking are a staple of exploratory research and have sometimes led to important scientific discoveries in standard hypothesis testing. My second aim is to bring the prevalent position into question. I argue that although it is usually the case that refraining from p-hacking entails more desirable practical consequences, there are conditions under which p-hacking is not as practically perilous as we might think. I use the formal resources of expected utility theory developed in the first part of the paper, together with lessons learned from the arguments surrounding inductive risk, to articulate the conditions under which this is the case. More specifically, I argue that there are hypotheses for which p-hacking is not as practically harmful as we might think.
11:30
Renata Arruda (Universidade Federal de Goiás, Brazil)
Multicausality and Manipulation in Medicine
ABSTRACT. The objectivity of causality in its observable aspects is generally characterized by reference to the concrete alteration of effects brought about by an alteration in a cause. One of the ways a causal relationship comes about is precisely through human intervention in the factor considered to be the cause. This type of deliberate intervention, which an agent can produce with manipulable factors, is absolutely intrinsic to medicine. My interest here is to present how medicine, as a practical science, articulates the multiple factors and phenomena that act on an organism in order to understand cause and effect relationships. To that end, I associate J. L. Mackie's and Kenneth Rothman's theories about the necessary and sufficient conditions for the cause-effect relation with the theory of manipulability. This theory, in general, identifies the causal relation as that in which some kind of intervention in the cause gives rise to the effect. Medical science is distinguished precisely by the practices it performs, without which it would lose its own meaning. In this way, medicine is one of the sciences in which the relation between cause and effect can be evaluated objectively.
Despite these observable aspects, a problem arises. Faced with the complexity of an organism, in which several factors act together to produce an effect, how are we to delimit the cause on which to intervene? The proper functioning of the organism is not based on the functioning of isolated causes. If, on the one hand, the analysis of causality from a singularist perspective is impracticable in sciences like medicine, on the other hand, the analysis becomes more complicated once we add the fact that some physiological mechanisms are entirely unknown. That is to say, in treating the organism, medicine depends fundamentally on intervention in cause-effect relationships, in a complex system with some mechanisms that are not at all clear. Notwithstanding all these difficulties, medicine is recognized for succeeding in the various activities that concern it. In this context, both Mackie's and Rothman's conceptions of the cause-effect relationship help us to understand the role of intervention in medicine and its consequences for the general conception of causality.
References
Cartwright, Nancy. 2007. Hunting Causes and Using Them: Approaches in Philosophy and Economics. Cambridge UP, Cambridge.
Mackie, J. L. 1965. Causes and Conditions. American Philosophical Quarterly 2(4): 245–264.
Rothman, Kenneth; Greenland, Sander; Lash, Timothy L.; Poole, Charles. 2008. Causation and Causal Inference. In Modern Epidemiology, 3rd edition. Wolters Kluwer Health/Lippincott Williams & Wilkins, Philadelphia.
Woodward, James. 2013. Causation and Manipulability. The Stanford Encyclopedia of Philosophy (Winter Edition), Edward N. Zalta (ed.).
Stefan Petkov (Beijing Normal University School of Philosophy, China)
Scientific explanations and partial understanding
ABSTRACT. Notions such as partial or approximate truth are often invoked by proponents of factual accounts of understanding in order to address the problem of how flawed theories can provide understanding of their factual domain. A common problem with such arguments is that they merely pay lip service to these theories of truth instead of exploring them more fully. This is perplexing, because a central feature of factual accounts is the so-called veridical condition. The veridical condition itself results from a broadly inferential approach to understanding, according to which only factually true claims can figure within an explanans capable of generating understanding of its explanandum.
Here I aim to remedy this by exploring Da Costa’s notion of partial truth and linking it with a factual analysis of understanding. Pursuing such an account brings out several interesting features of explanatory arguments and understanding.
Firstly, partial truth naturally links with the intuition that understanding comes in degrees. This appears straightforwardly from the fact that an explanation that contains partially true propositions can only provide partial explanatory information for its explanandum.
Secondly, the distinction between theoretical and observational terms, on which the notion of partial truth relies, permits us to be clear about when an explanation provides partial understanding. This can be the case only if the explanans has premises that contain theoretical concepts. Only premises that relate theoretical and observational terms can be taken as partially true (premises that contain descriptive terms only can simply be assessed as true or false). As a result of such partiality, the information transfer from premises to conclusion can be only partially factually accurate, which subsequently leads to partial understanding.
The resulting account of understanding then resolves the core problem that modest factual accounts face—namely, that if a partially factual account of understanding is accepted, then this account should also show by what means a partially true proposition figures centrally in an explanatory argument and explain how flawed theories can make a positive difference to understanding.
I will further support my case by a critical examination of predator-prey theory and the explanatory inferences it generates for two possible population states – the paradox of enrichment and the paradox of the pesticide. The paradox of the pesticide is the outcome of predator-prey dynamics according to which the introduction of a general pesticide can lead to an increase of the pest species. The paradox of enrichment is an outcome of predator-prey dynamics according to which an increase of resources for the prey species can lead to destabilization. Both of these outcomes depend on an idealized conceptualization of the functional response within predator-prey models. This idealization can be assessed as introducing a theoretical term. The explanatory inferences using such a notion of functional response can then be judged only as approximately sound (paradox of the pesticide) or unsound (paradox of enrichment) and as providing only partial understanding of predator-prey dynamics.
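As a concrete illustration of the dynamics the abstract refers to, the following is a minimal numerical sketch, assuming the standard Rosenzweig–MacArthur model with a Holling type II functional response; it is not the author's own model or parameterization, and the parameter values are hypothetical.

```python
# A minimal sketch (not the author's model) of Rosenzweig-MacArthur predator-prey
# dynamics, often used to illustrate the paradox of enrichment: raising the prey's
# carrying capacity K can destabilize the coexistence equilibrium.
import numpy as np
from scipy.integrate import odeint

def rosenzweig_macarthur(state, t, r, K, a, h, e, m):
    N, P = state  # prey and predator densities
    functional_response = a * N / (1 + a * h * N)  # the idealized (Holling type II) term
    dN = r * N * (1 - N / K) - functional_response * P
    dP = e * functional_response * P - m * P
    return [dN, dP]

t = np.linspace(0, 500, 5000)
params_low_K = (1.0, 2.0, 1.0, 1.0, 0.5, 0.2)   # low enrichment: trajectories settle down
params_high_K = (1.0, 8.0, 1.0, 1.0, 0.5, 0.2)  # high enrichment: sustained oscillations

low = odeint(rosenzweig_macarthur, [1.0, 0.5], t, args=params_low_K)
high = odeint(rosenzweig_macarthur, [1.0, 0.5], t, args=params_high_K)
print(low[-1], high[-1])  # the high-K run keeps cycling instead of converging
```

The point of the sketch is only to make the role of the idealized functional-response term visible: it is this term that drives both "paradoxical" outcomes discussed above.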
11:30
Daniel Kostic (University Bordeaux Montaigne; Sciences, Philosophie, Humanité (SPH), University of Bordeaux, Bordeaux, France)
Facticity of understanding in non-causal explanations
ABSTRACT. In the literature on scientific explanation, understanding has been seen either as a kind of knowledge (Strevens 2008, 2013; Khalifa 2017; De Regt 2015) or as a mental state that is epistemically superfluous (Trout 2008). If understanding is a species of knowledge, an important question arises, namely, what makes the knowledge from understanding true?
In causal explanations, the facticity of understanding is conceived in terms of knowing the true causal relations (Strevens 2008, 2013).
However, the issue of facticity is even more conspicuous in non-causal explanations. How should we conceive of it in non-causal explanations, given that they don't appeal to causal, microphysical, or, more generally, ontic details of the target system?
I argue that there are two ways to conceive of the facticity of understanding in a particular type of non-causal explanation, namely the topological one: through understanding the “vertical” and “horizontal” counterfactual dependency relations that these explanations describe.
By “vertical”, I mean a counterfactual dependency relation that holds between variables at different levels or orders in the mathematical hierarchy.
These are explanatory in virtue of constraining a range of variables in a counter-possible sense, i.e. had the constraining theorem been false, it wouldn't have constrained the range of object-level variables. In this sense, the fact that a meta-variable or a higher-order mathematical property holds entails that a mathematical property P obtains in the same class of variables or operations (Huneman 2017: 24).
An example of this approach would be an explanation of the stability of an ecological community. Species and the predation relations between them can be modeled as a graph, which can have the global network property of being a “small world”. The fact that the small-world property holds for that system constrains various kinds of general properties, e.g. stability or robustness (Huneman 2017: 29).
On the other hand, by “horizontal” I mean counterfactual dependency relations that hold at the same level or order in the mathematical hierarchy. Examples of “horizontal” counterfactual dependency relations are the ones that hold between topological variables, such as a node's weighted degree or the network communicability measure, and the variables that describe the system's dynamics as a state space.
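By way of illustration only, the kinds of topological variables mentioned here can be computed with standard network tools; the graph and the measures below are hypothetical stand-ins, not the author's examples.

```python
# A small sketch (hypothetical, not the author's example) of the topological
# variables mentioned above, computed with networkx on a small-world graph.
import networkx as nx

G = nx.connected_watts_strogatz_graph(n=100, k=6, p=0.1, seed=1)  # a "small-world" network

# "Vertical": a global, higher-order property that constrains many object-level facts.
clustering = nx.average_clustering(G)
path_length = nx.average_shortest_path_length(G)
print("clustering:", clustering, "mean path length:", path_length)

# "Horizontal": same-level topological variables of the kind cited in the abstract.
degrees = dict(G.degree())               # node degree (weighted degree if weights were set)
communicability = nx.communicability(G)  # network communicability between node pairs
print("degree of node 0:", degrees[0])
print("communicability(0, 1):", communicability[0][1])
```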
Factivity in vertical cases is easy to understand: it is basically a proof or an argument. It can be understood in a laxer or a stricter sense. The laxer version is in terms of the soundness and validity of the argument; the stricter sense is in terms of grounding (Poggiolesi 2013).
Factivity in horizontal cases is a bit more difficult to pin down. One way would be through a possible-world analysis of the counterfactual dependency relations that the explanations describe. In this sense, facticity has a stronger form, which is germane to the notion of necessity.
The remaining question is whether this account can be generalized beyond topological explanation. I certainly think that at the very least it should be generalizable to all horizontal varieties of non-causal explanations.
The philosophy of history and the methodology of historical knowledge are traditional themes within the framework of continental philosophy. A person reasoning about history seeks to clarify his position in history, to define his presence in it.
History is not only a reality in which humanity finds itself, understands and interprets itself, but also a professional sphere of acquiring and transmitting knowledge. In the 20th century, a kind of «emancipation» of concrete historical knowledge from the conceptual complexes of the "classical" philosophy of history and from metaphysical grounds took place. In the 20th century there was a rejection of the main ideas of modern philosophy regarding the philosophy of history: the idea of a rational world order, the idea of the progressive development of mankind, the idea of a transcendental power responsible for what is happening in history, etc. Anthropologists, sociologists, historians, and ethnographers played an important role in this process of «emancipation» of concrete historical knowledge.
However, many questions did not receive any answer: «What is history?», «What is the historical meaning (and is there any at all)?», «What are the problems of interpretation of history and how can they be overcome?», «What are the general and special features of different types of history?».
One of the ways of contemporary understanding of history is to coordinate the schematism of historical knowledge and the structure of historical being. According to the type of co-presence described in the event communication, three schematic dimensions of historical reality are possible: spatial, situational, temporal.
The spatial schematic is presented in M. Foucault's «Words and Things». According to it, the historical is found there, and only there, where a spatial structure and a description of the typical mode of connection of the elements of this structure are deployed.
The situational schematic of the historical takes place where a specific (moral, political, legal) nature of the connection between historical events is realized. The most important element of the situational schematic is the generation that has received an education and has left behind the fruits and results of its labor. What is attractive in history described in this way is the representation of historical reality as a process: historical formations, historical types, historical characters.
The temporal schematics of the historical, exemplified by M. Heidegger's phenomenological construction in «Being and Time», is found where the temporal measure of the existence of the historical being is explicated, that is, historicity is understood as the temporality of the existence of the real, and the temporality of the historical understanding itself.
ABSTRACT. It is hard to challenge the point of view according to which our century is a century of the Humanities. Indisputable evidence in favor of this point is the list of thematic sections of the 14th, 15th and our 16th Congresses of Logic, Methodology and Philosophy of Science (and Technology). The list of the 14th Congress did not include any section with the term “Humanities” in its title; the programme of the 15th Congress included a section devoted to the philosophy of the Humanities, as does the present Congress, although this section has – if it is possible to say so – a “palliative” title, “Philosophy of the Humanities and the Social Sciences”. And now among the topic areas of the 16th Congress one can see “Philosophical Traditions, Miscellaneous”. There is now an intricate spectrum of different approaches to the philosophical and methodological problems of the Humanities, each of which is connected with its own “philosophy”, ideology, and visions. The fact is that the attempt to form the philosophy of the Humanities along the lines of the philosophy of science has definitely – and perhaps irrevocably – failed. It is time to scrutinize this spectrum with the aim of finding a certain sustainable set of terms and notions for creating a basis for a philosophy (and methodology) of the Humanities.
We propose not a dictionary, nor an encyclopedia (in Umberto Eco's sense), but a glossary, each entry of which will contain clear, straightforward definitions, practice and examples of use, on the one hand, while, on the other hand, no entry will be closed – it may be supplemented and advanced. The order of entries will not be alphabetical; it will rather be determined by the functional features of the terms and notions and by their relationships to each other. These relations can be historical, methodological, ontological, lexico-terminological, socially oriented, etc. The terms and notions included in the glossary give us the opportunity to form a certain kind of frame or, better to say, a kind of net for further research. The net (frame) may be expanded by including new notions, terms, phrases and collocations; the frame may be deepened by forming new connections between “old” notions or between “old” and “new” notions and terms. For example, if we include the notion “text” in the glossary, this inclusion forces us to include such notions as “author”, “reader”, “language”, “style”, “(outer) world”, “history”, and “value”.
We suppose that the initial list of basic notions must include the following set: representation, intention, sign and sign system, code, semiosis and retrograde semiosis (as a procedure of sense analysis), sense, meaning, dialogue, translation, text (and notions connected with text), interpretation and understanding. It is easy to see that basic notions are used in different realms of the Humanities (semiotics, hermeneutics, significs, history of notions and history of ideas, theory of literature, philosophy, logic and linguistics); this fact emphasizes their basic features.
Timm Lampert (Humboldt University Berlin, Germany)
Theory of Formalization: The Tractarian View
ABSTRACT. Logical formalization is an established practice in philosophy as well as in mathematics. However, the rules of this practice are far from clear.
Sainsbury (1991) and Epstein (1994) were among the first to specify criteria of formalization. Since Brun's detailed monograph (Brun 2004), criteria and theories of formalization have been discussed intensively (cf., e.g., most recently Peregrin and Svoboda 2017). No single theory of formalization has emerged from this discussion. Instead, it has become increasingly clear that different theories of formalization involve different traditions, background theories, aims, basic conceptions and foundations of logic. Brun (2004) envisages a systematic and, ideally, automated procedure of formalizing ordinary language as the ultimate aim of logical formalization. Peregrin and Svoboda (2017) ground their theory in inferentialism. Like Brun, they try to combine logical expressivism with a modest normative account of logic through their theory of reflective equilibrium. In contrast, Epstein (1994) bases his theory of formalization on semantic and ontological foundations that are rather close to mathematical model theory. Sainsbury (1991) grounds the project of formalization within the philosophical tradition of identifying logical forms in terms of representing truth conditions. He identifies Davidson as the most elaborated advocate of this tradition. Davidson refers to Tarskian semantics and distinguishes logical formalization from semantic analysis. Sainsbury also assigns the Tractarian View of the early Wittgenstein to the project of identifying truth conditions of ordinary propositions by means of logical formalizations. In contrast to Davidson, however, Wittgenstein does not distinguish the project of formalization from a semantic analysis, and he does not rely on Tarskian semantics. Instead, Wittgenstein presumes a semantics according to which instances of first-order formulas represent the existence and non-existence of logically independent facts.
In my talk, I will show that the Tractarian view can be spelled out in terms of a theory of formalization that provides an alternative to Davidson's
account of what it means to identify logical forms and truth conditions of ordinary propositions. In particular, I will argue that Wittgenstein, with his early ab-notation, envisaged an account of first-order logic that makes it possible to identify logical forms by ideal symbols that serve as identity criteria for single non-redundant conditions of truth and falsehood of formalized propositions. Instead of enumerating an infinite number of possibly infinitely complex models and counter-models in first-order logic, ideal symbols provide a finite description of the structure of possibly infinitely complex conditions of truth and falsehood. I will define logical forms and criteria of adequate formalization within this framework. Furthermore, I will show how to solve (i) termination problems of the application of criteria of adequate formalization, (ii) the trivialization problem of adequate formalization, (iii) the problem of the uniqueness of logical form, and (iv) the problem of a mechanical and comprehensible verbalization of truth conditions. All in all, I will argue that a theory of formalization based on the Tractarian view provides a consistent and ambitious alternative that can be utilized for a systematic and partly algorithmic explanation of the conditions of truth and falsehood of ordinary propositions expressible within first-order logic.
References
Brun, G.: Die richtige Formel. Philosophische Probleme der logischen Formalisierung, Ontos, Frankfurt a.M., 2004.
Epstein, R.L.: Predicate Logic. The Semantic Foundations of Logic, Oxford University Press, Oxford, 1994.
Peregrin, J. and Svoboda, V.: Reflective Equilibrium and the Principles of Logical Analysis, Routledge, New York, 2017.
Sainsbury, M.: Logical Forms, 2nd edition, Blackwell, Oxford, 2001, 1st edition 1991.
11:30
Samuel Elgin (University of California San Diego, United States)
The Semantic Foundations of Philosophical Analysis
ABSTRACT. The subject of this paper is a targeted reading of sentences of the form ‘To be F is to be G,’ which philosophers often use to express analyses, and which have occupied a central role in the discipline since its inception. Examples that naturally lend themselves to this reading include:
1. To be morally right is to maximize utility.
2. To be human is to be a rational animal.
3. To be water is to be the chemical compound H2O.
4. To be even is to be a natural number divisible by two without remainder.
5. To be a béchamel is to be a roux with milk.
Sentences of this form have been employed since antiquity (as witnessed by 2). Throughout the ensuing history, proposed instances have been advanced and rejected for multitudinous reasons. On one understanding, this investigation thus has a long and rich history— perhaps as long and rich as any in philosophy. Nevertheless, explicit discussion of these sentences in their full generality is relatively recent. Recent advances in hyperintensional logic provide the necessary resources to analyze these sentences perspicuously—to provide an analysis of analysis.
A bit loosely, I claim that these sentences are true just in case that which makes it the case that something is F also makes it the case that it is G and vice versa. There is a great deal to say about what I mean by ‘makes it the case that.’ In some ways, this paper can be read as an explication of that phrase. Rather than understanding it modally (along the lines of ‘To be F is to be G’ is true just in case the fact that something is F necessitates that it is G and vice versa), I employ truth-maker semantics: an approach that identifies the meanings of sentences with the finely-grained states of the world exactly responsible for their truth-values.
This paper is structured as follows. I articulate the targeted reading of `To be F is to be G’ I address, before discussing developments in truth-maker semantics. I then provide the details of my account and demonstrate that it has the logical and modal features that it ought to. It is transitive, reflexive and symmetric, and has the resources to distinguish between the meanings of predicates with necessarily identical extensions (sentences of the form ‘To be F is to be both F and G or not G’ are typically false); further, if a sentence of the form ‘To be F is to be G’ is true then it is necessarily true, and necessary that all and only Fs are Gs. I integrate this account with the λ-calculus—the predominant method of formalizing logically complex predicates—and argue that analysis is preserved through β-conversion. I provide two methods for expanding this account to address analyses employing proper names, and conclude by defining an irreflexive and asymmetric notion of analysis in terms of the reflexive and symmetric notion.
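As a purely illustrative aside (the predicate is invented, not the author's own example), β-conversion of a logically complex predicate applied to a term takes the following familiar shape; the claim that analysis is preserved under β-conversion then says, roughly, that replacing a predicate with a β-equivalent λ-term does not disturb a true analysis.

```latex
% An illustrative (hypothetical) logically complex predicate written as a
% lambda-term and simplified by beta-conversion:
\[
  \big(\lambda x.\, Fx \wedge (Gx \vee \neg Gx)\big)(a)
  \;\rightarrow_{\beta}\;
  Fa \wedge (Ga \vee \neg Ga)
\]
```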
ABSTRACT. Although Łukasiewicz was the first proponent of Husserl's anti-psychologism in the Lvov-Warsaw School, his later concept of anti-psychologism has some features that are incompatible with Husserl's concept. In his famous book, Husserl characterized anti-psychologism as the view that the laws of logic are not laws of psychology and that consequently logic is not a part of psychology. The distinction between logic and psychology is based on the difference between axiomatic and empirical sciences. Logic is an axiomatic science and its laws are settled, whereas psychology is an empirical science and its laws derive from experience. The laws of logic are settled in an ideal world and as such are independent of experience and apodictic.
Łukasiewicz supported Husserl's views in a short paper, “Teza Husserla o stosunku logiki do psychologii”, which appeared in 1904, and later also in his famous paper “Logic and Psychology”, published in 1910. At the same time, however, Łukasiewicz began to question the claim that the laws of logic are settled, which was an essential part of anti-psychologism for Husserl. Łukasiewicz questioned the law of contradiction in his book On Aristotle's Law of Contradiction and completed his denial of the unchangeability of the laws of logic with the introduction of his systems of many-valued logic. His first system, the three-valued logic, is clearly based on the denial of the law of bivalence.
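For orientation, the standard presentation of Łukasiewicz's three-valued connectives (not part of the abstract itself) makes the point about bivalence explicit:

```latex
% Standard truth conditions for Lukasiewicz's three-valued connectives, with
% values 1 (true), 1/2 (indeterminate), 0 (false); given here for orientation only.
\[
  \neg p = 1 - p, \qquad
  p \rightarrow q = \min(1,\ 1 - p + q), \qquad
  p \vee q = \max(p, q).
\]
% For p = 1/2 both p and \neg p receive the value 1/2, so bivalence fails and
% p \vee \neg p is no longer a tautology.
```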
In his later works, Łukasiewicz also questioned the distinction between axiomatic and empirical sciences and the truthfulness of apodictic statements, which was another important component of Husserl's anti-psychologistic argumentation. Nonetheless, in these later works he still claimed that psychologism is undesirable in logic. As he did not hold certain features of anti-psychologism that were essential to the concept of Husserl that he had adopted at first, it seems that his own concept of anti-psychologism differed. Frege was the most prominent representative of anti-psychologism of that time, and he was also appreciated by Łukasiewicz. It seems, however, that Łukasiewicz was also inspired by other logicians from the history of logic, such as Aristotle and certain medieval logicians. The aim of my talk is to provide a definition of Łukasiewicz's anti-psychologism.
References:
Husserl E (2009) Logická zkoumání I: Prolegomena k čisté logice. Montagová KS, Karfík F (trans.) OIKOIMENH, Prague.
Łukasiewicz JL (1904) Teza Husserla o stosunku logiki do psychologii. Przegląd Filozoficzny 7: 476–477.
Łukasiewicz JL (1910) Logika a psychologia. Przegląd Filozoficzny 10: 489–491.
Łukasiewicz JL (1957) Aristotle’s Syllogistic: From the Standpoint of Modern Formal Logic. 2nd edition. Clarendon Press, Oxford.
Łukasiewicz JL (1961) O determinizmie. In: Łukasiewicz JL., Z zagadnień logiki i filozofii: Pisma wybrane. Słupecki J (ed.), Państwowe wydawnictwo naukowe. Warsaw, 114–126.
Łukasiewicz JL (1987) O zasadzie sprzeczności u Arystotelesa, Studium krytyczne. Woleński J (ed.), Państwowe Wydawnictwo Naukowe, Warsaw.
Woleński J (1988) Wstep. In: Łukasiewicz JL., Sylogistyka Arystotelesa z punktu widzenia współczesnej logiki formalnej. Państwowe Wydawnictwo Naukowe, Warsaw, IX–XXIII.
Surma P (2012) Poglądy filozoficzne Jana Łukasiewicza a logiki wielowartościowe. Semper, Warsaw.
Organizers: Carolin Antos, Deborah Kant and Deniz Sarikaya
Text is a crucial medium to transfer mathematical ideas, agendas and results among the scientific community and in educational context. This makes the focus on mathematical texts
a natural and important part of the philosophical study of mathematics. Moreover, it opens up the possibility to apply a huge corpus of knowledge available from the study of texts in other disciplines to problems in the philosophy of mathematics.
This symposium aims to bring together and build bridges between researchers from different methodological backgrounds to tackle questions concerning the philosophy of mathematics. This includes approaches from philosophical analysis, linguistics (e.g., corpus studies) and literature studies, but also methods from computer science (e.g., big data approaches and natural language processing), artificial intelligence, cognitive sciences and mathematics education. (cf. Fisseni et al. to appear; Giaquinto 2007; Mancosu et al. 2005; Schlimm 2008; Pease et al. 2013).
The right understanding of mathematical texts might also become crucial due to rapid progress in natural language processing on the one side and automated theorem proving on the other. Mathematics, as a technical jargon or as a natural language with a rather rich structure and semantic labeling (via LaTeX), is from the other perspective an important test case for the practical and theoretical study of language.
Hereby we understand text in a broad sense, including informal communication, textbooks and research articles.
ABSTRACT. We will discuss how different layers of interpretation of a mathematical text are useful at different stages of analysis and in different contexts. To achieve this goal we will rely on tools from formal linguistics and artificial intelligence which, among other things, allow one to make explicit in the formal representation information that is implicit in the textual form. In this way, we wish to contribute to an understanding of the relationship between the formalist and the textualist position in the investigation of mathematical proofs.
Proofs are generally communicated in texts (as strings of symbols) and are modelled logically as deductions, e.g. as sequences of first-order formulas fulfilling specified syntactic rules. We propose to bridge the gap between these two representations by combining two methods: first, Proof Representation Structures (PRSs), which are an extension of Discourse Representation Structures (see Geurts, Beaver, & Maier, 2016); secondly, frames as developed in Artificial Intelligence and linguistics.
PRSs (Cramer, 2013) were designed in the Naproche project to formally represent the structure and meaning of mathematical proof texts, capturing typical structural building blocks like definitions, lemmas, theorems and proofs, but also the hierarchical relations between propositions in a proof. PRSs distinguish proof steps, whose logical validity needs to be checked, from sentences with other functions, e.g. definitions, assumptions and notational comments. On the (syntacto-)semantic level, PRSs extend the dynamic quantification of DRSs to more complex symbolic expressions; they also represent how definitions introduce new symbols and expressions.
Minsky (1974) introduces frames as a general “data-structure for representing a stereotyped situation”. ‘Situation’ should not be understood too narrowly, as frames can be used to model concepts in the widest sense. The FrameNet project prominently applies frames to represent the semantics of verbs. For example, “John sold his car. The price was € 200.” is interpreted as meaning that the second sentence anaphorically refers to the `price` slot of `sell`, which is not explicitly mentioned in the first sentence.
In the context of mathematical texts, we use frames to model what is expected of proofs in general and specific types of proofs. In this talk, we will focus on frames for inductive proofs and their interaction with other frames. An example of the interaction of different proof frames is the dependence of the form of an induction on the underlying inductive type, so that different features of the type (the base element and the recursive construction[s]) constitute natural candidates for the elements of the induction (base case and induction steps).
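A rough sketch of what such an induction frame might look like as a data structure is given below; the slot names are invented for illustration and are not taken from the Naproche project or from the talk.

```python
# A hypothetical sketch of a frame for induction proofs, in the spirit of
# Minsky-style frames with slots; all names are invented for illustration.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class InductionFrame:
    """Stereotyped structure a reader expects of a proof by induction."""
    inductive_type: str                      # e.g. "natural numbers", "finite lists"
    base_elements: List[str]                 # base case(s), fixed by the type
    recursive_constructions: List[str]       # determine the induction step(s)
    induction_hypothesis: Optional[str] = None
    filled_steps: List[str] = field(default_factory=list)

    def expected_cases(self) -> List[str]:
        # The type's base elements and constructors dictate the proof's cases.
        return ([f"base case for {b}" for b in self.base_elements] +
                [f"induction step for {c}" for c in self.recursive_constructions])

frame = InductionFrame(
    inductive_type="natural numbers",
    base_elements=["0"],
    recursive_constructions=["successor n+1"],
)
print(frame.expected_cases())
```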
The talk will show how to relate the two levels (PRSs and frames), and will sketch how getting from the text to a fully formal representation (and back) is facilitated by using both levels.
Cramer, M. (2013). Proof-checking mathematical texts in controlled natural language (PhD thesis). Rheinische Friedrich-Wilhelms-Universität Bonn.
Geurts, B., Beaver, D. I., & Maier, E. (2016). Discourse Representation Theory. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Spring 2016). Metaphysics Research Lab, Stanford University.
Minsky, M. (1974). A Framework for Representing Knowledge. Cambridge, MA, USA: Massachusetts Institute of Technology.
14:30
Bernhard Fisseni (Leibniz-Institut für Deutsche Sprache, Universität Duisburg-Essen, Germany)
Perspectives on Proofs
ABSTRACT. In this talk, we want to illustrate how to apply a general concept of perspectives to mathematical proofs, considering the dichotomy of formal proofs and textual presentation as two perspectives on the same proof.
We take *perspective* to be a very general notion that applies to spatial representation, but also to phenomena in natural language syntax known as *perspectivation* and related to diathesis (grammatical voice) or to semantically partially overlapping verbs such as *sell*, *buy*, *trade*; to phenomena in natural language semantics (e.g., prototype effects); and to narrative texts (Schmid, 2010, distinguishes the perspective of characters or narrators in six dimensions, from perception to language). In most applications of the concept of perspective, a central question is how to construct a superordinate ‘meta’-perspective that accommodates the given perspectives while maintaining complementary information.
Perspectival phenomena intuitively have in common that different perspectives share some information and are partially ‘intertranslatable’ or can be seen as projections from a more complete and more fine-grained metaperspective to less informative or more coarse perspectives.
In our approach, modelling is done bottom-up, starting from specific instances. We advocate a formal framework for the representation of perspectives as frames, using feature structures, a data structure well known in linguistics. With feature structures, it becomes easier to model the interaction of frames and to approach compositionality, and the framework connects to formal models of (unification-based) linguistic grammar like Berkeley Construction Grammar (cf., e.g., Boas & Sag, 2012), but also to recent work on frame semantics (see, e.g., Gamerschlag, Gerland, Osswald, & Petersen, 2015). Metaperspectives are constructed using decomposition of features and types into finer structures (see Fisseni, forthcoming), organized in the inheritance hierarchies typical of feature structure models (see, e.g., Carpenter, 1992; Pollard & Sag, 1994). Using this formal model of perspectives, it can be shown that occasionally, e.g. in the case of metaphors, *partial* perspectives are used, i.e. perspectives contain semantic material that is to be disregarded, for instance when notions of semantic verb classes are split into different properties like *involving an agent* or *most prominent participant*.
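As a minimal illustration of the kind of unification-based machinery alluded to here, the following sketch unifies two toy feature structures represented as nested dictionaries; the features and values are hypothetical and not the authors' own.

```python
# A minimal sketch of feature-structure unification over nested dictionaries;
# the features shown are hypothetical, not taken from the talk.

def unify(fs1, fs2):
    """Unify two feature structures; raise ValueError on a clash."""
    if isinstance(fs1, dict) and isinstance(fs2, dict):
        result = dict(fs1)
        for feature, value in fs2.items():
            result[feature] = unify(result[feature], value) if feature in result else value
        return result
    if fs1 == fs2:
        return fs1
    raise ValueError(f"clash: {fs1!r} vs {fs2!r}")

# Two perspectives on 'the same' event, sharing a core and adding complementary detail.
buy_view = {"event": "transfer", "buyer": "John", "goods": "car"}
sell_view = {"event": "transfer", "seller": "Mary", "price": "200"}
print(unify(buy_view, sell_view))   # a crude 'metaperspective' combining both
```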
Similar to syntactic perspectivation (active – passive, *buy* – *sell*), where the same event can be conceptualized differently (e.g., as an action or as a process), mathematical texts and formal proofs can be seen as describing ‘the same proof’ as a process and a state of affairs, respectively. The talk will show how to elaborate this analogy, and will discuss the construction of a metaperspective, i.e. merging both perspectives in such a way that their common core is distilled.
References
----------
Boas, H. C., & Sag, I. A. (Eds.). (2012). *Sign-based construction grammar*. Stanford: CSLI.
Carpenter, B. (1992). *The logic of typed feature structures*. Cambridge University Press.
Fisseni, B. (forthcoming). Zwischen Perspektiven. In *Akten des 52. Linguistischen Kolloquiums, Erlangen*.
Gamerschlag, T., Gerland, D., Osswald, R., & Petersen, W. (Eds.). (2015). *Meaning, frames, and conceptual representation. Studies in language and cognition*. Düsseldorf: Düsseldorf University Press.
Pollard, C., & Sag, I. (1994). *Head driven phrase structure grammar*. University of Chicago Press.
Schmid, W. (2010). *Narratology. An introduction.* Berlin: de Gruyter.
Constructive deliberation: pooling and stretching modalities
ABSTRACT. When a group of agents deliberates about a course of action or decision, each of the individual agents has distinct (soft or hard) constraints on what counts as a feasible alternative, evidence about potential alternatives, and higher-order evidence about the other agents’ views and constraints. Such information may be to some extent shared, but it may also be conflicting, either at the level of a single individual, or among the agents. In one way or another, sharing and combining this information should allow the group to determine which set of alternatives constitutes the decision problem faced by the collective. We call this process constructive deliberation, and contrast it with the selective deliberation that takes place when a set of alternatives has been fixed and the group is supposed to select one of them by means of some decision method such as voting. Whereas selective deliberation has been investigated at length (in social choice theory and game theory), constructive deliberation has received much less attention, and there is hardly any formal account of it on the market. In the first part of our talk, we will investigate this distinction, and discuss the similarities and differences between both processes as they bear on formal modeling and considerations of rationality and equality.
In the second part, we will focus on the static aspect of constructive deliberation and on the role of constraints. We will hence ask how the output, a set of viable alternatives constituting a collective decision problem, can be obtained from a given input: a tuple of sets of constraints, one for each agent. We model this input in terms of a neighborhood semantics, and show how the output can be obtained by suitable combinations of two types of operations on neighborhoods: pooling (also known as aggregation or pointwise intersection) and stretching (also known as weakening or closure under supersets). We provide a sound and complete logic that can express the result of various such combinations and investigate its expressive power, building on earlier results by Van De Putte and Klein (2018). If time permits, we will also connect this work to the logic of evidence-based belief (van Benthem & Pacuit, 2011; Özgün et al., 2016) and the logic of coalitional ability (Pauly 2002).
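To fix intuitions, here is a toy sketch of the two neighbourhood operations on a three-state model; the state space and the agents' constraints are invented for illustration and are not taken from the paper.

```python
# A toy sketch of pooling (pointwise intersection) and stretching (closure under
# supersets) on neighbourhoods; all states and constraints are hypothetical.
from itertools import combinations

STATES = frozenset({"w1", "w2", "w3"})

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def pool(n1, n2):
    """Pooling: pointwise intersection of two neighbourhood sets."""
    return {x & y for x in n1 for y in n2}

def stretch(n):
    """Stretching: closure under supersets (within STATES)."""
    return {y for y in powerset(STATES) for x in n if x <= y}

# Each agent's neighbourhood: the sets of states compatible with its constraints.
agent_1 = {frozenset({"w1", "w2"})}
agent_2 = {frozenset({"w2", "w3"})}

pooled = pool(agent_1, agent_2)   # {{'w2'}}: alternatives both agents can live with
print(pooled)
print(stretch(pooled))            # every weakening of that pooled constraint
```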
References:
Baltag, A., Bezhanishvili, N., Özgün, A., & Smets, S. J. L. Justified Belief and the Topology of Evidence. In J. Väänänen, Å. Hirvonen, & R. de Queiroz (Eds.), Logic, Language, Information, and Computation: 23rd International Workshop, WoLLIC 2016: Puebla, Mexico, August 16–19th, 2016: proceedings (pp. 83-103).
Pauly, M., A modal logic for coalitional power in games, Journal of Logic and Computation 1 (2002), pp. 149-166.
van Benthem, J. and E. Pacuit, Dynamic logics of evidence-based beliefs, Studia Logica 99 (2011), pp. 61-92.
Van De Putte, F. and Klein, D. Pointwise intersection in neighbourhood modal logic. In Bezhanishvili, Guram and D'Agostino, Giovanna (eds.), Advances in Modal Logic (AiML 12), College Publications (2018), pp. 591-610.
14:30
Fengkui Ju (School of Philosophy, Beijing Normal University, China)
Coalitional Logic on Non-interfering Actions
ABSTRACT. Suppose that there is a group of agents who perform actions, and the world changes as a result. Assume that they change different parts of the world and these parts do not overlap. Under this assumption, their actions do not interfere with each other. Then the class of possible outcomes of a joint action is the intersection of the classes of possible outcomes of the individual actions in that joint action. This property can be called \emph{the intersection property}.
A special case of the previous assumption is that every agent controls a set of atomic propositions and these sets are disjoint. This is a basic setting in \cite{HoekWooldridge05}.
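A toy illustration of the intersection property under this disjoint-control assumption follows; the atoms and actions are invented for the example and are not drawn from the paper.

```python
# A toy illustration of the intersection property under the disjoint-control
# assumption: each agent fixes its own atomic propositions, so the outcomes of
# a joint action are exactly the intersection of the individual outcome sets.
from itertools import product

ATOMS = ["p", "q"]          # agent 1 controls p, agent 2 controls q (disjoint)
STATES = [dict(zip(ATOMS, values)) for values in product([True, False], repeat=len(ATOMS))]

def outcomes_of(action):
    """States compatible with an individual action, i.e. with the values it fixes."""
    return [s for s in STATES if all(s[atom] == value for atom, value in action.items())]

action_1 = {"p": True}      # agent 1 sets p to true, leaves q open
action_2 = {"q": False}     # agent 2 sets q to false, leaves p open

joint = {**action_1, **action_2}
lhs = outcomes_of(joint)
rhs = [s for s in outcomes_of(action_1) if s in outcomes_of(action_2)]
print(lhs == rhs)           # True: the intersection property holds
```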
In Coalition Logic ($\mathsf{CL}$), proposed by \cite{Pauly02}, the class of possible outcomes of an individual action consists of the possible outcomes of the joint actions that are extensions of that individual action, and the possible outcomes of joint actions are arbitrary. As a result, the intersection property is not met.
The STIT logic proposed by \cite{Horty01} has the intersection property. However, it requires that the classes of possible outcomes of the individual actions of the same agent are disjoint. This constraint is too strong.
This work presents a complete coalitional logic $\mathsf{NiCL}$ on non-interfering actions.
References
[1] J. Horty. Agency and Deontic Logic. Oxford University Press, 2001.
[2] M. Pauly. A modal logic for coalitional power in games. Journal of Logic and Computation, 12(1):149-166, 2002.
[3] W. van der Hoek and M. Wooldridge. On the logic of cooperation and propositional control. Artificial Intelligence, 164(1):81-119, 2005.
Popper and Modern Cosmology: His Views and His Influence
ABSTRACT. Karl Popper commented on modern cosmology only on a few occasions, and then only in general terms. His only paper on the subject dates from 1940. Nonetheless, his philosophy of science played a most important role in the epic cosmological controversy that raged from 1948 to about 1965, in which the new steady-state theory confronted the evolutionary cosmology based on Einstein’s general theory of relativity. The impact of Popper’s philosophical views and of his demarcation criterion in particular is still highly visible in the current debate concerning the so-called multiverse hypothesis. In astronomy and cosmology, as in the physical sciences generally, Popper’s views of science – or what scientists take to be his views – have had much greater impact than the ideas of other philosophers. The paper analyses the interaction between Popper’s philosophy of science and developments in physical cosmology in the post-World War II era. There are two separate aspects of the analysis. One is to elucidate how Popper’s philosophical ideas have influenced scientists’ conceptions of the universe as a whole. The other aspect is to investigate Popper’s own views about scientific cosmology, a subject he never dealt with at any length in his publications. These views, as pieced together from published as well as unpublished sources, changed somewhat over time. While he had some sympathy for the now defunct steady-state theory, he never endorsed it in public and there were elements in it which he criticized. He much disliked the big bang theory, which since the mid-1960s has been the generally accepted framework for cosmology. According to Popper, the concept of a big bang as the beginning of the universe did not belong to science proper. Generally he seems to have considered cosmology a somewhat immature science.
ABSTRACT. As Helge Kragh notes, Karl Popper never engaged scientific cosmology to the degree that he did with other physical sciences, but his dislike of the big bang theory can be shown to be consistent with his own philosophical views. We are, therefore, in a position to examine contemporary cosmological theories through a Popperian methodology, and a criticism of the current cosmological paradigm can be based on his falsificationist ideas.
One of Kragh’s insights is that Popper’s demarcation criterion has had a lasting impact on the development of scientific cosmology. In this paper I will defend the view that a methodological analysis of contemporary cosmological models that is in line with Popper’s demarcation criterion between scientific and non-scientific cosmology can greatly benefit from the use of formal methods. For example, formal methodological criteria can help us answer the question of whether physical cosmology should be considered, as Popper considered it, an immature science. The application of these formal criteria will reveal that there are two contrasting approaches in cosmology, one of which is compatible, and the other incompatible, with Popper’s methodological views.
In practical terms, the difference between these approaches is that in the former the focus is on studying small scale phenomena (e.g. galaxies, clusters), and trying to build models that are successful at making novel predictions at these scales. In the latter approach the primary attempt is to form a model of the universe as a whole and then work our way toward smaller scales.
Both of these approaches face difficulties in explaining some of the available data, and disagreements between their proponents have led to a surging interest in foundational methodological questions among cosmologists themselves.
CANCELLED: Teaching Conceptual Change: Can Building Models Explain Conceptual Change in Science?
ABSTRACT. This paper considers how novel scientific concepts (concepts which undergo a radical conceptual change) relate to their models. I present and discuss two issues raised, respectively, by Chin and Samarapungavan (2007) and by Nersessian (1989) about perceived (and persistent) difficulties in explaining conceptual change to students. In both cases models are either seen as secondary to concepts/conceptual change or seen as inessential for explanation. Next, I provide an example which to some extent counters these views. On the basis of that example I suggest an alternative view of the role of models in conceptual change and show that the latter could have beneficial implications for teaching conceptual change. The example in question is Robert Geroch’s modeling of Minkowski spacetime in General Relativity from A to B (1981).
It seems reasonable to think that understanding the conceptual transformation from space and time to spacetime first makes it easier to build a model of spacetime. This is the underlying assumption that Chin and Samarapungavan (2007) make. Their objective is to find ways to facilitate conceptual change because they see the lack of understanding of the conceptual change that produced a concept as the main obstacle to students’ ability to build a model of it. I argue that this is not necessarily the case: in certain cases (spacetime, for example) building the model can facilitate understanding of the conceptual change.
In a similar vein, although understanding how scientific concepts developed can often give clues about how to teach them, I argue that in some cases the historical approach is counterproductive. Nersessian argues that the same kind of reasoning used in scientific discovery could be employed in science education (Nersessian, 1989). I essentially agree with this view, but with a caveat. I argue that in some cases the historical approach might be constraining, and in particular that the spacetime example shows that ignoring the historical path is in certain cases more successful.
Additionally, Geroch’s way of modeling spacetime can be of consequence for teaching relativity and quantum mechanics to high school students. Physics is traditionally taught through solving equations and performing experiments, which is ill suited for relativity and quantum mechanics. Norwegian curriculum requirements include that students be able to give qualitative explanations as well as discuss philosophical and epistemological aspects of physics. According to ReleQuant (the University of Oslo and NTNU project on developing alternative learning resources for teaching relativity and quantum mechanics to high school students), this opens the door to introducing qualitative methods in teaching high school physics. The conclusion that ReleQuant draws from this is that historical approaches may be profitable when teaching quantum physics at the high school level.
The historical approach might not always be effective – as it is not in teaching spacetime. Teaching through building a model “from scratch” might work better. Building a model with little or no reference to theory could be viewed as a qualitative method and would essentially be in agreement with the overall ambition of the ReleQuant project.
References
Bungum, Berit, Ellen K. Henriksen, Carl Angell, Catherine W. Tellefsen, and Maria V. Bøe. 2015. “ReleQuant – Improving teaching and learning in quantum physics through educational design research”. Nordina: Nordic studies in science education 11(2): 153 - 168
Chin, C. and Samarapungavan, A. 2007. “Inquiry: Learning to Use Data, Models and Explanations”. In Teaching Scientific Inquiry (eds) Richard Duschl and Richard Grandy, 191 – 225. Sense Publishers
Geroch, R. 1981. General Relativity from A to B. Chicago: Chicago University Press
Nersessian, N. 1989. “Conceptual change in science and in science education”. Synthese 80: 163-183
Julia Bursten (University of Kentucky, United States)
Scale Separation, Scale Dependence, and Multiscale Modeling in the Physical Sciences
ABSTRACT. In multi-scale modeling of physical systems, dynamical models of higher-scale and lower-scale behavior are developed independently and stitched together with connective or coupling algorithms, sometimes referred to as “handshakes.” This can only be accomplished by first separating modeled behaviors into bulk behaviors and surface or interfacial behaviors. This strategy is known as “scale separation,” and it requires physical behaviors at multiple length, time, or energy scales to be treated as autonomous from one another.
In this talk, I examine what makes this strategy effective—and what happens when it breaks down. The nanoscale poses challenges to scale separation: there, the physics of the bulk occurs at the same length scale as the physics of the surface. Common scale-separation techniques, e.g. modeling surfaces as boundary conditions, fail. Modeling the scale-dependent physics of nanoscale materials presents a new challenge whose solution requires conceptual engineering and new modeling infrastructure. These considerations suggest a view of physical modeling that is centered not around idealization or representation but around scale.
Ravit Dotan (University of California, Berkeley, United States)
Machine learning, theory choice, and non-epistemic values
ABSTRACT. I argue that non-epistemic values are essential to theory choice, using a theorem from machine learning theory called the No Free Lunch theorem (NFL).
Much of the current discussion about the influence of non-epistemic values on empirical reasoning is concerned with illustrating how it happens in practice. Often, the examples used to illustrate the claims are drawn from politically loaded or practical areas of science, such as social science, biology, and environmental studies. This leaves advocates of the claim that non-epistemic values are essential to assessments of hypotheses vulnerable to two objections. First, if non-epistemic factors happen to influence science only in specific cases, perhaps this only shows that scientists are sometimes imperfect; it doesn’t seem to show that non-epistemic values are essential to science itself. Second, if the specific cases involve sciences with obvious practical or political implications such as social science or environmental studies, then one might object that non-epistemic values are only significant in practical or politically loaded areas and are irrelevant in more theoretical areas.
To the extent that machine learning is an attempt to formalize inductive reasoning, results from machine learning are general. They apply to all areas of science, and, beyond that, to all areas of inductive reasoning. The NFL is an impossibility theorem that applies to all learning algorithms. I argue that it supports the view that all principled ways to conduct theory choice involve non-epistemic values. If my argument holds, then it helps to defend the view that non-epistemic values are essential to inductive reasoning from the objections mentioned in the previous paragraph. That is, my argument is meant to show that the influence of non-epistemic values on assessment of hypotheses is: (a) not (solely) due to psychological inclinations of human reasoners; and (b) not special to practical or politically loaded areas of research, but rather is a general and essential characteristic for all empirical disciplines and all areas of inductive reasoning.
In broad strokes, my argument is as follows. I understand epistemic virtues to be theoretical characteristics that are valued because they promote epistemic goals (for this reason, the epistemic virtues are sometimes just called “epistemic values”). For example, if simpler theories are more likely to satisfy our epistemic goals, then simplicity is epistemically valuable and is an epistemic virtue. I focus on one aspect of the evaluation of hypotheses – accuracy – and I interpret accuracy as average expected error. I argue that the NFL shows that all hypotheses have the same average expected error if we are unwilling to make choices based on non-epistemic values. Therefore, if our epistemic goal is promoting accuracy in this sense, there are no epistemic virtues. Epistemic virtues promote our epistemic goals, but if we are not willing to make non-epistemic choices, all hypotheses are equally accurate. In other words, no theoretical characteristic is such that hypotheses which have it satisfy our epistemic goal better. Therefore, any ranking of hypotheses will depend on non-epistemic virtues.
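For a concrete feel for the NFL-style result the argument draws on, here is a small finite illustration, assuming a four-point domain and Boolean labels; it is a toy rendering of the general theorem, not the paper's own formulation.

```python
# A small finite illustration of the No Free Lunch idea: averaged over all
# possible target functions, two very different learners have the same
# off-training-set error. Domain, learners, and labels are hypothetical.
from itertools import product

DOMAIN = list(range(4))
TRAIN_INPUTS = [0, 1]                                # inputs seen during training
TEST = [x for x in DOMAIN if x not in TRAIN_INPUTS]  # off-training-set inputs

def learner_constant_true(train):    # always predicts True off the training set
    return lambda x: True

def learner_copy_majority(train):    # predicts the majority training label
    majority = sum(train.values()) >= len(train) / 2
    return lambda x: majority

def average_test_error(learner):
    errors = []
    for labels in product([True, False], repeat=len(DOMAIN)):  # every target function
        target = dict(zip(DOMAIN, labels))
        train = {x: target[x] for x in TRAIN_INPUTS}
        h = learner(train)
        errors.append(sum(h(x) != target[x] for x in TEST) / len(TEST))
    return sum(errors) / len(errors)

print(average_test_error(learner_constant_true))   # 0.5
print(average_test_error(learner_copy_majority))   # 0.5 as well
```

Because the test labels vary uniformly over all targets regardless of what was seen in training, any learner averages the same error; ranking learners therefore requires assumptions that go beyond the data, which is the hook for the non-epistemic-values argument above.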
Taking a machine at its word: Applying epistemology of testimony to the evaluation of claims by artificial speakers
ABSTRACT. Despite the central role technology plays in the production, mediation, and communication of information, formal epistemology regarding the influence of emerging technologies on our acquisition of knowledge and justification of beliefs is sparse (Miller & Record 2013), and with only a couple of exceptions (Humphreys 2009; Tollefsen 2009) there has been almost no attempt to directly apply the epistemology of testimony to analyze artifacts-as-speakers (Carter & Nickel 2014). This lacuna needs to be filled. Epistemology of testimony is concerned with identifying the conditions under which a hearer may be justified in trusting and forming beliefs based on a speaker's claims. Similarly, philosophers of technology and computer scientists alike are urgently pushing to ensure that new technologies are sufficiently explainable and intelligible to appropriately ground user understanding and trust (Tomsett et al. 2018; Weller 2017). Given the convergent goals of epistemologists and philosophers of technology, the application of the epistemology of testimony to the evaluation of artifact speakers may be incredibly productive. However, we must first determine whether an artifact may legitimately hold the role of ‘speaker’ in a testimonial relationship. Most epistemologists assume that testimonial speakers are intentional, autonomous agents, and methods for evaluating the testimonial claims of such agents have developed accordingly, making technology difficult to slot into the conversation.
In this paper I demonstrate that epistemology of testimony may be applied to analyze the production and transmission of knowledge by artificial sources. Drawing on Gelfert (2014) I first argue, independently of my goal to apply testimony to technology, that our current philosophical conception of testimony is ill-defined. I then differentiate between the theoretical and pragmatic aims of epistemology of testimony and argue that the pragmatic aim of epistemology of testimony is to provide tools for the evaluation of speaker claims. I explicate a more precise 'continuum view' of testimony that serves this pragmatic aim, and conclude by describing how the explicated continuum view may be usefully and appropriately applied to the evaluation of testimony from artificial speakers.
Bibliography:
Carter, A. J., & Nickel, P. J. (2014). On testimony and transmission. Episteme, 11(2), 145-155. doi:10.1017/epi.2014.4
Gelfert, A. (2014). A Critical Introduction to Testimony. London: Bloomsbury.
Humphreys, P. (2009). Network Epistemology. Episteme, 6(2), 221-229.
Miller, B., & Record, I. (2013). Justified belief in a digital age: On the epistemic implication of secret internet technologies. Episteme, 10(2), 117-134. doi:10.1017/epi.2013.11
Tollefsen, D. P. (2009). Wikipedia and the Epistemology of Testimony. Episteme, 6(2), 8-24.
Tomsett, R., Braines, D., Harborne, D., Preece, A., & Chakraborty, S. (2018). Interpretable to whom? A role-based model for analyzing interpretable machine learning systems. Paper presented at the 2018 ICML Workshop on Human Interpretability in Machine Learning (WHI 2018).
Weller, A. (2017). Challenges for transparency. Paper presented at the ICML Workshop on Human Interpretability in Machine Learning, Syndey, NSW, Australia.
Holger Andreas (The University of British Columbia, Canada)
Explanatory Conditionals
ABSTRACT. The present paper aims to complement causal model approaches to causal explanation by Woodward (2003), Halpern and Pearl (2005), and Strevens (2004). It does so by carrying on a conditional analysis of the word ‘because’ in natural language by Andreas and Günther (2018). This analysis centres on a strengthened Ramsey Test of conditionals:
α ≫ γ iff, after suspending judgment about α and γ, an agent can infer γ from the supposition of α (in the context of further beliefs in the background).
Using this conditional, we can give a logical analysis of because:
Because α,γ (relative to K) iff α≫γ ∈ K and α,γ ∈ K
where K designates the belief set of the agent. In what follows, we shall refine this analysis by further conditions so as to yield a fully-fledged analysis of (deterministic) causal explanations. The logical foundations of the belief changes that define the conditional ≫ are explicated using AGM-style belief revision theory.
Why do we think that causal model approaches to causal explanation are incomplete? Halpern and Pearl (2005) have devised a precise semantics of causal models that centres on structural equations. Such an equation represents causal dependencies between variables in a causal model. In the corresponding definition of causation, however, there is no explanation of what it is for a variable to causally depend directly on certain other variables. This approach merely defines complex causal relations in terms of elementary causal dependencies, just as truth-conditional semantics defines the semantic values of complex sentences in terms of a truth-value assignment to the atomic formulas. And the corresponding account of causal explanation in Halpern and Pearl (2005) inherits the reliance on elementary causal dependencies (which are assumed to be antecedently given) from the analysis of causation. Woodward (2003) explains the notion of a direct cause in terms of interventions, but the notion of an intervention is always relative to a causal graph so that some knowledge about elementary causal dependencies must be antecedently given as well.
The kairetic account of explanation by Strevens (2004) makes essential use of causal models as well, but works with a more liberal notion of such a model. In his account, a set of propositions entails an explanandum E in a causal model only if this entailment corresponds to a “real causal process by which E is causally produced” (2004, p. 165). But the kairetic account is conceptually incomplete in a manner akin to the approaches by Halpern and Pearl (2005) and Woodward (2003). For it leaves open what the distinctive properties of causal relations of logical entailment are. In what follows, we aim to give a precise characterization of logical entailment with a causal meaning. For this characterization, we define an explanatory conditional ≫, but also impose non-logical conditions on the explanans and the explanandum. Here is a semiformal exposition of our final analysis:
Definition 1. Causal Explanation. Let S be an epistemic state that is represented by a prioritised belief base. K(S) is the set of beliefs of S, extended by the strengthened Ramsey Test. The set A of antecedent conditions and the set G of generalisations explain the fact F - for an epistemic state S - iff
(E1) For all α ∈ A, all γ ∈ G, and all β ∈ F: α,γ,β ∈ K(S).
(E2) For all non-empty A′ ⊆ A, A′≫F ∈ K(S).
(E3) For any α ∈ A and any β ∈ F, (i) the event designated by α temporally precedes the event designated by β, or (ii) the concepts of α are higher up in the hierarchy of theoreticity of S than the concepts of β.
(E4) For any γ ∈ G, γ is non-redundant in the set of all generalisations of S.
Mustafa Yavuz (Istanbul Medeniyet University, History of Science Department, Turkey)
Definition and Faculties of Life in Medieval Islamic Philosophy
ABSTRACT. Unlike death, it has always been problematic to give a concrete definition of life or to identify its principle, even though life itself is the main difference between biological organisms and lifeless (or inanimate) things. At first sight, one can count forty-three books dedicated to discussions of the origin or definition of life between the eighteenth and the twentieth centuries, a quarter of which have been published in this millennium. The increase in these numbers indicates that the debate over the puzzle of life has remained popular among scientists and philosophers. What was the situation in medieval times? Did philosophers and physicians in the medieval Islamic world compose books that give a definition of life and its faculties?
In this study, after giving a few definitions of life (and of death) from recent scientific literature, I will go back to the medieval period in order to investigate how life and its faculties were considered in a fifteenth-century book of kalam. Composed by Sayyid al-Sharif al-Jurjani (d. 1413), who was regarded as an authority in the Ottoman Empire, Sharh al-Mawaqif was frequently copied, read, and commented upon, which shows its popularity among Ottoman philosophers and theologians. This book is an explanation of al-Mawaqif fi Ilm al-Kalam, written by Adud al-Din al-Idji (d. 1355). I will also draw on citations from Ibn Sina (d. 1037), known as Avicenna, and his famous book al-Qanun fi al-Tibb (Canon of Medicine), where he discussed the vegetal and animal types of life through the performance of certain actions. Taking Avicenna as the eminent representative of the Peripatetic tradition in Islamic philosophy and medicine, I will try to compare the different considerations of life and death in the kalamic and philosophical schools.
Starting from biology and shifting towards kalam and philosophy, I will try to show whether or not we can find philosophical instruments that may inspire us to solve the puzzle of life today.
References:
Al-Dinnawi, M. A, 1999. Ibn Sina al-Qanun fi al-Tibb (Arabic Edition). Dar al-Kotob al-Ilmiyah. Beirut.
Bara, I. 2007. What is Life? or α + β + ω = ∞. Oltenia. Studii şi comunicări. Ştiinţele Naturii. Tom XXIII. 233-238.
Cürcânî, S. Ş. 2015. Şerhu'l-Mevâkıf (Arabic Text Edited and Turkish Translation by Ömer Türker). İstanbul: Yazma Eserler Kurumu.
Dupré, J. 2012. Processes of Life: Essays in the Philosophy of Biology. Oxford: Oxford University Press.
Luisi P. L. 2006. The Emergence of Life: From Chemical Origins to Synthetic Biology. Cambridge: Cambridge University Press.
McGinnis, J. and Reisman, D. C. 2004. Interpreting Avicenna: Science and Philosophy in Medieval Islam. Brill, Leiden.
Nicholson, D. J. and Dupré, J. 2018. Everything Flows: Towards a Processual Philosophy of Biology. Oxford: Oxford University Press.
Popa, R. 2004. Between Necessity and Probability: Searching for the Definition and Origin of Life. Berlin: Springer-Verlag.
Pross, A. 2012. What is Life? How Chemistry becomes Biology. Oxford: Oxford University Press.
14:30
Daniel Nicholson (Konrad Lorenz Institute for Evolution and Cognition Research, Austria)
CANCELLED: Schrödinger's 'What Is Life?' 75 Years On
ABSTRACT. 2019 marks 75 years since Erwin Schrödinger, one of the most celebrated physicists of the twentieth century, turned his attention to biology and published a little book titled 'What Is Life?'. Much has been written on the book’s instrumental role in marshalling an entire generation of physicists as well as biologists to enter the new field that came to be known as 'molecular biology'. Indeed, many founding figures of molecular biology have acknowledged their debt to it. Scientifically, the importance of 'What Is Life?' is generally taken to lie in having introduced the idea that the hereditary material (at the time it hadn't yet been conclusively identified as DNA) contains a 'code-script' that specifies the information necessary for the developmental construction of an organism. Although Schrödinger ascribed too much agency to this code-script, as he assumed that it directly determines the organism’s phenotype, his insight that the genetic material contains a code that specifies the primary structure of the molecules responsible for most cellular functions has proven to be essentially correct. Similarly, Schrödinger's famous account of how organisms conform to the second law of thermodynamics, by feeding on 'negative entropy' at the expense of increasing the entropy of their surroundings, is also quite correct (even if this idea was already well-known at the time). Consequently, most retrospective evaluations of 'What Is Life?' (including the ones which have just appeared to commemorate its 75th anniversary) converge in praising the book for having exerted a highly positive influence on the development of molecular biology. In this paper I challenge this widely accepted interpretation by carefully dissecting the argument that Schrödinger sets out in 'What Is Life?', which concerns the nature of biological order. Schrödinger clearly demarcates the kind of order found in the physical world, which is based on the statistical averaging of vast numbers of stochastically-acting molecules that collectively display regular, law-like patterns of behaviour, from the kind of order found in the living world, which has its basis in the chemical structure of a single molecule, the self-replicating chromosome, which he conceived as a solid-state 'aperiodic crystal' in order to account for its remarkable stability in the face of stochastic perturbations. Schrödinger referred to the former, physical kind of order as 'order-from-disorder' and the latter, biological kind of order as 'order-from-order'. As I will argue, this demarcation proved disastrous for molecular biology, for it granted molecular biologists the licence for over half a century to legitimately disregard the impact of stochasticity at the molecular scale (despite being inevitable from a physical point of view), encouraging them instead to develop a highly idealized, deterministic view of the molecular mechanisms underlying the cell, which are still today often misleadingly characterized as fixed, solid-state 'circuits'. It has taken molecular biologists a disturbingly long time to 'unlearn' Schrödinger’s lessons regarding biological order and to start taking seriously the role of self-organization and stochasticity (or 'noise'), and this, I claim, should be considered the real scientific legacy of 'What Is Life?' 75 years on.
ABSTRACT. Nickles (1973) first introduced into the philosophical literature a distinction between two types of intertheoretic reduction. The first, more familiar to philosophers, involves the tools of logic and proof theory: "A reduction is effected when the experimental laws of the secondary science (and if it has an adequate theory, its theory as well) are shown to be the logical consequences of the theoretical assumptions (inclusive of the coordinating definitions) of the primary science" (Nagel 1961, 352). The second, more familiar to physicists, involves the notion of a limit applied to a primary equation (representing a law) or theory. The result is a secondary equation or theory. The use of this notion, and the subsequent distinction between so-called "regular" and "singular" limits, has played a role in understanding the prospects for reductionism, its compatibility (or lack thereof) with emergence, the limits of explanation, and the roles of idealization in physics (Batterman 1995; Butterfield 2011).
Despite all this debate, there has surprisingly been no systematic account of what this second, limit-based type of reduction is supposed to be. This paper provides such an account. In particular, I argue for a negative and a positive thesis. The negative thesis is that, contrary to the suggestion by Nickles (1973) and the literature following him, limits are at best misleadingly conceived as syntactic operators applied to equations. Besides not meshing with mathematical practice, the obvious ways to implement such a conception are not invariant under substitution of logical equivalents.
The positive thesis is that one can understand limiting-type reductions as *relations* between classes of models endowed with extra, topological (or topologically inspired) structure that encodes formally how those models are relevantly similar to one another. In a word, theory T reduces T' when the models of T' are arbitrarily similar to models of T -- they lie in the topological closure of the models of T. Not only does this avoid the problems with a syntactically focused account of limits and clarify the use of limits in the aforementioned debates, it also reveals an unnoticed point of philosophical interest, that the models of a theory themselves do not determine how they are relevantly similar: that must be provided from outside the formal apparatus of the theory, according to the context of investigation. I stress in conclusion that justifying why a notion of similarity is appropriate to a given context is crucial, as it may perform much of the work in demonstrating a particular reduction's success or failure.
I illustrate both negative and positive theses with the elementary case of the simple harmonic oscillator, gesturing towards their applicability to more complex theories, such as general relativity and other spacetime theories.
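As an illustration of the intended sense of "arbitrarily similar" (my gloss, not the author's construction), the following sketch compares solutions of the damped oscillator x'' + 2εx' + x = 0 with the undamped solution of x'' + x = 0 on a fixed interval; the choice of topology (uniform convergence on [0, 10]) is exactly the extra structure that, on the positive thesis, must be supplied from outside the theory.

import numpy as np

# Damped harmonic oscillator x'' + 2*eps*x' + x = 0 with x(0) = 1, x'(0) = 0,
# compared against the undamped solution cos(t) in the sup norm on [0, 10].
# As eps -> 0 the distance shrinks, so the undamped model lies in the closure
# of the damped family in this topology.

t = np.linspace(0.0, 10.0, 1001)
undamped = np.cos(t)

def damped(eps):
    omega = np.sqrt(1.0 - eps**2)          # underdamped regime, 0 < eps < 1
    return np.exp(-eps * t) * (np.cos(omega * t) + (eps / omega) * np.sin(omega * t))

for eps in [0.5, 0.1, 0.01, 0.001]:
    print(eps, float(np.max(np.abs(damped(eps) - undamped))))   # sup-norm distance -> 0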
References:
Batterman, R. W. (1995). Theories between theories: Asymptotic limiting intertheoretic relations. Synthese 103:171-201.
Butterfield, J. (2011). Less is different: Emergence and reduction reconciled. Foundations of Physics 41(6):1065-1135.
Nagel, E. (1961). The Structure of Science: Problems in the Logic of Scientific Explanation. Hackett, Indianapolis.
Nickles, T. (1973). Two concepts of intertheoretic reduction. The Journal of Philosophy 70(7):181-201.
Empirical Underdetermination for Physical Theories in a C*-Algebraic Setting: Comments on an Argument of Arageorgis
ABSTRACT. In this talk I intend to reconstruct an argument of Aristidis Arageorgis(1) against empirical underdetermination of the state of a physical system in a C*-algebraic setting and to explore its soundness. The argument, aiming against algebraic imperialism, the operationalist attitude which characterized the first steps of Algebraic Quantum Field Theory, is based on two topological properties of the state space: being T1 and being first countable in the weak*-topology. The former property is possessed trivially by the state space, while the latter is highly non-trivial and can be derived from the assumption that the algebra of observables is separable. I present some cases of classical and of quantum systems which satisfy the separability condition, and others which do not, and relate these facts to the dimension of the algebra and to whether it is a von Neumann algebra. Namely, I show that while in the case of finite-dimensional algebras of observables the argument is conclusive, in the case of infinite-dimensional von Neumann algebras it is not. In addition, there are cases of infinite-dimensional quasilocal algebras in which the argument is conclusive. Finally, I discuss Martin Porrmann's(2) construction of a net of local separable algebras in Minkowski spacetime which satisfies the basic postulates of Algebraic Quantum Field Theory.
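For reference, the two topological properties on which the argument turns can be stated as follows; these are the standard textbook definitions, not additions specific to Arageorgis's argument (here X is the state space equipped with the weak*-topology):

% standard definitions, stated in LaTeX
X \text{ is } T_1 \ \text{iff} \ \forall x, y \in X\, \big(x \neq y \rightarrow \exists\, \text{open } U \subseteq X \text{ with } x \in U \text{ and } y \notin U\big) \quad (\text{equivalently, every singleton is closed});
X \text{ is first countable iff every } x \in X \text{ has a countable neighbourhood base } \{U_n\}_{n \in \mathbb{N}} \text{ in the weak*-topology}.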
(1) Arageorgis, A., (1995). Fields, Particles, and Curvature: Foundations and Philosophical Aspects of Quantum Field Theory in Curved Spacetime. Pittsburgh: University of Pittsburgh (PhD Dissertation)
(2) Porrmann, M. (2004). “Particle Weights and their Disintegration II”, Communications in Mathematical Physics 248: 305–333
Thou Shalt not Nudge: Towards an Anti-Psychological State
ABSTRACT. Neoclassical economics defines market failures as an uncompensated impact of one agent’s actions on other agents’ well-being. The favored solution is the use of economic incentives like taxes and subsidies to correct these situations. Recently, the findings of behavioral economists have provided support for the argument that market failures should also comprise cases where individuals harm themselves due to systematic mistakes they make (Sunstein 2014; Allcott and Sunstein 2015). Also, the set of regulatory tools should be expanded beyond economic incentives towards the use of subtle manipulation of the choice architecture (Thaler, Sunstein, and Balz 2014).
I argue that both of these steps would serve to increase the arbitrary power of the government and the fragility of liberal democratic institutions. While it is easy to muster intuitive support for the claim that exploitation of systematic mistakes in decision-making is an inherent feature of free market exchange (Akerlof and Shiller 2015), no one has yet succeeded in establishing a coherent and practically useful notion of ‘true preferences’ against which these mistakes could be defined (Sugden 2018). Thus, the concept of market failure due to self-harm is vague. Therefore, government interventions to prevent these failures lack a general theoretical framework and, where applied, proceed on an ad hoc basis. Moreover, insofar as individuals’ choices are no longer to be taken at face value, voters’ choices can be contested at least as easily as consumers’ choices (Brennan 2016). The use of nudges instead of economic incentives to bring people’s choices closer to their nebulous true preferences lowers the transparency of the intervention and increases the temptation to misuse it to strengthen the incumbents’ hold on political power (Schubert 2017).
I propose to use the government’s regulatory power to preempt the most dangerous manipulative techniques rather than to engage the government in them. Such an ‘anti-psychological’ role has significant advantages. Regulation of the forms of commercial (and political) communication can capitalize on scientific knowledge of human cognitive limitations, and yet it avoids the need to establish what the true preferences are. It also takes the form of general rules, which are more transparent than measures that need to target particular situations.
References
Akerlof, George A., and Robert J. Shiller. 2015. Phishing for Phools: The Economics of Manipulation and Deception. Princeton: Princeton University Press.
Allcott, Hunt, and Cass R. Sunstein. 2015. “Regulating Internalities.” Journal of Policy Analysis and Management 34(3):698–705. https://doi.org/10.1002/pam.21843.
Brennan, Jason. 2016. Against Democracy. Princeton: Princeton University Press.
Schubert, Christian. 2017. “Exploring the (Behavioural) Political Economy of Nudging.” Journal of Institutional Economics 13(3):499–522. https://doi.org/10.1017/S1744137416000448.
Sugden, Robert. 2018. The Community of Advantage: A Behavioural Economist’s Defence of the Market. New York, NY: Oxford University Press.
Sunstein, Cass R. 2014. Why Nudge? The Politics of Libertarian Paternalism. New Haven: Yale University Press.
Thaler, Richard H., Cass R. Sunstein, and John P. Balz. 2014. “Choice Architecture.” SSRN Scholarly Paper ID 2536504. Rochester, NY: Social Science Research Network. https://papers.ssrn.com/abstract=2536504.
Utopias in the context of social technological inquiry
ABSTRACT. This paper elaborates on Otto Neurath’s proposal that utopias can be used in the methodology of social scientific and technological research. In Foundations of the Social Sciences (1944), Neurath claims that such imaginative works may provide means for scientists to overcome the limitations of existing social arrangements in order to devise alternatives to experienced problematic situations.
I compare this point of view with the work scientists do with models and nomological machines in Nancy Cartwright’s conception, as presented in The Dappled World (1999). That is, utopias are abstractions that depict the complexity of social arrangements and that provide idealized situations to which our understanding of some aspects of society applies. As models, they enable scientists to visualize the functioning of social scientific laws and generalizations, as well as new possibilities, since models allow one to operate on the features and consequences of the abstract arrangements. In this operation scientists acquire knowledge not only of imagined arrangements, but also of concrete social institutions, since models mediate between more abstract and more concrete parts of scientific experience. But how does this mediation take place? That is, why is that knowledge valid?
A common answer to this question in the recent controversy on models in philosophy of science assumes some form of (more or less mitigated) scientific realism: that scientific models represent some features of reality. Such an answer can be found in Cartwright’s proposals, since she claims that scientific models and nomological machines instantiate real capacities of the modeled systems. This stance seems not to be compatible with an account of the complexity of social situations, which have many concurring causes that are not always describable in mathematical terms. In other words, social arrangements do not present the stability that Cartwright’s models and nomological machines seem to require.
An approach to utopias as models is meant to bring together scientific and literary social thought. A realist claim, to the effect that science apprehends some aspects of reality while literature does not, draws too sharp a line between these modes of social reasoning. Nevertheless, an appropriate account of social scientific models must offer a way to distinguish between models in scientific investigations and utopias regarded as fictional works.
My suggestion is that this problem is properly addressed by considering the pragmatic contexts of inquiry in which utopias as models of social science and technology appear. In this paper I am going to develop this suggestion by drawing inspiration from the works of John Dewey as well as from some recent theories of inquiry. In this perspective, scientific abstract constructions are to be considered as answers to experienced problematic situations and as projected courses of action to deal with social problems. The difference in regard to utopias as works of art is not in the composition of the abstraction, but in the context of inquiry that they elicit. By focusing on the context of inquiry, this approach dismisses the need for realist claims, in the spirit of Neurath’s well-known anti-metaphysical stance.
ABSTRACT. A thriving literature has developed over logical and mathematical pluralism (LP and MP, respectively) – i.e. the views that several rival logical and mathematical theories can be correct. However, these literatures have unfortunately grown apart; we submit that, instead, both can gain greatly from closer interaction.
To show this, we present some new kinds of MP modeled on parallel ways of substantiating LP, and vice versa. We will use abstractionism in the philosophy of mathematics (Wright 1983) as our reference point. Abstractionists seek to recover as much mathematics as possible from abstraction principles (APs), viz. quantified biconditionals stating that two items have the same abstract just in case they belong to the same equivalence class; e.g. Hume’s Principle (HP), which states that two concepts have the same cardinal number iff they can be put into one-to-one correspondence (Frege 1884, §64). The new forms of pluralism we advance can fruitfully be clustered as follows:
1. CONCEPTUAL PLURALISM – From LP to MP: Just as LPs argue that different relations of logical consequence are equally legitimate by claiming that the notion of validity is underspecified (Beall & Restall 2006) or polysemous (Shapiro 2014), abstractionists might deem more than one version of HP acceptable by stating that the notion of “just as many” – and, consequently, of cardinal number – admits of different precisifications.
2. DOMAIN PLURALISM – From MP to LP: Just as MPs claim that rival mathematical theories can be true of different domains (Balaguer 1998), it could be argued that each version of HP introduces its own domain of cardinal numbers, and that the results these APs yield might differ with respect to some domains, and match with respect to some others (e.g., of finite and infinite cardinals). The proposal, in turn, prompts some reflections on the sense of “rivalry” between the logics accepted by LPs, which often agree on some laws, while diverging on others. Is the weaker logic genuinely disagreeing or just silent on the disputed rule? Do rival logicians employ the same notion of consequence in those rules about which they agree or, given some inferentialist view, always talk past each other?
3. CRITERIA PLURALISM – From LP to MP, and back: Another form of pluralism about abstractions could be based on the fact that more than one AP is acceptable with respect to different criteria (e.g. irenicity, conservativity, simplicity); accordingly, LP has so far been conceived as the claim that more than one logic satisfies a single set of requirements, but a new form of LP could arise from the acceptance of several legitimacy criteria themselves (e.g. compliance with our intuitions on validity, accordance with mathematical practice).
These views – besides, we will argue, being attractive in and of themselves – help expand and clarify the spectrum of possibilities available to pluralists in the philosophy of both logic and mathematics; as a bonus, this novel take can be shown to shed light on long-standing issues regarding LP and MP – in particular, respectively, the “collapse problem” (Priest 1999) and the Bad Company Objections (Linnebo 2009).
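For concreteness, Hume's Principle as glossed above is standardly written as the second-order biconditional below; this is the textbook formulation (cf. Wright 1983), not something added by the authors:

\forall F\, \forall G\ \big( \#F = \#G \leftrightarrow F \approx G \big), \quad \text{where } F \approx G \text{ abbreviates the claim that some relation correlates the objects falling under } F \text{ one-to-one with those falling under } G.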
References
Balaguer, M. (1998). Platonism and Anti-Platonism in the Philosophy of Mathematics. OUP.
Beall, JC and Restall, G. (2006). Logical Pluralism. OUP.
Frege, G. (1884). The Foundations of Arithmetic, tr. by J. Austin, Northwestern University Press, 1950.
Linnebo, Ø. (2009). Introduction to Synthese Special Issue on the Bad Company Problem, 170(3): 321-9.
Priest, G. (1999). “Logic: One or Many?”, typescript.
Shapiro, S. (2014). Varieties of Logic. OUP.
Wright, C. (1983). Frege’s Conception of Numbers as Objects, Aberdeen UP.
ABSTRACT. I conceive logic as a formal presentation of a guide to undertaking a rational practice, a guide which itself is constituted by epistemic norms and their consequences. The norms themselves may be conceived in a non-circular manner with a naturalistic account, and we use Hilary Kornblith's: epistemic norms are "hypothetical imperatives" informed by instrumental desires "in a cognitive system that is effective at getting at the truth" ([1]). What I mean by "formal" is primarily what John MacFarlane refers to in his PhD thesis [2] as the view that logic "is indifferent to the particular identities of objects", taken together with MacFarlane's intrinsic structure principle and my own principle that logic is provided by the norms that constitute a rational practice.
The view that logic is provided by constitutive norms for a rational practice helps us respond to a popular objection to logical pluralism, the collapse argument ([3], chapter 12). Logic here has been misconceived as starting with a given situation and then reasoning about it. Instead we start with our best known practice to suit an epistemic goal, and ask how to formalise this practice.
This view of logic provides a starting point for an account of the normativity of logic: assuming we ought to follow the guide, we ought to accept the logic's consequences. If we cannot, we must revise either the means of formalisation or some of the epistemic norms that constitute the guide. Revision might be performed either individually or on a social basis, comparable to Novaes' conception in [4]. Mutual understanding of differences emerges from the practice-based principle of interpretive charity: we make the best sense of others when we suppose they are following epistemic norms with maximal epistemic utility with respect to our possible interpretations of what their instrumental desires could be.
One might ask what the use is of logic as a formalisation of good practice rather than good practice in itself. Indeed Teresa Kouri Kissel in [5] takes as a motto that "we ought not to legislate to a proper, functioning, science". Contrary to this, my response is that logic provides evidence for or against our conception of good practice, and can thus outrun our own intuitions of what good practice is. Implementations of intuitionistic logic manifested in proof assistants such as Coq have proved themselves capable of outrunning intuitions of good mathematical practice in the cases of particularly long proofs (see for instance [6]).
[1] Kornblith, Hilary, "Epistemic Normativity", Synthese, Vol. 94, pp. 357-376, 1993.
[2] MacFarlane, John, What Does It Mean That Logic Is Formal, PhD thesis University of Pittsburgh, 2000.
[3] Priest, Graham, Doubt Truth to Be a Liar, 2009.
[4] Dutilh Novaes, Catarina, "A Dialogical, Multi-Agent Account of the Normativity of Logic", Dialectica, Vol. 69, Issue 4, pp. 587-609, 2015.
[5] Kouri Kissel, Teresa, Logical Instrumentalism, PhD thesis Ohio State University, 2016.
[6] Gonthier, Georges, "Formal Proof—The Four Color Theorem", Notices of the American Mathematical Society, Vol. 55, No. 11, pp. 1382-1393, 2008.
ABSTRACT. I will discuss the questions of what is called, and what should be called, "philosophical logic", and whether such a discipline is possible and relevant. After analyzing the usage of the term "philosophical logic" and the objections against that usage, I will explain my own conception of philosophical logic.
The main questions are: Is there a significant field of study for which there is no suitable term? Where is this field of study named "philosophical logic"? Is "philosophical logic" a (kind of) logic, or is it philosophy but not logic?
There are two main kinds of reasons for the term "philosophical logic": scientific and theoretical reasons, and social and practical ones.
Theoretical reasons: there are significant problem fields in twentieth-century logic. A new philosophical problematic developed because of the paradoxes in set theory and the limitative theorems (Tarski, Gödel). These necessitated the elaboration of new philosophical and conceptual investigations into the methods, nature and subject of mathematics and logic and the broad epistemological topics connected with them. But the genuinely new research field in logic was non-classical logic. Non-classical logic, and especially modal logic, became a central topic of logical research in the twentieth century and was by the same token considered highly significant for philosophy.
Social and practical reasons: adopting such a term is convenient for two groups of scholars in their work and careers.
- Philosophers with good knowledge of other fields (ontology, epistemology, philosophy of science) and traditional training in logic.
- Scientists with good skills in formal (mathematical) methods, frequently with a firmly mathematical education, who work in the field of non-classical logics and find jobs as logic lecturers in philosophy departments. Non-classical logics are not related to the logic of mathematics; with the exception of intuitionistic logic, they do not serve as the basis of mathematical theories, which is why most mathematicians were for a long time not interested in non-classical logics. In mathematics there is no contextual ambiguity or modality, and such notions are not interesting for mathematicians. This led to the employment of logicians interested in such problems in philosophy faculties.
The term also sounds impressive and hardly provokes objections from deans and foundations.
Objections
The main objections against the term "philosophical logic" are: "It is unnecessary – 'logic' is good enough in all cases"; "Which problems belong to logic but not to 'philosophical logic'? Are the papers of Aristotle (or Frege, or Hilbert) 'philosophical logic'?"; and "If logic is a part of philosophy, why must we restrict logic by means of the more general concept 'philosophy'?".
I see four possible interpretations of the term "philosophical logic":
- Philosophical logic as (some types of) logic, studying logical systems in connection with philosophy; especially as the logic that investigates non-mathematical reasoning.
- "Philosophical logic" as "the logic in (of) philosophy", exploring the rules of logical inference and the modes of deduction from and within philosophy.
- "Philosophical logic" as "philosophy in logic".
- "Philosophical logic" as "philosophy of logic".
14:30
Olga Karpinskaia (Foundation for Humanities Research and Technologies, Russia)
Abstract and concrete concepts: an approach to classification
ABSTRACT. 1. Term logic, also called traditional or Aristotelian logic, deals with terms as the main components of propositions. Terms are considered to represent ideas or concepts. Concepts are usually divided into concrete and abstract ones. Concrete concepts (such as Socrates, tree, table, etc.) are taken to refer to things or objects, and abstract concepts (such as wisdom, truth, etc.) refer to properties of objects. Objects, in contrast to properties, have independent being.
V. Bocharov and V. Markin in their textbook «Elements of Logic» (Moscow, 1994) propose a refinement of this classification by distinguishing between various kinds of objects. Namely, they consider individuals, n-tuples of individuals and sets of individuals to be different types of objects, which can have properties and enter into relations with each other. Then a concept is said to be concrete if and only if its extension consists of individuals, n-tuples of individuals or sets of individuals, and it is abstract otherwise.
2. This classification is problematic, since it does not fit the common-sense idea of the abstract/concrete distinction, and moreover, it finds no support in the observations of modern psychology. Specifically, according to Bocharov and Markin's definition, such abstract objects as numbers, geometrical figures, truth values, etc. are considered to be individuals, and thus the concepts about them should be recognized as concrete, side by side with concepts about tables, chairs and trees. Moreover, a concept about a concept will then also be concrete, which is counterintuitive. Besides, one and the same concept can appear both concrete and abstract, depending on how the corresponding objects are treated. It also seems difficult to differentiate between concepts having different types of abstractness, such as friendship and symmetric relation.
3. I propose an approach to concept classification based on the metaphysical division between particulars and universals. Accordingly, concepts can be divided into logically concrete and logically abstract. As usual, particulars can be defined as concrete, spatiotemporal entities accessible to sensory perception, as opposed to abstract entities, such as properties or numbers. Then a concept is logically concrete if and only if its extension consists of particulars, and it is logically abstract otherwise. Thus, logically abstract concepts concern various kinds of universals, such as sets of particulars, properties, relations, abstract objects, their n-tuples, sets, etc. This approach makes it possible to differentiate between levels of abstractness. Thus, concepts about properties, relations and functional characteristics will be less abstract than, for example, concepts about properties of properties, etc.
4. The proposed concept classification, based on the types of generalized objects, opens further opportunities for natural language analysis and for determining the degree (level) of abstractness of given discourses and domains.
ABSTRACT. As a result of the “practical turn” in the philosophy of mathematics, a significant part of the research activity of the field consists in the analysis of all sorts of mathematical corpora. The problem of mathematical textuality (inscriptions, symbols, marks, diagrams, etc.) has thus gained an increasing importance as decisive aspects of mathematical knowledge have been shown to be related to regularities and emergent patterns identifiable at the level of mathematical signs in texts. However, despite the fruitfulness of text-driven approaches in the field, the concrete tools available for the analysis of actual mathematical texts are rather poor and difficult to employ objectively. Moreover, analytical techniques borrowed from other fields, such as computational linguistics, NLP, logic or computer science, often present problems of adaptability and legitimacy.
Those difficulties reveal a lack of clear foundations for a theory of textuality that can provide concrete instruments of analysis, general enough to deal with mathematical texts. In this work, we intend to tackle this problem by proposing a novel conceptual and methodological framework for the automatic treatment of texts, based on a computational implementation of an analytical procedure inspired by the classic structuralist theory of signs. Guided by the goal of treating mathematical texts, our approach assumes a series of conditions for the elaboration of the intended analytical model. In particular, the latter should rely on a bottom-up approach; be unsupervised; be able to handle multiple sign regimes (e.g. alphabetical, formulaic, diagrammatical, etc.); be oriented towards the identification of syntactic structures; capture highly stable regularities; and provide an explicit account of those regularities.
A major obstacle that the vast majority of existing NLP models present to meeting those requirements resides in the primacy accorded to words as fundamental units of language. The main methodological hypothesis of our perspective is that basic semiological units should not be assumed (e.g. as words in a given dictionary) but discovered as the result of a segmentation procedure. The latter not only allows one to capture generic units of different levels (graphical, morphological, lexical, syntactical, etc.) in an unsupervised way, but also provides a more complex semiological context for those units (i.e. units co-occurring with a given unit within a certain neighborhood). The task of finding structural features can thus be envisaged as that of identifying plausible ways of typing those units, based on a duality relation between units and contexts within the segmented corpus. More precisely, two terms are considered of the same type if they are bi-dual with respect to contexts. The types thus defined can then be refined by considering their interaction, providing an emergent complex type structure that can be taken as the abstract grammar of the text under analysis.
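A deliberately naive sketch of the unit/context pairing this procedure starts from, assuming a toy, pre-segmented corpus of my own invention (the bi-duality and type refinement of the actual proposal are richer than the exact-match grouping used here):

from collections import defaultdict

# Each unit is characterised by the set of (left, right) contexts in which it
# occurs; units with identical context sets are grouped into one candidate type.
# This is only the first, crude step of a duality-based typing.

corpus = [
    ["a", "+", "b", "=", "b", "+", "a"],
    ["x", "+", "y", "=", "y", "+", "x"],
]

contexts = defaultdict(set)
for sentence in corpus:
    padded = ["#"] + sentence + ["#"]            # '#' marks the sentence boundary
    for i, unit in enumerate(sentence, start=1):
        contexts[unit].add((padded[i - 1], padded[i + 1]))

types = defaultdict(list)
for unit, ctxs in contexts.items():
    types[frozenset(ctxs)].append(unit)

for members in types.values():
    print(members)   # the variable-like symbols pair up; '+' and '=' stay apart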
In addition to providing a conceptual framework and concrete automated tools for textual analysis, our approach puts forward a novel philosophical perspective in which logic appears as a necessary intermediary between textual properties and mathematical contents.
Bibliography
Juan Luis Gastaldi. Why can computers understand natural language. Philosophy & Technology. Under review.
Jean-Yves Girard et al. Proofs and types. Cambridge University Press, New York, 1989.
Zellig Harris. Structural linguistics. University of Chicago Press, Chicago, 1960.
Louis Hjelmslev. Résumé of a Theory of Language. Number 16 in Travaux du Cercle linguistique de Copenhague. Nordisk Sprog-og Kulturforlag, Copenhagen, 1975.
Tomas Mikolov et al. Distributed representations of words and phrases and their compositionality. CoRR, abs/1310.4546, 2013.
Peter D. Turney et al. From frequency to meaning: Vector space models of semantics. CoRR, abs/1003.1141, 2010.
Entering the valley of formalism: Results from a large-scale quantitative investigation of mathematical publications
ABSTRACT. As pointed out by Reuben Hersh (1991) there is a huge difference between the way mathematicians work and the way they present their results. In a previous qualitative study on mathematical practice we confirmed this result by showing that although mathematicians frequently use diagrams and figures in their work process, they tend to downplay these representations in their published manuscripts, in part because they feel subjected to genre norms and values when they prepare their work for publication (AUTHOR and ANONYMIZED 2016; Accepted). This result calls for a better understanding of these genre norms and of the development the norms may undergo over time.
From a casual point of view, it may seem that the norms are currently in a process of change. The formalistic claim that figures and diagrams are superfluous has been contested by philosophers of mathematics (e.g. Brown 1999, Giaquinto 2007), and looking at mathematics journals and textbooks, one gets the impression that diagrams and figures are being used more frequently. That, however, is merely an impression, as we do not have solid empirical data tracking the representational style used in mathematics texts.
In order to fill this gap, ANONYMIZED, ANONYMIZED and AUTHOR developed a classification scheme that makes it possible to distinguish between the different types of diagrams used in mathematics based on the cognitive support they offer (AUTHOR et al 2018). The classification scheme is designed to facilitate large-scale quantitative investigations of the norms and values expressed in the publication style of mathematics, as well as trends in the kinds of cognitive support used in mathematics. We presented the classification scheme at conferences and workshops during the summer of 2018 to get feedback from other researchers in the field. After minor adjustments we applied the scheme to track the changes in publication style in the period 1885 to 2015 in the three mathematics journals Annals of Mathematics, Acta Mathematica and Bulletin of the AMS.
In this talk I will present the main results of our investigation, and I will discuss the advantages and disadvantages of our method as well as the possible philosophical implications of our main results.
Literature
• Hersh, R. (1991): Mathematics has a front and a back. Synthese 80(2), 127-133.
• AUTHOR and ANONYMIZED (2016): [Suppressed for review]
• AUTHOR and ANONYMIZED (Accepted): [Suppressed for review]
• Brown, J. R. (1999): Philosophy of mathematics, an introduction to a world of proofs and pictures. Philosophical Issues in Science. London: Routledge.
• Giaquinto, M. (2007): Visual Thinking in mathematics, an epistemological study. New York: Oxford University Press.
• AUTHOR, ANONYMIZED and ANONYMIZED (2018): [Suppressed for review]
ABSTRACT. A persistent theme in defense of deliberation as a process of collective decision making is the claim that voting cycles, and more generally Arrowian impossibility results, can be avoided by public deliberation prior to aggregation [2,4]. The argument is based on two observations. First is the mathematical fact that pairwise majority voting always outputs a Condorcet winner when the input preference profile is single-peaked. With its domain restricted to single-peaked profiles, pairwise majority voting satisfies, alongside the other Arrowian conditions, rationality when the number of voters is odd [1]. Second are the conceptual arguments [4, 2] and the empirical evidence that deliberation fosters the creation of single-peaked preferences [3], which is often explained through the claim that group deliberation helps create meta-agreements [2]. These are agreements regarding the relevant dimensions along which the problem at hand should be conceptualized, as opposed to a full consensus on how to rank the alternatives, i.e. a substantive agreement. However, as List [2] observes, single-peakedness is only a formal structural condition on individual preferences. Although single-peaked preferences do entail the existence of a structuring dimension, this does not mean that the participants explicitly agree on what that dimension is. As such, single-peakedness does not reflect any joint conceptualization, which is necessary for meta-agreement. Achieving meta-agreement usually requires the participants to agree on the relevant normative or evaluative dimension for the problem at hand. This dimension will typically reflect a thick concept intertwining factual with normative and evaluative questions, for instance health, well-being, sustainability, freedom or autonomy, to name a few. It seems rather unlikely that deliberation will lead the participants to agree on the meaning of such contested notions. Of course, deliberative democrats have long observed that public deliberation puts rational pressure on the participants to argue in terms of the common good [4], which might be conducive to agreement on a shared dimension. But when it comes to such thick concepts this agreement might only be a superficial one, involving political catchwords, leaving the participants using their own, possibly mutually incompatible understandings of them [5]. All of this does not exclude that deliberation might make it more likely, in comparison with other democratic procedures, to generate single-peaked preferences from meta-agreements. The point is rather that by starting from the latter one puts the bar very high, especially if there appear to be other ways to reach single-peaked preferences or to avoid cycles altogether. In view of this, two questions arise regarding the claim that deliberation helps avoid cycles:
Q1: Can cycles be avoided by pre-voting deliberation in cases where they are comparatively more likely to arise, namely in impartial cultures, i.e. where a voter picked at random is equally likely to have any of the possible strict preference orderings on the alternatives?
Q2: If yes, are meta-agreements or the creation of single-peaked preferences necessary or even helpful for that?
In this work we investigate these questions more closely. We show that, except in cases where the participants are extremely biased towards their own opinion, deliberation indeed helps to avoid cycles. It does so even in rather unfavourable conditions, i.e. starting from an impartial culture and with participants rather strongly biased towards themselves. Deliberation also creates single-peaked preferences. Interestingly enough, however, this does not appear particularly important for avoiding cycles. Most if not all voting cycles are eliminated, but not by reaching single-peaked preferences. We show this in a minimalistic model of group deliberation in which the participants repeatedly exchange, and rationally update, their opinions. Since this model completely abstracts away from the notion of meta-agreement, it provides an alternative, less demanding explanation as to how pre-voting deliberation can avoid cyclic social preferences, one that shifts the focus from the creation of single-peaked preferences to rational preference change and openness to changing one's mind upon learning the opinions of others.
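The following minimal simulation (my own illustration, not the authors' deliberation model; all numbers are arbitrary) shows the baseline the discussion starts from: how often an impartial-culture profile over three alternatives produces a majority cycle, and how often such a profile happens to be single-peaked.

import random
from itertools import permutations

# Impartial culture: each voter's strict ranking of three alternatives is drawn
# uniformly at random. We count majority cycles and single-peaked profiles.

ALTERNATIVES = ["a", "b", "c"]

def majority_prefers(profile, x, y):
    return sum(r.index(x) < r.index(y) for r in profile) > len(profile) / 2

def has_cycle(profile):
    return any(majority_prefers(profile, x, y)
               and majority_prefers(profile, y, z)
               and majority_prefers(profile, z, x)
               for x, y, z in permutations(ALTERNATIVES, 3))

def single_peaked(profile):
    # With three alternatives, a profile is single-peaked on an axis iff no voter
    # ranks the middle alternative of that axis last (Black's condition).
    return any(all(r.index(axis[1]) != 2 for r in profile)
               for axis in permutations(ALTERNATIVES))

random.seed(0)
n_voters, n_profiles = 11, 10_000
cycles = peaked = 0
for _ in range(n_profiles):
    profile = [random.sample(ALTERNATIVES, 3) for _ in range(n_voters)]
    cycles += has_cycle(profile)
    peaked += single_peaked(profile)
print(f"majority cycles: {cycles / n_profiles:.3f}, single-peaked: {peaked / n_profiles:.3f}")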
References
[1] K. J. Arrow. Social Choice and Individual Values. Number 12. Yale University Press, 1963.
[2] C. List. Two concepts of agreement. The Good Society, 11(1):72–79, 2002.
[3] C. List, R. C. Luskin, J. S. Fishkin, and I. McLean. Deliberation, single-peakedness, and the possibility of meaningful democracy: evidence from deliberative polls. The Journal of Politics, 75(1):80–95, 2012.
[4] D. Miller. Deliberative democracy and social choice. Political studies, 40(1 suppl):54–67, 1992.
[5] V. Ottonelli and D. Porello. On the elusive notion of meta-agreement. Politics, Philosophy & Economics, 12(1):68–92, 2013.
15:45
Alexandru Baltag (Institute for Logic, Language and Computation, Netherlands) Soroush Rafiee Rad (Bayreuth University, Germany) Sonja Smets (Institute for Logic, Language and Computation, Netherlands)
Learning Probabilities: A Logic of Statistical Learning
ABSTRACT. We propose a new model for forming beliefs and learning about unknown probabilities (such as the probability of picking a red marble from a bag with an unknown distribution of coloured marbles). The most widespread model for such situations of `radical uncertainty' is in terms of imprecise probabilities, i.e. representing the agent's knowledge as a set of probability measures. We add to this model a plausibility map, associating to each measure a plausibility number, as a way to go beyond what is known with certainty and represent the agent's beliefs about probability. There are a number of standard examples: Shannon Entropy, Centre of Mass etc. We then consider learning of two types of information: (1) learning by repeated sampling from the unknown distribution (e.g. picking marbles from the bag); and (2) learning higher-order information about the distribution (in the shape of linear inequalities, e.g. we are told there are more red marbles than green marbles). The first changes only the plausibility map (via a `plausibilistic' version of Bayes' Rule), but leaves the given set of measures unchanged; the second shrinks the set of measures, without changing their plausibility. Beliefs are defined as in Belief Revision Theory, in terms of truth in the most plausible worlds. But our belief change does not comply with standard AGM axioms, since the revision induced by (1) is of a non-AGM type. This is essential, as it allows our agents to learn the true probability: we prove that the beliefs obtained by repeated sampling converge almost surely to the correct belief (in the true probability). We end by sketching the contours of a dynamic doxastic logic for statistical learning.
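A toy sketch of the two types of learning described above, under simplifying assumptions of my own: a finite grid of candidate measures and a uniform initial plausibility map, rather than the entropy- or centre-of-mass-based maps mentioned in the abstract.

import random

# Unknown P(red) for draws from the bag, restricted to a finite grid of candidates.
# Type (1) learning: each sampled marble reweights the plausibility map by likelihood
# (a "plausibilistic" analogue of Bayes' rule) but leaves the candidate set intact.
# Type (2) learning: higher-order information (a linear inequality) shrinks the
# candidate set without changing plausibilities.

plausibility = {i / 10: 1.0 for i in range(11)}     # uniform initial plausibility

def learn_sample(red: bool):
    for p in plausibility:
        plausibility[p] *= p if red else (1 - p)

def learn_constraint(satisfies):
    for p in [q for q in plausibility if not satisfies(q)]:
        del plausibility[p]

random.seed(1)
true_p = 0.7
for _ in range(50):
    learn_sample(random.random() < true_p)

learn_constraint(lambda p: p > 0.5)                 # "more red marbles than green"

print("most plausible P(red):", max(plausibility, key=plausibility.get))

With enough samples the most plausible candidate settles on the true proportion, which is the convergence behaviour the abstract's theorem concerns (here only illustrated numerically).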
De Finetti meets Popper or Should Bayesians care about falsificationism?
ABSTRACT. Views of the role of hypothesis falsification in statistical testing do not divide as cleanly between frequentist and Bayesian views as is commonly supposed. This can be shown by considering the two major variants of the Bayesian approach to statistical inference and the two major variants of the frequentist one.
A good case can be made that the Bayesian, De Finetti, just like Popper, was a falsificationist. A thumbnail view of De Finetti’s theory of learning, which is not just a caricature, is that your subjective probabilities are modified through experience by noticing which of your predictions are wrong, striking out the sequences that involved them and renormalising.
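On that thumbnail view, updating looks roughly like the following sketch (my illustration, with an arbitrarily chosen uniform prior over short toss sequences): falsified sequences are struck out and the remaining weights renormalised.

from itertools import product

# Subjective probabilities over all length-4 toss sequences; observing the first
# tosses deletes every sequence they falsify and renormalises the survivors.

sequences = list(product("HT", repeat=4))
beliefs = {s: 1 / len(sequences) for s in sequences}     # illustrative uniform prior

def observe(beliefs, observed):
    surviving = {s: w for s, w in beliefs.items() if s[:len(observed)] == tuple(observed)}
    total = sum(surviving.values())
    return {s: w / total for s, w in surviving.items()}

posterior = observe(beliefs, "HT")
p_next_heads = sum(w for s, w in posterior.items() if s[2] == "H")
print(f"P(third toss H | observed H, T) = {p_next_heads:.2f}")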
On the other hand, in the formal frequentist Neyman-Pearson approach to hypothesis testing, you can, if you wish, shift the conventional null and alternative hypotheses, making the latter the straw man and, by ‘disproving’ it, asserting the former.
The frequentist, Fisher, however, at least in his approach to the testing of hypotheses, seems to have taken the strong view that the null hypothesis was quite different from any other and that there was a strong asymmetry in the inferences that followed from the application of significance tests.
Finally, to complete a quartet, the Bayesian geophysicist Jeffreys, inspired by Broad, specifically developed his approach to significance testing in order to be able to ‘prove’ scientific laws.
By considering the controversial case of equivalence testing in clinical trials, where the object is to prove that ‘treatments’ do not differ from each other, I shall show that there are fundamental differences between ‘proving’ and falsifying a hypothesis and that this distinction does not disappear by adopting a Bayesian philosophy. I conclude that falsificationism is important for Bayesians also, although it is an open question as to whether it is enough for frequentists.
15:45
Timothy Childers (Institute of Philosophy, Czech Academy of Sciences, Czechia)
Comment on “De Finetti meets Popper”
ABSTRACT. The claim that subjective Bayesianism is a form of falsificationism in which priors are rejected in light of evidence clashes with a fundamental symmetry of Bayesianism, p(¬A) = 1 – p(A), tying together confirmation and falsification of a hypothesis. Wherever there is disconfirmation, there is falsification. This duality arises from the Boolean structure of a probability space. While Popper holds that there is a fundamental asymmetry between the two, subjective Bayesianism must view confirmation and disconfirmation as two sides of the same coin. Moreover, the standard account is that priors are neither falsified nor verified, but are revised in light of accepted evidence, again, dually.
That said, there are forms of Bayesianism that are closer to Popper’s methodology. In particular, one form, which we might term London Bayesianism, has more in common with the Popperian approach than is generally recognized. (I choose the name since this is where it originated, especially in the work of Colin Howson and Peter Urbach: I take it as opposed to the Princeton Bayesianism of David Lewis and his students). This form of Bayesianism is motivated by the acceptance of a negative solution to the problem of induction and a deep scepticism towards mechanical approaches to scientific methodology. In particular, Howson’s rejection of diachronic conditionalization in favour of synchronic conditionalization shifts Bayesianism toward a generalized account of consistency at a given time, away from a view of Bayes’ Theorem as providing the ideal account of learning from experience (whatever that might be). This also leads to the rejection of countable additivity by Howson and others.
Finally, as the author points out, standard statistical methodology incorporates an adjustable parameter (levels of significance) for which no independent justification is given. Thus it exemplifies an ad hoc solution to scientific discovery, and so cannot be seen as taking falsifications seriously.
Dennis Apolega (Philippine Normal University, Philippines)
CANCELLED: Does Scientific Literacy Require a Theory of Truth?
ABSTRACT. From "flat earthers" to "anti-vaxxers", to the hoax cures and diets in social media, the importance of scientific literacy cannot be emphasized enough. On the one hand, this points to one of the challenges facing those in science education: teaching approaches may need to change and deficiencies in the curriculum may need to be addressed. On the other hand, it opens the discussion to epistemological questions of truth and knowledge. One can easily turn to claims such as “‘The earth is flat’ is false” or “Baking soda is not a treatment for cancer” for the kinds of discussions that involve both scientific literacy and epistemology.
This paper aims to show that while scientific literacy may benefit from discussions of epistemological issues, it does not require a theory of truth. This appears counterintuitive since there is a view that epistemology needs to account for the success of science in terms of its being truth conducive. This is the view that Elgin (2017) calls veritism. Following Elgin, some of the problems with veritism in science will be discussed in terms of their relevance to scientific literacy.
Popularizers of science would also probably object to this position that a theory of truth is not required for scientific literacy, especially since this paper also looks back at Rorty’s (1991) views on science to buttress its position further. Indeed, Rorty’s views on science may prove more relevant to issues in scientific literacy than to science itself.
REFERENCES
Elgin, Catherine. 2017. True Enough. MIT Press.
Rorty, Richard. 1991. Objectivity, Relativism and Truth. Cambridge University Press.
ABSTRACT. Can teaching have any impact on students' willingness to embrace pseudo-scientific claims? And if so, will this impact be significant? This paper presents ongoing research, conducted in two countries and at four universities, that aims to answer these questions. The research is based on previous work by McLaughlin & McGill (2017). They conducted a study among university students which seems to show that teaching critical thinking can have a statistically significant impact on students' acceptance of pseudo-scientific claims. They compared a group of students that attended a course on critical thinking and pseudo-scientific theories with a control group of students who attended a course on general philosophy of science, using the same questionnaire containing the pseudo-scientific claims. The questionnaire was administered at the onset of the semester (along with a Pew Research Center Science Knowledge Quiz), and then again at the end of the semester. While there was no significant change in the degree of belief in pseudo-scientific claims in the control group, the experimental group showed a statistically significant decrease in belief in pseudo-scientific claims.
In the first phase of our research, we conducted a study similar to that of McLaughlin & McGill, though we were not able to replicate their results: there was no significant change in belief in pseudo-scientific claims among the study's participants. This, in our opinion, is due to imperfections and flaws in both our study and that of McLaughlin & McGill.
In this paper, we would like to present our research along with the results obtained during its first phase. We will also discuss the shortcomings and limitations of our research and of the research it is based on. Finally, we would like to present and discuss plans for the next phase of our research into the teaching of critical thinking and its transfer to cases focusing on the humanities and the sciences.
McLaughlin, A.C. & McGill, A.E. (2017): Explicitly Teaching Critical Thinking Skills in a History Course. Science & Education 26(1–2), 93–105.
Adam, A. & Manson, T. (2014): Using a Pseudoscience Activity to Teach Critical Thinking. Teaching of Psychology 41(2), 130–134.
Tobacyk, J. (2004): A revised paranormal belief scale. International Journal of Transpersonal Studies 23, 94–98.
A plurality of methods in the philosophy of science: how is that possible?
ABSTRACT. In this talk, the place of logical and computational methods in the philosophy of science is reviewed in connection with the emergence of the cognitive sciences. While the interaction of several disciplines was a breeding ground for diverse methodologies, the challenge to methodologists of science to provide a general framework still remains.
As is well known, the distinction between the context of discovery and the context of justification, which served as a basis for logical positivism, left out of its research agenda -- especially from a formal perspective -- a very important part of scientific practice: that which includes issues related to the generation of new theories and scientific explanations, concept formation, and other aspects of discovery in science.
Some time later, in the 1970s, cognitive scientists revived some of the forgotten questions related to discovery in science within research topics such as mental patterns of discovery and via computational models, like those found for hypothesis formation. This helped to bring scientific discovery to the fore as a central problem in the philosophy of science. Further developments in computer science, cognitive science and logic itself provided a new set of tools of a logical and computational nature. The rather limited logic used by the Vienna Circle was now augmented by the discovery of quite new systems of logic, giving rise to a research context in which computer science, logic and philosophy of science interacted, each of them putting its own methodological tools at the service of the philosophy of science.
A further interaction arose in the 1980s, in this case between logic and history, giving rise to computational philosophy of science, a space in which history and computing met on a par. This created the possibility of a partial synthesis between the logical and the historical approaches, with a new computational element added to the mix.
The present setting, in which we have logical, historical and computational approaches to the philosophy of science, fosters the view that what we need is a balanced philosophy of science, one in which we take advantage of a variety of methodologies that together give a broad view of science. However, it is not at all clear how such a plurality of methods can be successfully integrated.
Liability Without Consciousness? The Case of a Robot
ABSTRACT. It is well known that the law punishes those who cause harm to someone else. However, the criteria for punishment become complicated when applied to non-human agents. When talking about non-human agency we primarily have in mind robot agents. Robot agency could reasonably be defended in terms of liability, the mental state of being liable. The roots of the problem lie in defining robots' ability to have mental states, but even when we put this particular problem aside, the question of liability seems to be of crucial importance when discussing harm-causing technology.
Since the question of liability requires special attention to the domain of mental states, we argue that it is crucial for the legal domain to define the legal personhood of a robot. We should try to answer the question: what constitutes a legal person in terms of non-human agency? If legal personhood is the ability to have legal rights and obligations, how can we ascribe these human qualities to a non-human agent? Are computing paradigms able to limit robots' ability to cause harm? If so, can legal personhood still be ascribed (bearing in mind that computing could limit free will)?
These questions are of the highest importance when considering whether we should punish a robot, and how such punishment could function given non-human personhood.
ABSTRACT. The design of explicit ethical agents [7] is faced with tough philosophical and practical challenges. We address in this work one of the biggest ones: How to explicitly represent ethical knowledge and use it to carry out complex reasoning with incomplete and inconsistent information in a scrutable and auditable fashion, i.e. interpretable for both humans and machines. We present a case study illustrating the utilization of higher-order automated reasoning for the representation and evaluation of a complex ethical argument, using a Dyadic Deontic Logic (DDL) [3] enhanced with a 2D-Semantics [5]. This logic (DDL) is immune to known paradoxes in deontic logic, in particular "contrary-to-duty" scenarios. Moreover, conditional obligations in DDL are of a defeasible and paraconsistent nature and thus lend themselves to reasoning with incomplete and inconsistent data.
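To give a sense of the target phenomenon, a standard contrary-to-duty scenario (Chisholm's textbook example, not drawn from the case study itself) can be rendered in dyadic notation, with $O(\psi\mid\varphi)$ read as "$\psi$ is obligatory given $\varphi$":
$$O(h\mid\top),\qquad O(t\mid h),\qquad O(\neg t\mid\neg h),\qquad \neg h,$$
where $h$ stands for "the agent helps" and $t$ for "the agent tells that help is coming". In standard monadic deontic logic, natural formalizations of these four sentences come out either inconsistent or logically dependent; a dyadic logic in the Carmo-Jones tradition [3] is designed to keep them jointly consistent and independent.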
Our case study consists of a rational argument originally presented by the philosopher Alan Gewirth [4], which aims at justifying an upper moral principle: the "Principle of Generic Consistency" (PGC). It states that any agent (by virtue of its self-understanding as an agent) is rationally committed to asserting that (i) it has rights to freedom and well-being, and that (ii) all other agents have those same rights. The argument used to derive the PGC is by no means trivial; it has stirred much controversy in legal and moral philosophy during the last decades and has also been discussed as an argument for the a priori necessity of human rights. Most interestingly, the PGC has lately been proposed by András Kornai [6] as a means to bound the impact of artificial general intelligence (AGI). Kornai's proposal draws on the PGC as the upper ethical principle which, assuming it can be reliably represented in a machine, will guarantee that an AGI respects basic human rights (in particular to freedom and well-being), on the assumption that it is able to recognize itself, as well as humans, as agents capable of acting voluntarily on self-chosen purposes.
We will show an extract of our work on the formal reconstruction of Gewirth's argument for the PGC using the proof assistant Isabelle/HOL (a formally verified, unabridged version is available in the Archive of Formal Proofs [8]). Independently of Kornai's claim, our work demonstrates, by way of example, that reasoning with ambitious ethical theories can now be successfully automated. In particular, we illustrate how it is possible to exploit the high expressiveness of classical higher-order logic as a metalanguage in order to embed the syntax and semantics of some object logic (e.g. DDL enhanced with quantification and contextual information), thus turning a higher-order prover into a universal reasoning engine [1] and allowing for seamlessly combining and reasoning about and within different logics (modal, deontic, epistemic, etc.). In this sense, our work provides evidence for the flexible deontic logic reasoning infrastructure proposed in [2].
References
1. C. Benzmüller. Universal (meta-)logical reasoning: Recent successes. Science of Computer Programming, 172:48-62, March 2019.
2. C. Benzmüller, X. Parent, and L. van der Torre. A deontic logic reasoning infrastructure. In F. Manea, R. G. Miller, and D. Nowotka, editors, 14th Conference on Computability in Europe, CiE 2018, Proceedings, volume 10936 of LNCS, pages 60-69. Springer, 2018.
3. J. Carmo and A. J. Jones. Deontic logic and contrary-to-duties. In Handbook of Philosophical Logic, pages 265-343. Springer, 2002.
4. A. Gewirth. Reason and morality. University of Chicago Press, 1981.
5. D. Kaplan. On the logic of demonstratives. Journal of Philosophical Logic, 8(1):81-98, 1979.
6. A. Kornai. Bounding the impact of AGI. Journal of Experimental & Theoretical Artificial Intelligence, 26(3):417-438, 2014.
7. J. Moor. Four kinds of ethical robots. Philosophy Now, 72:12-14, 2009.
8. XXXXXXX. Formalisation and evaluation of Alan Gewirth's proof for the principle of generic consistency in Isabelle/HOL. Archive of Formal Proofs, 2018.
Robin Kopecký (Charles University, The Karel Čapek Center for Values in Science and Technology, Czechia) Michaela Košová (Charles University, The Karel Čapek Center for Values in Science and Technology, Czechia)
How virtue signalling makes us better: Moral preferences in the selection of types of autonomous vehicles
ABSTRACT. In this paper, we present a study of moral judgement about autonomous vehicles (AVs). We employ a hypothetical choice between three types of "moral" software for a collision situation ("selfish", "altruistic", and "aversion to harm") in order to investigate moral judgement about this social dilemma in the Czech population. We aim to answer two research questions: whether public circumstances (i.e., the software choice being visible at first glance) make the personal choice more "altruistic", and what type of situation is most problematic for the "altruistic" choice (namely the public one, the personal one, or the choice made for a person's offspring).
We devised a web-based study, run between May and December 2017, and gathered 2769 respondents (1799 women, 970 men; age IQR: 25-32). The study was part of a research project preregistered at OSF before the start of data gathering.
The AV-focused block of the questionnaire opened with brief information on AVs and three proposed program solutions for the previously introduced "trolley-problem-like" collisions: "selfish" (with preference for the passengers in the car), "altruistic" (with preference for the highest number of saved lives), and "aversion to harm" (which will not actively change direction in a way that kills a pedestrian or a passenger, even though doing so would save more lives in total). Participants were asked the following four questions: 1. What type of software would you choose for your own car if nobody was able to find out about your choice ("secret/self")? 2. What type of software would you choose for your own car if your choice was visible at first glance ("public/self")? 3. What type of software would you choose for the car of your beloved child if nobody was able to find out ("child")? 4. What type of software would you vote for, in a secret ballot in parliament, if it were to become the only legal type of AV ("parliament")?
The results are as follows (independence was tested with chi-square tests). "Secret/self": "selfish" (45.2 %), "altruistic" (45.2 %), "aversion to harm" (9.6 %). "Public/self": "selfish" (30 %), "altruistic" (58.1 %), "aversion to harm" (11.8 %). In the public choice, people were less likely to choose the selfish software for their own car.
"Child": "selfish" (66.6 %), "altruistic" (27.9 %), "aversion to harm" (5.6 %). Vote in parliament for legalizing a single type: "selfish" (20.6 %), "altruistic" (66.9 %), "aversion to harm" (12.5 %). In the choice of a car for one's own child, people were more likely to choose the selfish software than in the choice for themselves.
Based on the results, we can conclude that public choice is likely to pressure consumers into accepting the altruistic solution, making visibility a reasonable and relatively cheap way to shift consumers towards more altruistic choices. Less favourably, the general public tends towards heightened sensitivity and selfishness in the case of one's own offspring, and a careful approach is needed to prevent moral panic.
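As a rough illustration of the chi-square test of independence reported above (a sketch only: the counts are reconstructed from the published percentages and N = 2769 rather than taken from the raw data, and the scipy call is just one possible choice of tool):

from scipy.stats import chi2_contingency

N = 2769
# Approximate counts per answer, reconstructed from the reported percentages.
# Order: selfish, altruistic, aversion to harm.
secret_self = [round(N * p) for p in (0.452, 0.452, 0.096)]
public_self = [round(N * p) for p in (0.300, 0.581, 0.118)]

# 2 x 3 contingency table: condition (secret vs. public) x chosen software type.
chi2, p_value, dof, _ = chi2_contingency([secret_self, public_self])
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p_value:.3g}")

A small p-value here corresponds to the reported finding that the distribution of choices differs between the secret and the public condition.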
15:45
Naira Danielyan (National Research University of Electronic Technology, Russia)
Prospect of NBICS Development and Application
ABSTRACT. The report considers the basic principles of the philosophical approach to NBICS convergence. Being a method of obtaining fundamental knowledge, NBICS technologies turn into an independent force influencing nature, society and man. One of the basic ideas of the nanotechnology concept is the opportunity to regard man as a constructor of the real world, e.g. by constructing human perception through nanochips or by programming virtual reality in the human brain. This might lead to new forms of consciousness and to the emergence of a modified objective reality. Developing and introducing nanotechnologies raises new scientific issues closely connected with the realization of possible projects such as, for instance, the complete description of thinking processes and of the brain's perception of reality, the slowing of aging processes, the rejuvenation of the human organism, the development of brain/brain or brain/computer interfaces, and the creation of robots and other devices possessing at least partial individuality. The penetration of technologies into human perception inevitably results in a hybrid reality, which eliminates any border between man's virtual personality and his physical embodiment. Spatial ideas about the physical limits of communication and identification also change, because human presence in the communication medium is cognized as virtual and real simultaneously. This turns out to be an entirely new phenomenon of human existence, founded in many ways on constructivist principles.
The active role of cognition is the most important aspect of the paradigm analyzed in the report as the methodology of this new type of technologies. Such an opportunity opens unlimited perspectives for individual and collective creative work. The author examines the dialogue between man and nature by means of these technologies. He demonstrates that they are directed at the solution of scientific issues that mostly have a constructive nature under the influence of the virtualization of human consciousness and social relations. Using the example of the 'instrumental rationality' paradigm, the report illustrates that since NBICS technologies include the Internet, they cannot be used in a vacuum. They are interconnected and imply a number of political, economic and social aspects which accompany them. As a result, they are becoming a characteristic of the public style of thinking. The emphasis is placed on the socio-cultural prospects of the new kind of technologies and their constructivist nature. Any cognition process turns into a social act, as norms and standards which are not tied to any particular originator, but are recognized by the whole community involved in the process, emerge among the representatives of different knowledge spheres during communication. From the scientific point of view, the consequences of NBICS application include both unprecedented progress in medicine, molecular biology, genetics and proteomics and the newest achievements in electronics, robotics and software. They will provide a chance to create artificial intelligence, to prolong life expectancy to an unprecedented extent, and to create new public forms and new social and psychic processes. At the same time, man does not cease to be rational under the influence of technologies. His cognition process is accompanied by creative and constructive human activity leading to effects that can reveal themselves, for instance, in the modification of human sensitivity by means of a significant transformation of physical capabilities. In turn, this may lead to irreversible consequences, because man himself, his body and consciousness, turns into an integral part of complex ecological, socio-cultural and socio-technical systems. That is why philosophical reflection on the ecological, social and cultural results of the introduction and application of NBICS technologies is becoming more and more topical. The report concludes that NBICS surpasses all previous technological achievements in its potential and socio-cultural effects.
Theoretical Virtues in Eighteenth-Century Debates on Animal Cognition
ABSTRACT. This paper discusses the role of the theoretical virtues (i) unification, (ii) simplicity, and (iii) scientific understanding in eighteenth-century debates on animal cognition. It describes the role that these virtues play in the construction of different theories of animal cognition and aims to establish the relative weight that these virtues were assigned. We construct a hierarchy of theoretical virtues for the biologists and philosophers Georges-Louis Leclerc de Buffon (1707-1788), Hermann Samuel Reimarus (1694-1768), Charles-Georges Le Roy (1723-1789), and Étienne Bonnot de Condillac (1714-1780). By discussing these virtues and the importance different authors assigned to them, we can determine how different theoretical virtues shaped and determined the theories articulated by Buffon, Reimarus, Le Roy and Condillac.
Theoretical virtues such as unification, simplicity and scientific understanding have received a lot of attention in the philosophical literature. An important question is how these different theoretical virtues relate and how they are supposed to be ranked. We can imagine questions such as the following: confronted with a choice between a simple theory and a theory that yields unified explanations, do we, other things being equal, prefer simple theories over theories that yield unified explanations? Or do we prefer theories that yield scientific understanding over simple theories? Answering these types of questions requires making a hierarchy of theoretical virtues. In this paper, I do not have the systematic aim of constructing such a hierarchy. Rather, I have the historical aim of showing that eighteenth-century debates on animal cognition can be profitably understood if we analyze the relative weight that different authors assigned to different theoretical virtues.
I will show that within eighteenth-century debates on animal cognition we can distinguish three core positions: (a) Buffon’s mechanism, (b) Reimarus’ theory of instinct, and (c) Le Roy’s and Condillac’s sensationalist position which assigns intelligence to animals. I show that these positions are partly shaped by the theoretical virtues that these authors adopted. Thus, Buffon’s mechanism is shaped by his acceptance of unification as a theoretical virtue, Reimarus’ theory of instinct is shaped by his adoption of a particular virtue of simplicity, whereas Le Roy’s and Condillac’s sensationalist position is shaped by their acceptance of the theoretical virtue of scientific understanding.
I will further argue, however, that the way in which Buffon, Reimarus, Le Roy and Condillac understand different theoretical virtues is also shaped by their theoretical commitments. Thus, for example, Buffon’s mechanism influences the way he conceives of unification. Although the appeal to different theoretical virtues thus partly explains the theoretical position articulated by Buffon, Reimarus, Le Roy and Condillac, the converse is also true: their theoretical positions shaped the way in which they conceived of different theoretical virtues. Finally, I show that the different theories on animal cognition could often appeal to the same theoretical virtues for support. This means that the theoretical virtues are sometimes incapable of necessitating a choice between different theories.
15:45
Martin Wasmer (Leibniz University Hannover, Germany)
Bridging between biology and law: European GMO law as a case for applied philosophy of science
ABSTRACT. Laws regulating the permissibility of producing genetically modified organisms (GMOs) and releasing them into the environment address a multitude of normatively loaded issues and frequently lead to heated public debate. Drafting new legislation, as well as interpreting and operationalizing current GMO law, draws on knowledge from both (applied) biology and the study of law.
The European Directive 2001/18/EC regulates the deliberate release of GMOs, such as genetically modified crops in agriculture. Its legal definition of a GMO depends on the interpretation of the vaguely formulated phrase "altered in a way that does not occur naturally" (Bobek, 2018). Yet this phrase decides which organisms do or do not fall under the regulatory obligations of European GMO law, with far-reaching implications for what is planted in our fields and served on our plates.
I provide a framework for interpreting the European GMO definition along an outcome-based approach, by identifying two main issues that challenge its straightforward application to organisms bred by new breeding techniques:
(1) First, three conflicting concepts of naturalness can be distinguished (following Siipi, 2008; Siipi & Ahteensuu, 2016), and the decision between them is necessarily based on values.
(2) Second, a theory of biological modalities is required for the operationalization of natural possibilities (following Huber, 2017).
Once these conceptual issues are solved, the GMO definition can be operationalized for regulatory practice.
This case study of the GMO definition in European law shows how history and philosophy of science can contribute to bridging across disciplines. Legal methods alone do not suffice to interpret the GMO definition in the context of new technologies, because in the case of (radically) new scientific developments there are no legal precedents and no comparable instances in the legal body. For this reason, lawyers call on experts from biology and biotechnology to draw on scientific ontologies, emphasizing the importance of science for policymaking (cf. Douglas, 2009). On the other hand, methods from biology alone do not suffice to operationalize the GMO definition either, since ontological choices depend not only on empirical evidence but also on value judgments (Ludwig 2014, 2016). Instead, HPS is the go-to discipline for the clarification of conceptual issues in multidisciplinary normative contexts.
References:
Bobek, M. (2018). Reference for a preliminary ruling in the Case C-528/16. No. ECLI:EU:C:2018:20
Douglas, H. (2009). Science, policy, and the value-free ideal. University of Pittsburgh Press.
Huber, M. (2017). Biological modalities (PhD Thesis). University of Geneva.
Kahrmann, J. et al. (2017). Aged GMO Legislation Meets New Genome Editing Techniques. Zeitschrift für Europäisches Umwelt- und Planungsrecht, 15(2), 176–182.
Ludwig, D. (2014). Disagreement in Scientific Ontologies. Journal for General Philosophy of Science, 45(1), 119–131.
Ludwig, D. (2016). Ontological Choices and the Value-Free Ideal. Erkenntnis, 81(6), 1253–1272.
Siipi, H. (2008). Dimensions of Naturalness. Ethics & the Environment, 13(1), 71–103.
Siipi, H., & Ahteensuu, M. (2016). Moral Relevance of Range and Naturalness in Assisted Migration. Environmental Values, 25(4), 465–483.
Lukáš Bielik (Comenius University in Bratislava, Slovakia)
Abductive Inference and Selection Principles
ABSTRACT. Abductive inference appears in various contexts of cognitive processes. Usually, the two prominent uses of abduction with respect to the explanatory hypotheses in question are distinguished: a) the context of discovery (or hypothesis-generation/formulation); and, b) the context of justification (or evidential support). Notwithstanding the other uses of abduction (e.g. computational tasks), I pay close attention to an overlooked context of abductive inference: c) the context of explanatory selection.
I propose to distinguish these three kinds of explanatory inference explicitly by using the notion of a selection principle. A selection principle is optimally construed (or modelled) as a partial function defined on a (non-empty) set of (explanatory) hypotheses with respect to an explicit piece of evidence E and a background B comprising doxastic, epistemic and axiological items. If a given selection principle operates on an admissible n-tuple of arguments, it yields at most one explanatory hypothesis (or its content-part) as a function value. With the notion of a selection principle at our disposal, it is possible to make the difference among the three contexts of the use of abduction completely explicit. In particular, I argue for distinguishing three kinds of selection principles operating in these contexts. These kinds of principles differ both with respect to the arguments they operate on and with respect to the function values they yield.
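One way to make this explicit (the notation below is illustrative, not the author's own) is to write a selection principle relative to evidence $E$ and background $B$ as a partial map
$$\sigma_{E,B}\colon \mathcal{P}(\mathcal{H})\setminus\{\varnothing\}\ \rightharpoonup\ \mathcal{H},\qquad \sigma_{E,B}(\{H_1,\dots,H_n\})=H_i\ \text{(when defined)},$$
with distinct principles $\sigma^{\rm disc}$, $\sigma^{\rm just}$ and $\sigma^{\rm sel}$ for the contexts of discovery, justification and explanatory selection, differing in the arguments they admit and in the values they return.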
Moreover, I provide explicit reasons for identifying inference to the best explanation (henceforth “IBE”) only with abductive inference in a justificatory context of reasoning. As a consequence, I show that, at least, some widely-discussed objections against IBE in the literature (such as van Fraassen’s (1989) argument from a bad lot) are not relevant to other forms of abductive inference in the context of discovery or the context of explanatory selection. Hence, such a clarification of different selection principles underlying different contexts of abduction appears to be fruitful for re-considering the question of which traditional objections against IBE are also objections against abductive inference in general.
References
Aliseda, A. (2006): Abductive Reasoning. Springer.
Douven, I. (2002): Testing Inference to the Best Explanation. Synthese 130, No. 3, 355-377.
Douven, I. (2011): Abduction. In: Zalta, E. N. (ed.): The Stanford Encyclopedia of Philosophy. Available at: https://plato.stanford.edu/entries/abduction.
Harman, G. (1965): Inference to the Best Explanation. Philosophical Review 74, No. 1, 88-95.
Josephson, J. & S. Josephson (eds.) (1996): Abductive Inference. Computation, Philosophy, Technology.
Lipton, P. (2004): Inference to the Best Explanation. London: Routledge.
McCain, K. & T. Poston (eds.) (2017): Best Explanations. New Essays on Inference to the Best Explanation. Oxford: Oxford University Press.
Niiniluoto, I. (1999): Defending Abduction. Philosophy of Science 66, S436-S451.
Okasha, S. (2000): Van Fraassen’s Critique of Inference to the Best Explanation. Studies in History and Philosophy of Science 31, 691-710.
Poston, T. (2014): Reason and Explanation. Basingstoke: Palgrave Macmillan.
Psillos, S. (2004): Inference to the Best Explanation and Bayesianism. In: F. Stadler (ed.): Induction and Deduction in the Sciences. Dordrecht: Kluwer, 83-91.
Schurz, G. (2008): Patterns of Abduction. Synthese 164, 201-234.
van Fraassen, B. (1989): Laws and Symmetry. Oxford: Oxford University Press.
ABSTRACT. In this talk I am going to argue that, although we do conceptual analysis when we do philosophy, the world plays a central role in the constitution of our philosophical concepts, statements and theories. In particular, their meaning is partially determined by the way the world is. I will object to what the advocates of conceptual analysis, specifically the members of the Canberra Plan (Chalmers and Jackson 2001; Jackson 1998) – who, arguably, have advanced the most influential anti-naturalistic metaphilosophical view of our day – consider to be the conceptual elements of philosophical practice: the two steps of philosophical analysis and the deductive implication of the folk vocabulary by the scientific one. I will advance three main problems for the purely conceptual and aprioristic status of these components:
(P1) Science also does conceptual analysis (Jackson, 1998; Larkin, McDermott, & Simon, 1980; Tallant, 2013).
(P2) Philosophy also depends on the world, which is known by us through observation and experimentation; specifically, our implicit folk theories depend on the world (Arthur, 1993; Chassy & Gobet, 2009; Goldman, 2010).
(P3) The deduction of the folk and philosophical vocabulary from the vocabulary of the sciences presupposes factual and a posteriori elements (Williamson, 2013).
The main conclusion is that even if we agree that philosophy does conceptual analysis, empirical evidence has shown us that philosophy still depends on the way the world is. So, conceptual analysis doesn’t differentiate philosophy –neither in method, nor in subject matter– from scientific practice in the way that the conceptual analysts wanted it to. The world partially determines, a posteriori, the nature of the two-step methodology of conceptual analysis. Therefore, the possible identification of philosophy with conceptual analysis cannot establish a difference in kind between philosophy and science, be it semantic or epistemic. This leaves us with the problem of explaining why these activities seem so different. I think that this question can be seen as a matter of degree, but this will be the subject for another conference.
References
Arthur, S. R. (1993). Implicit Knowledge and Tacit Knowledge. New York and Oxford: Oxford University Press.
Chalmers, D., & Jackson, F. (2001). Conceptual Analysis and Reductive Explanation. Philosophical Review, 153-226.
Chassy, P., & Gobet, F. (2009). Expertise and Intuition: A Tale of Three Theories. Minds & Machines, 19, 151-180.
Goldman, A. (2010). Philosophical Naturalism and Intuitional Methodology. Proceedings and Addresses of the American Philosophical Association, 84, 115-150.
Jackson, F. (1998). From Metaphysics to Ethics. Oxford: Clarendon Press.
Larkin, J. H., McDermott, J. S., & Simon, H. A. (1980). Expert and novice performance in solving physics problems. Science, 208, 1335-1342.
Tallant, J. (2013). Intuitions in physics. Synthese, 190, 2959-2980.
Williamson, T. (2013). How Deep is the Distinction between A Priori and A Posteriori Knowledge? In A. C. Thurow (Ed.), The A Priori in Philosophy (pp. 291-312). Oxford: Oxford University Press.
Using norms to justify theories within definitions of scientific concepts
ABSTRACT. This paper is about scientific concepts that are often thought to correspond to categories in nature. These range from widely known concepts such as SPECIES, to more specialized concepts like 2,4-DIHYDROXYBUTYRIC ACID METABOLIC PATHWAY, thought to correspond to a category to which certain synthetic metabolic pathways belong.
A typical definition of such a concept summarizes or otherwise suggests a theory about the conditions that constitute belonging to the corresponding category. So these are theories that make constitution claims.
For several decades most philosophical discussions about such concepts have been metaphysical. This paper instead helps defend an epistemic thesis:
Normative conventionalism: If an agent is justified in endorsing the constitution claims from a definition of the concept C, then this stems at least in part from the constitution claims being in accord with norms about how agents ought to categorize things, and this contribution to justification is independent of any degree of correspondence between the constitution claims and supposed modal facts. (cf. Thomasson 2013)
To allow for detailed (rather than complete) defense, the paper restricts its focus to one concept, the persistent BIOLOGICAL SPECIES CONCEPT (BSC). The paper first uncovers how the BSC's typical definition (e.g., Mayr 2000) is more profoundly ambiguous than others have noted. From the definition, one can infer several extensionally non-equivalent and complex sets of constitution claims. Next the paper interprets the practices of relevant species biologists (e.g., Coyne and Orr 2004) as implicitly appealing to what have been called classificatory norms (Slater 2017) when selecting between competing BSC constitution claims. Finally, the paper argues this is wise because modal facts cannot alone tell biologists which constitution claims to endorse, and classificatory norms should help take up that slack. The conventionalism thus supported is interesting because it differs from others. It is about how to specify constitution claims for a given concept, rather than about selecting between multiple concepts (Dupré 1993; Kitcher 2001) or about when constitutive conditions are satisfied (Barker and Velasco 2013).
Barker, Matthew, and Joel Velasco. 2013. “Deep Conventionalism about Evolutionary Groups.” Philosophy of Science 80:971–82.
Coyne, Jerry, and H. A. Orr. 2004. Speciation. Sunderland, MA: Sinauer.
Dupré, John. 1993. The Disorder of Things: Metaphysical Foundations of the Disunity of Science. Cambridge, MA: Harvard University Press.
Kitcher, Philip. 2001. Science, Truth, and Democracy. Oxford University Press.
Mayr, Ernst. 2000. “The Biological Species Concept.” In Species Concepts and Phylogenetic Theory: A Debate, edited by Quentin Wheeler and Rudolf Meier, 17–29. New York: Columbia University Press.
Millstein, Roberta L. 2010. “The Concepts of Population and Metapopulation in Evolutionary Biology and Ecology.” In Evolution Since Darwin: The First 150 Years, edited by M. Bell, D. Futuyma, W. Eanes, and J. Levinton. Sunderland, MA: Sinauer.
Slater, Matthew H. 2017. “Pluto and the Platypus: An Odd Ball and an Odd Duck - On Classificatory Norms.” Studies in History and Philosophy of Science 61:1–10.
Thomasson, Amie. 2013. “Norms and Necessity.” The Southern Journal of Philosophy 51:143–60.
Guillermo Badia (The University of Queensland, Australia) Carles Noguera (Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, Czechia)
A generalized omitting type theorem in mathematical fuzzy logic
ABSTRACT. Mathematical fuzzy logic (MFL) studies graded logics as particular kinds of many-valued inference systems in several formalisms, including first-order predicate languages. Models of such first-order graded logics are variations of classical structures in which predicates are evaluated over wide classes of algebras of truth degrees, beyond the classical two-valued Boolean algebra. Such models are relevant for recent computer science developments in which they are studied as weighted structures.
The study of such models is based on the corresponding strong completeness theorems [CN,HN] and has already addressed several crucial topics such as: characterization of completeness properties w.r.t. models based on particular classes of algebras [CEGGMN], models of logics with evaluated syntax [NPM,MN], study of mappings and diagrams [D1], ultraproduct constructions [D2], characterization of elementary equivalence in terms of elementary mappings [DE], characterization of elementary classes as those closed under elementary equivalence and ultraproducts [D3], Löwenheim-Skolem theorems [DGN1], and back-and-forth systems for elementary equivalence [DGN2]. A related stream of research is that of continuous model theory [CK,C].
Another important item in the classical agenda is that of omitting types, that is, the construction of models (of a given theory) in which certain properties of elements are never satisfied. In continuous model theory the construction of models omitting many types is well known [YBWU], but in MFL it has only been addressed in particular settings [CD,MN]. The goal of the talk is to establish a new omitting-types theorem, generalizing the previous results to the wider notion of tableaux (pairs of sets of formulas, which codify the properties that are meant to be preserved and those that will be falsified).
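For orientation, the classical notion being generalized can be stated as follows (a point of reference only; the talk's graded setting may differ in detail): a type is a set $p(x)$ of formulas, and a model $\mathfrak{M}$ omits $p$ when
$$\text{for every } a\in M \text{ there is some } \varphi(x)\in p \text{ such that } \mathfrak{M}\not\models\varphi[a];$$
a tableau replaces the single set $p$ by a pair $(T,U)$ of sets of formulas, the intended reading being that the formulas in $T$ are to be preserved (satisfied) while those in $U$ are to be falsified.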
References:
[YBWU] I. Ben Yaacov, A. Berenstein, C. Ward Henson, and A. Usvyatsov. Model theory for metric structures, (2007), URL:https://faculty.math.illinois.edu/~henson/cfo/mtfms.pdf
[C] X. Caicedo. Maximality of continuous logic, in Beyond first order model theory, Chapman & Hall/CRC Monographs and Research Notes in Mathematics (2017).
[CK] C.C. Chang and H. J. Keisler. Continuous Model Theory, Annals of Mathematical Studies, vol. 58, Princeton University Press, Princeton (1966).
[CD] P. Cintula and D. Diaconescu. Omitting Types Theorem for Fuzzy Logics. To appear in IEEE Transactions on Fuzzy Systems.
[CEGGMN] P. Cintula, F. Esteva, J. Gispert, L. Godo, F. Montagna, and C. Noguera. Distinguished Algebraic Semantics For T-Norm Based Fuzzy Logics: Methods and Algebraic Equivalencies, Annals of Pure and Applied Logic 160(1):53-81 (2009).
[CN] P. Cintula and C. Noguera. A Henkin-style proof of completeness for first-order algebraizable logics. Journal of Symbolic Logic 80:341-358 (2015).
[D1] P. Dellunde. Preserving mappings in fuzzy predicate logics. Journal of Logic and Computation 22(6):1367-1389 (2011).
[D2] P. Dellunde. Revisiting ultraproducts in fuzzy predicate logics, Journal of Multiple-Valued Logic and Soft Computing 19(1):95-108 (2012).
[D3] P. Dellunde. Applications of ultraproducts: from compactness to fuzzy elementary classes. Logic Journal of the IGPL 22(1):166-180 (2014).
[DE] P. Dellunde and Francesc Esteva. On elementary equivalence in fuzzy predicate logics. Archive for Mathematical Logic 52:1-17 (2013).
[DGN1] P. Dellunde, À. García-Cerdaña, and C. Noguera. Löwenheim-Skolem theorems for non-classical first-order algebraizable logics. Logic Journal of the IGPL 24(3):321-345 (2016).
[DGN2] P. Dellunde, À. García-Cerdaña, and C. Noguera. Back-and-forth systems for first-order fuzzy logics. Fuzzy Sets and Systems 345:83-98 (2018).
[HN] P. Hájek and P. Cintula. On theories and models in fuzzy predicate logics. Journal of Symbolic Logic 71(3):863-880 (2006).
[MN] P. Murinová and V. Novák. Omitting Types in Fuzzy Logic with Evaluated Syntax, Mathematical Logic Quarterly 52 (3): 259-268 (2006).
[NPM] V. Novák, I. Perfilieva, and J. Močkoř. Mathematical Principles of Fuzzy Logic, Kluwer Dordrecht (2000).
15:45
Inessa Pavlyuk (Novosibirsk State Pedagogical University, Russia) Sergey Sudoplatov (Sobolev Institute of Mathematics, Novosibirsk State Technical University, Novosibirsk State University, Russia)
On ranks for families of theories of abelian groups
ABSTRACT. We continue to study families of theories of abelian groups \cite{PS18}, characterizing $e$-minimal subfamilies \cite{rsPS18} by Szmielew invariants $\alpha_{p,n}$, $\beta_p$, $\gamma_p$, $\varepsilon$ \cite{ErPa, EkFi}, where $p\in P$, $P$ is the set of all prime numbers and $n\in\omega\setminus\{0\}$, as well as describing possibilities for the rank ${\rm RS}$ \cite{rsPS18}.
We denote by $\mathcal{T}_A$ the family of all theories of abelian groups.
\begin{theorem}\label{th1_PS} An infinite family $\mathcal{T}\subseteq\mathcal{T}_A$ is $e$-minimal if and only if for any upper bound $\xi\geq m$ or lower bound $\xi\leq m$, for $m\in\omega$, of a Szmielew invariant $$\xi\in\{\alpha_{p,n}\mid p\in P,\,n\in\omega\setminus\{0\}\}\cup\{\beta_p\mid p\in P\}\cup\{\gamma_p\mid p\in P\},$$ there are finitely many theories in $\mathcal{T}$ satisfying this bound. Having finitely many theories with $\xi\geq m$, there are infinitely many theories in $\mathcal{T}$ with a fixed value $\alpha_{p,s}$.
\end{theorem}
\begin{theorem}\label{th2_PS} For any theory $T$ of an abelian group $A$
the following conditions are equivalent:
$(1)$ $T$ is approximated by some family of theories;
$(2)$ $T$ is approximated by some $e$-minimal family;
$(3)$ $A$ is infinite.
\end{theorem}
Let $\mathcal{T}$ be a family of first-order complete theories in
a language $\Sigma$. For a set $\Phi$ of $\Sigma$-sentences we put
$\mathcal{T}_\Phi=\{T\in\mathcal{T}\mid T\models\Phi\}$. A family
of the form $\mathcal{T}_\Phi$ is called {\em $d$-definable} (in
$\mathcal{T}$). If $\Phi$ is a singleton $\{\varphi\}$ then
$\mathcal{T}_\varphi=\mathcal{T}_\Phi$ is called {\em
$s$-definable}.
\begin{theorem}\label{th3_PS} Let $\alpha$ be a countable ordinal,
$n\in\omega\setminus\{0\}$. Then there is a $d$-definable
subfamily $(\mathcal{T}_A)_\Phi$ such that ${\rm
RS}((\mathcal{T}_A)_\Phi)=\alpha$ and ${\rm
ds}((\mathcal{T}_A)_\Phi)=n$.
\end{theorem}
This research was partially supported by the Committee of Science of the Ministry of Education and Science of the Republic of Kazakhstan (Grant No. AP05132546) and by the Russian Foundation for Basic Research (Project No. 17-01-00531-a).
\begin{thebibliography}{10}
\bibitem{PS18} {\scshape In.I.~Pavlyuk, S.V.~Sudoplatov}, {\itshape Families of theories of abelian groups
and their closures}, {\bfseries\itshape Bulletin of Karaganda
University. Series ``Mathematics''}, vol.~90 (2018).
\bibitem{rsPS18} {\scshape S.V.~Sudoplatov}, {\itshape On ranks for families of theories
and their spectra}, {\bfseries\itshape International Conference
``Mal'tsev Meeting'', November 19--22, 2018, Collection of
Abstracts}, Novosibirsk: Sobolev Institute of Mathematics,
Novosibirsk State University, 2018, p.~216.
\bibitem{ErPa} {\scshape Yu.L.~Ershov, E.A.~Palyutin}, {\bfseries\itshape Mathematical
logic}, FIZMATLIT, Moscow, 2011.
\bibitem{EkFi} {\scshape P.C.~Eklof, E.R.~Fischer}, {\itshape The
elementary theory of abelian groups}, {\bfseries\itshape Annals of
Mathematical Logic}, vol.~4 (1972), pp.~115--171.
How Should We Make Intelligible the Coexistence of the Different Logics? An Attempt Based on a Modal Semantic Point of View
ABSTRACT. Recently, logicians and computer scientists have increasingly tended to treat different logics -- the classical, the intermediate, the intuitionistic and still weaker logics (notably the linear logics) -- as equally justifiable, legitimate objects of study. These research trends are obviously to be welcomed in view of the wealth of theoretically fruitful results they bring. We should admit, however, that we are still quite in the dark about how to make philosophically intelligible, and how to justify, the coexistence of those different logics, in particular of the two representative logics, i.e. classical logic (henceforth CL) and intuitionistic logic (henceforth IL).
With good reason, logicians and computer scientists usually prefer to avoid being drawn into philosophical debates and tend to take pragmatic attitudes to the problem. What about philosophers, then? They seem to be rather bigoted. At one extreme, ordinary analytic philosophers vehemently stick to CL (and its modal extensions) and refuse to take IL and the still weaker logics into serious philosophical consideration. At the other extreme, a few radical philosophers such as Michael Dummett baldly claim that CL should be abandoned on account of the unintelligibility of its fundamental semantic principle, i.e. the principle of bivalence, and that IL should instead be adopted as the uniquely justifiable genuine logic.
On the one hand, I agree with Dummett that IL has at least one prominent virtue that CL definitely lacks: the constructive character of its inference principles, which, one might say, makes it theoretically more transparent and philosophically more coherent than CL, whose characteristic inference principle, i.e. the classical reductio, makes it irremediably non-constructive. On the other hand, however, it is equally evident that CL plays a pivotal role in the development of ordinary classical mathematics, in particular of its theoretical foundations, i.e. set theory, and that these theoretical contributions of classical logic should be squarely made sense of and illuminatingly accounted for, rather than simply neglected or denounced as unintelligible.
I propose to start by setting up a common "platform" on which to locate the different logics and to determine their respective properties and mutual relationships precisely. It is well known that the translation introduced by Gödel in [G1933] (henceforth the G-translation) from the language of IL to that of the logic S4, a modal extension of CL, is sound and faithful: a formula φ is provable in IL if and only if its G-translation is provable in S4. Roughly speaking, one may understand the situation thus: IL has (something very akin to) an isomorphic image in (some sublanguage of) S4. In this sense S4 is called "the modal companion" of IL. The G-translation also yields a modal companion for each of the super-intuitionistic logics (i.e. the logics stronger than IL). For example, it assigns to CL the modal logic S5 as its modal companion. Moreover, various weaker logics can be assigned their respective modal companions by the translation (or a slight variant of it). Thus, for the moment we can conclude that we have a "platform" (in the above sense of the word) for the different logics within the world of the (classical) modal logics.
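For concreteness, one standard formulation of the G-translation $\varphi\mapsto\varphi^{\Box}$ (presentations differ slightly in which clauses are boxed) is:
$$p^{\Box}=\Box p,\quad (\varphi\wedge\psi)^{\Box}=\varphi^{\Box}\wedge\psi^{\Box},\quad (\varphi\vee\psi)^{\Box}=\varphi^{\Box}\vee\psi^{\Box},\quad (\varphi\to\psi)^{\Box}=\Box(\varphi^{\Box}\to\psi^{\Box}),\quad (\neg\varphi)^{\Box}=\Box\neg\varphi^{\Box},$$
and the soundness and faithfulness result then reads: φ is provable in IL if and only if $\varphi^{\Box}$ is provable in S4.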
Note that this is not the end of the matter but the beginning. Why can the modal languages play such a prominent role? Undoubtedly the key to the answer lies in the notion of modality (in particular that of necessity), but the mere notion of necessity is rather void of content and hardly provides a sufficient explanation on its own. At this point, the Kripke semantics (the relational semantics) of modal languages is helpful in that it accounts for the modal notions in non-modal, relational terms. But then it becomes crucial how to conceive of the two key notions of that semantics: the notion of a possible state and that of the accessibility relation. I am going to propose a new account of these notions based on a proof-theoretical viewpoint.
15:45
Diego Fernandes (Universidade Federal de Goiás, Brazil)
On the elucidation of the concept of relative expressive power among logics
ABSTRACT. The concept of expressive power, or strength, is very relevant to and frequently used in comparisons of logics. Despite its ubiquity, it still lacks a conceptual elucidation, and this is manifested in two ways. On the one hand, it is not uncommon to see the notion of expressiveness employed in comparisons of logics on imprecise and varying grounds. This creates confusion in the literature and makes it harder to build upon others' results. On the other hand, when care is taken to specify that the formal criterion of expressiveness being used is a certain E, there is generally no further comment on it, e.g. intuitive motivations or why E was chosen in the first place (e.g. in [KW99], [Koo07], [AFFM11] and [Kui14]). This gives the impression that the concept of expressiveness has been thoroughly elucidated and that its clearest and best formal counterpart is E. This is also misleading, since there are prima facie plausible but conflicting formal ways to compare the expressiveness of logics. This work is intended to tackle these issues.
Formal comparisons of expressiveness between logics can be traced back to [Lin69], where a certain formal criterion for expressiveness (to be referred to as EC) is given. No conceptual discussion or motivation is offered for EC, perhaps because it issues directly from Lindström's concept of a logical system (a collection of elementary classes). In [Ebb85] there is a very brief discussion in which a pair of intuitions about expressiveness is given, and it is argued that one is captured by EC and the other by a new criterion EQ. Shapiro questions the adequacy of EC in [Sha91] due to its strictness and gives two broader criteria (PC and RPC). One motivation for the latter is that, as opposed to EC, they allow the introduction of new non-logical symbols in expressiveness comparisons. For example, in some logics the concept of infinitely many is embedded in a logical constant, whereas in others it must be "constructed" with the help of non-logical symbols. Thus, PC and RPC also take into account the latent expressive power of a logic, so to speak.
Up to now, four formal criteria of expressiveness have been mentioned. When comparing logics, EC, PC and RPC can all be seen as mapping formulas of the source logic to formulas of the target logic, with respective restrictions on the allowed mappings. This might be seen as too restrictive, as there are cases where a concept can be expressed in a logic only by using a (possibly infinite) set of formulas (e.g. the concept of infinity in first-order logic). If we allow formulas of one logic to be mapped to (possibly infinite) sets of formulas of the target logic, we get three new criteria for expressive power: EC-D, PC-D and RPC-D.
Thus we have at least seven formal criteria for expressiveness, but in order to choose between them we need to select some intuitions about what it can mean for a logic to be more expressive than another. It will be seen that the seven criteria can be divided into two groups, each capturing some basic intuition regarding expressiveness. In order to clarify what we mean by "the logic L' is more expressive than the logic L", we first have to select some basic intuitions regarding expressive power and then choose among the rival formal criteria intended to capture them. To this end, some adequacy criteria will be proposed, and the material adequacy of the formal criteria will be assessed.
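As a reference point, the Lindström-style criterion EC can be stated roughly as follows (a paraphrase, not a quotation from [Lin69]):
$$\mathcal{L}\ \leq_{\rm EC}\ \mathcal{L}' \iff \text{for every } \mathcal{L}\text{-sentence } \varphi \text{ there is an } \mathcal{L}'\text{-sentence } \varphi' \text{ with } {\rm Mod}(\varphi)={\rm Mod}(\varphi');$$
PC and RPC relax this by allowing $\varphi'$ to use additional non-logical symbols (and, for RPC, relativization to a definable subdomain), while the "-D" variants allow $\varphi$ to be matched by a set of $\mathcal{L}'$-sentences rather than by a single sentence.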
References
[AFFM11] Carlos Areces, Diego Figueira, Santiago Figueira, and Sergio Mera. The expressive power of memory logics. The Review of Symbolic Logic, 4(2):290-318, 2011.
[Ebb85] H.D. Ebbinghaus. Extended logics: The general framework. In Jon Barwise and Solomon Feferman, editors, Model-theoretic logics, Perspectives in mathematical logic. Springer-Verlag, 1985.
[Koo07] Barteld Kooi. Expressivity and completeness for public update logics via reduction axioms. Journal of Applied Non-Classical Logics, 17(2):231-253, 2007.
[Kui14] Louwe B. Kuijer. The expressivity of factual change in dynamic epistemic logic. The Review of Symbolic Logic, 7(2):208-221, 2014.
[KW99] Marcus Kracht and Frank Wolter. Normal monomodal logics can simulate all others. Journal of Symbolic Logic, 64(1):99-138, 1999.
[Lin69] P. Lindström. On extensions of elementary logic. Theoria, 35(1):1-11, 1969.
[Sha91] S. Shapiro. Foundations without Foundationalism: A Case for Second-Order Logic. Oxford Logic Guides. Clarendon Press, 1991.
Sara Ipakchi (Department of Philosophy, Heinrich Heine University, Germany)
Even logical truths are falsifiable.
ABSTRACT. A special group of sentences, namely logically true sentences like the Law of Excluded Middle or the Law of Non-Contradiction, are interesting to most philosophers because of -- among other things -- their infallibility. Moreover, it seems that their truth value is so obvious that it is not necessary to justify them. These properties lead some philosophers to use them as trustworthy sources from which to construct philosophical theories, or even as direct justifications of philosophical theories. But are they really infallible, and are they really self-evident?
In this paper, I want to answer both of these questions with no. For the infallibility part, I will argue that only if a sentence is analytic, necessary or a priori does it make sense to speak about its infallibility. In other words, if a sentence is neither analytic, nor necessary, nor a priori, then it is not infallible. With some examples, I will show that a logically true sentence like the Law of Excluded Middle -- as we use it in philosophy -- has none of these properties and therefore is not infallible.
In the second part -- the justifiability part -- I will argue that there is a direct connection between sentences in need of justification and falsifiable sentences. Since logical truths are neither analytic, nor necessary, nor a priori sentences, and are therefore falsifiable, they are not exempt from justification either. In other words, their truth value is not always assessable, is context dependent, and often cannot be determined by rational and/or transcendental methods alone. Thus, logical truths need justification.
15:45
Hussien Elzohary (Head of Academic Studies & Events Section, Manuscripts Center, Academic Research Sector, Bibliotheca Alexandrina, Egypt)
The Influence of the Late School of Alexandria on the Origin and Development of Logic in the Muslim World
ABSTRACT. In order to advance the discussion surrounding the origins and background of Arabic logic, we have to explore the Greek logical traditions and the logical introductions to Aristotle's works compiled at the end of the Roman Empire. The study demonstrates that the view of logic adopted by many Greek thinkers and transmitted into Arabic through translations of the commentaries on Aristotle had a great impact on the genesis of Islamic theology and philosophy. A number of late philosophers are explored with a view to demonstrating this point.
By about 900, almost all of Aristotle's logical works had been translated into Arabic and were subject to intensive study. We have texts from that time, coming in particular from the Baghdad school of philosophy. The school's most famous logician was Al-Farabi (d. 950), who wrote a number of introductory treatises on logic as well as commentaries on the books of the Organon.
The research aims at studying the influence of the late Alexandrian school of philosophy of the 6th century AD on the appearance and development of Greek logic in the Muslim world, the adaptation of its methodologies by Islamic thinkers, and its impact on Muslim philosophical thought. The late Alexandrian school has been underestimated by many scholars, who regard its production at the end of the Classical Age as mere interpretation of previous writings, limiting its achievement to the preservation of the ancient logical and philosophical heritage. The research reviews the leading figures of the Alexandrian school and its production of logical commentaries. It also traces the transmission of its heritage to the Islamic world through direct translations from Greek, first into Syriac and then into Arabic. Finally, it highlights the impact of the Alexandrian commentaries on the Muslim reception of Plato and Aristotle, as well as its logical teaching methodology, which began with the study of Aristotle's Categories as an introduction to understanding Aristotle's philosophy.
ABSTRACT. The notion of explanation in mathematics has received a lot of attention in philosophy. Some philosophers have suggested that accounts of scientific explanation can be successfully applied to mathematics (e.g. Steiner 1978). Others have disagreed, and questioned the extent to which explanation is relevant to the actual practice of mathematicians. In particular, the extent to which mathematicians use the notion of explanatoriness explicitly in their research is a matter of sharp disagreement.
Resnik and Kushner (1987, p. 151) claimed that mathematicians "rarely describe themselves as explaining". But others disagree, claiming that mathematical explanation is widespread, citing individual mathematicians' views (e.g., Steiner 1978) or discussing detailed cases in which mathematicians explicitly describe themselves, or some piece of mathematics, as explaining mathematical phenomena (e.g. Hafner & Mancosu 2005). However, this kind of evidence is not sufficient to settle the disagreement. Recently, Zelcer (2013) pointed out that a systematic analysis of standard mathematical texts was needed to address this issue, but that no such analysis existed. In this talk we illustrate the use of corpus linguistics methods (McEnery & Wilson 2001) to perform such an analysis.
We describe the creation of large-scale corpora of written research-level mathematics (obtained from the arXiv e-prints repository), and a mechanism to convert LaTeX source files to a form suitable for use with corpus linguistic software packages. We then report on a study in which we used these corpora to assess the ways in which mathematicians describe their work as explanatory in their research papers. In particular, we analysed the use of ‘explain words’ (explain, explanation, and various related words and expressions) in this large corpus of mathematics research papers. In order to contextualise mathematicians’ use of these words/expressions, we conducted the same analysis on (i) a corpus of research-level physics articles (constructed using the same method) and (ii) representative corpora of modern English. We found that mathematicians do use this family of words, but relatively infrequently. In particular, the use of ‘explain words’ is considerably more prevalent in research-level physics and representative English, than in research-level mathematics. In order to further understand these differences, we then analysed the collocates of ‘explain words’ –words which regularly appear near ‘explain words’– in the two academic corpora. We found differences in the types of explanations discussed by physicists and mathematicians: physicists talk about explaining why disproportionately more often than mathematicians, who more often focus on explaining how. We discuss some possible accounts for these differences.
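A minimal sketch of the kind of relative-frequency count involved (illustrative only: the study used dedicated corpus-linguistics software and richer query sets, and the file names and word list below are placeholders):

import re
from collections import Counter

# Hypothetical list of 'explain words'; the study's actual query set is richer.
EXPLAIN_WORDS = {"explain", "explains", "explained", "explaining",
                 "explanation", "explanations", "explanatory"}

def relative_frequency(text, targets=EXPLAIN_WORDS, per=1_000_000):
    """Occurrences of the target words per `per` tokens of `text`."""
    tokens = re.findall(r"[a-z]+", text.lower())  # crude tokenizer
    counts = Counter(tokens)
    hits = sum(counts[w] for w in targets)
    return per * hits / max(len(tokens), 1)

# Example usage, comparing two plain-text corpora:
# for name in ("maths_corpus.txt", "physics_corpus.txt"):
#     with open(name, encoding="utf-8") as f:
#         print(name, round(relative_frequency(f.read()), 1), "per million tokens")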
References
Hafner, J., & Mancosu, P. (2005). The varieties of mathematical explanation. In P. Mancosu et al. (Eds.), Visualization, Explanation and Reasoning Styles in Mathematics (pp. 215–250). Berlin: Springer.
McEnery, T. & Wilson, A. (2001). Corpus linguistics: An introduction (2nd edn). Edinburgh: Edinburgh University Press.
Steiner, M. (1978) Mathematical explanation. Philosophical Studies, 34(2), 135–151.
Resnik, M, & Kushner, D. (1987). Explanation, independence, and realism in mathematics. British Journal for the Philosophy of Science, 38, 141–158.
Zelcer, M. (2013). Against mathematical explanation. Journal for General Philosophy of Science, 44(1), 173-192.
ABSTRACT. In this paper, we examine words relating to mathematical actions and imperatives in mathematical texts, and within proofs. The main hypothesis is that mathematical texts, and proofs especially, contain frequent uses of instructions to the reader, issued using imperatives and other action-focused linguistic constructions. We take common verbs in mathematics, such as “let”, “suppose”, “denote”, “consider”, “assume”, “solve”, “find”, “prove” etc., and compare their relative frequencies within proofs, in mathematical texts generally, and in spoken and written British and American English, using a corpus of mathematical papers taken from the arXiv. Furthermore, we conduct ‘keyword’ analyses to identify those words which disproportionately occur in proofs compared to other parts of mathematics research papers.
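As an illustration of the ‘keyword’ analyses mentioned above, the sketch below computes the standard Dunning-style log-likelihood keyness statistic for a single verb inside proofs versus the rest of the papers; the counts are placeholders, not the study's data:

```python
import math

def log_likelihood(freq_a, total_a, freq_b, total_b):
    """Dunning-style log-likelihood keyness of a word in corpus A relative to corpus B."""
    expected_a = total_a * (freq_a + freq_b) / (total_a + total_b)
    expected_b = total_b * (freq_a + freq_b) / (total_a + total_b)
    ll = 0.0
    if freq_a > 0:
        ll += freq_a * math.log(freq_a / expected_a)
    if freq_b > 0:
        ll += freq_b * math.log(freq_b / expected_b)
    return 2 * ll

# Placeholder counts: occurrences of "let" inside proofs vs. in the rest of the corpus.
print(log_likelihood(freq_a=5000, total_a=1_000_000, freq_b=2000, total_b=3_000_000))
```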
Previous analyses of mathematical language, such as those conducted by de Bruijn (1987) and Ganesalingam (2013), have largely been carried out without empirical investigations of actual mathematical texts. As a result, some of the claims they make are at odds with the reality of written mathematics. For example, both authors claim that there is no room for imperatives in rigorous mathematics. Whether this is meant to be a descriptive or normative claim, we demonstrate that analysing the actual writings of mathematicians, particularly written proofs, shows something quite different. Mathematicians use certain imperatives far more frequently than in natural language, and within proofs we find an even higher prevalence of certain verbs.
The implication of this is that mathematical writing and argumentation may be harder to formalise than these linguistic accounts suggest. Furthermore, it supports the idea that proofs are not merely sequences of declarative sentences, but instead provide instructions for mathematical activities to be carried out.
References
De Bruijn, N.G. (1987) “The mathematical vernacular, a language for mathematics with typed sets”, in Dybjer, P., et al. (eds.) Proceedings of the Workshop on Programming Logic, Report 37, Programming Methodology Group, University of Göteborg and Chalmers University of Technology.
Ganesalingam, M. (2013) The Language of Mathematics, Lecture Notes in Computer Science Vol 7805, Springer, Berlin.
ABSTRACT. Term-modal logics are first-order modal logics where the operators double as first-order predicates: in e.g. an epistemic knowledge operator K_a, the ‘a’ here is not merely an index, but a first-order term. Term-modal syntax considers e.g. “exists.x K_x phi(x)” a well-formed formula. Semantically, the knowing agents are elements in a proper domain of quantification, rather than mere indices as is the case in ordinary epistemic logic. Term-modal logics thus treat agents as first-class citizens, both syntactically and semantically. This has the effect that variants of Descartes' Cogito may be expressed, e.g. by the valid “(not K_a phi) implies (K_a exists.x K_x phi)”. Likewise, term-modal logics may in a natural way be used to express agents' (higher-order) knowledge about their relation R to others in a social network, e.g. by “K_a (R(a,b) and not K_b R(a,b))”.
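For readability, the ASCII formulas above can be typeset as follows (my transcription of the abstract's examples into the usual symbols):

```latex
\exists x\, K_x \varphi(x), \qquad
(\lnot K_a \varphi) \rightarrow K_a \exists x\, K_x \varphi, \qquad
K_a\bigl(R(a,b) \land \lnot K_b\, R(a,b)\bigr)
```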
The above are inherently term-modal expressions. Though e.g. the latter may be emulated using ordinary propositional epistemic logic with operators K_a, K_b and atomic proposition r_{ab}, such solutions are ad hoc as they do not logically link the occurrences of the indices of the operators and the indices of the atom. Inherently term-modal expressions have been considered as early as von Wright's 1951 “An Essay in Modal Logic”, with Hintikka's syntax in (1962, “Knowledge and Belief”) in fact being term-modal. Despite their natural syntax, term-modal logics have received relatively little attention in the literature and no standard implementation exists. Often, combinations of well-known difficulties from first-order modal logic mixed with oddities introduced by the term-modal operators result in tri-valent or non-normal systems, systems with non-general completeness proofs, or systems satisfying oddities like “P(x) and forall.y not P(y)”.
In this paper, we present a simple, well-behaved instance of a term-modal system. The system is bivalent and normal, with close-to-standard axioms (compared to similar first-order modal logics), and it allows for a canonical-model approach to proving completeness for a variety of different frame classes. The “static” system may be extended dynamically with a suitable adaptation of so-called “action models” from dynamic epistemic logic; for the extension, reduction axioms allow us to show completeness. Throughout, the system is illustrated by showing its application to social network dynamics, a recent topic of several epistemic and/or dynamic logics.
ABSTRACT. Maximizing expected utility is a central concept within classic game theory. It combines a variety of core aspects of game theory including the agent's beliefs, preferences and their space of available strategies, pure or mixed. These frameworks presuppose quantitative notions in various ways, both on the input and output side.
On the input side, utility maximizing requires a quantitative, probabilistic representation of the agent's beliefs. Moreover, agents' preferences over the various pure outcomes need to be given quantitatively. Finally, the scope of maximizing is sometimes taken to include mixed strategies, requiring a quantitative account of mixed actions.
Also on the output side, expected utilities are interpreted quantitatively, providing interval-scaled preferences or evaluations of the available actions, again pure or mixed. In this contribution, we want to pursue qualitative, logical counterparts of maximal utility reasoning. These will build on two core components, logical approaches to games and to probabilities.
On the game side, the approach builds on a standard modal logic for n-player matrix games with modalities []_i and ≥_i for all players, denoting their uncertainty over opponents' choices and their preferences over outcomes respectively. This language is expanded with a mild form of conditional belief operators. Given that players are still to decide on their actions, agents cannot reasonably have beliefs about the likelihood of outcome strategy profiles. Rather, agents have conditional beliefs p_i(φ|a), denoting i's beliefs about which outcome states (or formulas) obtain(ed) if she is or was to perform action a.
To see how such languages can express maximal utility reasoning, assume agent i's goals in some game to be completely determined by a formula φ: she receives high utility if the game's outcome satisfies φ and low utility otherwise. In this case, the expected utility of her various moves is directly related to their propensity for making φ true. More concretely, the expected utility of performing some action a is at least as high as that of b iff p(φ|a) ≥ p(φ|b).
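The step from expected utilities to this comparison of conditional beliefs can be made explicit. Assuming, as above, a high utility u_1 when φ holds and a low utility u_0 otherwise (a sketch in my notation, not the authors' formalism):

```latex
\mathrm{EU}(a) \;=\; u_1\, p(\varphi \mid a) + u_0\bigl(1 - p(\varphi \mid a)\bigr)
            \;=\; u_0 + (u_1 - u_0)\, p(\varphi \mid a),
\qquad\text{so for } u_1 > u_0:\quad
\mathrm{EU}(a) \ge \mathrm{EU}(b) \iff p(\varphi \mid a) \ge p(\varphi \mid b).
```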
Notably, the current approach is not restricted to cases where utility is determined by a single formula. We will show that the formalism is expressive enough to represent all utility assignments on finite games with values in the rationals.
Moreover, we will show that the framework can be used to represent pure and mixed strategy Nash equilibria, again over finite games: For all combinations of rational valued utility assignments there is a formula expressing that an underlying game form equipped with the relevant probability assignments is in equilibrium with respect to these utilities.
Lastly, we show the logical framework developed to be well-behaved in that it allows for a finite axiomatization and is logically compact.
Thomas Piecha (University of Tübingen, Department of Computer Science, Germany)
Karl R. Popper: Logical Writings
ABSTRACT. Karl Popper developed a theory of deductive logic in a series of papers in the late 1940s. In his approach, logic is a metalinguistic theory of deducibility relations that are based on certain purely structural rules. Logical constants are then characterized in terms of deducibility relations. Characterizations of this kind are also called inferential definitions by Popper. His works on logic anticipate several later developments and discussions in philosophical logic, and are thus interesting from both historical and systematic points of view [1]:
- Anticipating the discussion of connectives like Prior's "tonk", Popper considered a tonk-like connective called the "opponent" of a statement, which leads, if it is present in a logical system, to the triviality of that system.
- He suggested developing a system of dual-intuitionistic logic, which was then first formulated and investigated by Kalman Joseph Cohen in 1953.
- He already discussed (non-)conservative language extensions. He recognized, for example, that the addition of classical negation to a system containing implication can change the set of deducible statements containing only implications, and he gave a definition of implication with the help of Peirce's rule that, together with intuitionistic negation, yields classical logic (see the schema displayed after this list).
- He also considered the addition of classical negation to a language containing intuitionistic as well as dual-intuitionistic negation, whereby all three negations become synonymous. This is an example of a non-conservative extension where classical laws also hold for the weaker negations.
- Popper was probably the first to present a system that contains an intuitionistic negation as well as a dual-intuitionistic negation. By proving that in the system so obtained these two kinds of negation do not become synonymous, he gave the first formal account of a bi-intuitionistic logic.
- He provided an analysis of logicality, in which certain negations that are weaker than classical or intuitionistic negation turn out not to be logical constants.
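The Peirce's rule referred to in the third item above is the schema usually called Peirce's law; it is a standard fact, independent of Popper's particular formulation, that adding it as an axiom to intuitionistic implicational logic yields classical implicational logic, and that together with intuitionistic negation it yields full classical logic:

```latex
\bigl((A \rightarrow B) \rightarrow A\bigr) \rightarrow A
```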
A critical edition of Popper's logical writings is currently being prepared by David Binder, Peter Schroeder-Heister and myself [2]. It comprises Popper's published works on the subject as well as unpublished material and Popper's logic-related correspondence, together with our introductions and comments on his writings.
In this talk I will introduce this edition in order to provide an overview of Popper's logical writings, and I will highlight the central aspects of Popper's approach to logic.
References
[1] David Binder and Thomas Piecha: Popper’s Notion of Duality and His Theory of Negations, History and Philosophy of Logic 38(2), 154–189, 2017.
[2] David Binder, Thomas Piecha and Peter Schroeder-Heister: Karl R. Popper: Logical Writings, to appear in 2019.
ABSTRACT. Since Karl Popper’s work in logic and its philosophy from the 1940s was, unfortunately, mostly neglected, the publication of a critical edition of Popper’s logical writings is a remarkable and fruitful event for the academic community. Thomas Piecha discusses in his talk some central issues of Popper’s work that have later been developed by contemporary logicians, most of which are concerned with logical negation. In my intervention, I discuss Popper’s logico-philosophical motivation for structurally defined deducibility relations (an approach later developed by [Dana Scott 1971, 1974] and [Arnold Koslow 1992]) and for logical negation. Popper’s attempt to weaken classical logic in mathematical proofs by weakening the rules for logical negation is important and deserves closer attention. For instance, [Saul Kripke 2015] has recently argued that an affirmativist logic, i.e., a logic without negation and falsity, which is semantically distinct from [Hilbert and Bernays 1934]’s positive logic because it also eliminates falsity, is all that science needs. In this respect, the analysis of the (non-)conservative extensions of a minimal system of logic is necessary and philosophically useful. Popper’s interest in logical negation was also generated by [Carnap 1943]’s discovery of non-normal models for classical logic that arise mainly due to negation (an important issue today for classical logical inferentialists).
References
Carnap, Rudolf. 1943. Formalization of Logic, Cambridge, Mass., Harvard University Press.
Hilbert, David, & Bernays, Paul. 1934/1968. Grundlagen der Mathematik, I. Berlin, Heidelberg, New York: Springer.
Kripke, Saul. 2015. ‘Yet another dogma of logical empiricism’. Philosophy and Phenomenological Research, 91, 381-385.
Koslow, Arnold. 1992. A Structuralist Conception of Logic, Cambridge University Press.
Scott, Dana. 1971. ‘On engendering an illusion of understanding’, The Journal of Philosophy 68(21):787–807.
Scott, Dana. 1974. ‘Completeness and axiomatizability in many-valued logic’, in L. Henkin et al. (eds.), Proceedings of the Tarski Symposium, vol. 25 of Proceedings of Symposia in Pure Mathematics, American Mathematical Society, pp. 411–436.
Science as Critical Discussion and the Problem of Immunizations
ABSTRACT. The ideal of a critical discussion is something that should be shared by scientists, because at the core of a critical discussion is an inter-subjective evaluation of given propositions, facts, and evidence. I will argue that (A) the pursuit of this ideal can also be taken as a possible demarcation criterion for science, at least concerning its demarcation from pseudo-sciences.
Pseudo-sciences are characterized as something that wants to be, or looks like, science but is not. Uses of unfounded immunizations are one of the possible signs of pseudo-science (Derksen 1993). In general, immunizations (immunizing strategies or stratagems) prevent a theory from being falsified or reasonably denied. This concept was initially introduced by Popper (1959/2005) as a conventionalist trick. Popper identified four types: an introduction of ad hoc hypotheses, a modification of ostensive (or explicit) definitions, a skeptical attitude as to the reliability of the experimenter, and casting doubt on the acumen of the theoretician. Later, Boudry and Braeckman (2011) provided an overview of immunizing strategies, identifying five different types: conceptual equivocations and moving targets, postdiction and feedback loops, conspiracy thinking, changing the rules of play, and invisible escape clauses. They also provided several examples of each type. But more importantly, they presented a definition of immunizing strategies: “[a]n immunizing strategy is an argument brought forward in support of a belief system, though independent from that belief system, which makes it more or less invulnerable to rational argumentation and/or empirical evidence.”
Although I do consider immunizations an indication of pseudo-science, I will argue that (B) immunizations are not arguments, as Boudry and Braeckman proposed, but rather that (C) immunizations are violations of the rules of a critical discussion. To support the first part of this claim (B), I will present an analysis of selected examples provided by Boudry and Braeckman using Toulmin’s model of argument (Toulmin 1958/2003). Regarding the second part (C), I will show that analysing these examples as violations of the rules of a critical discussion in pragma-dialectical theory (van Eemeren & Grootendorst 2004) is more suitable.
In conclusion, immunizations prevent a critical discussion, and therefore a reasonable process in which inter-subjective evaluation of claims plays a significant role. Evidence, facts, theories and the like are accepted in science by the scientific community, not by individuals. Thus, inter-subjectivity is characteristic of science, and its lack is typical of pseudo-sciences. Therefore, (A) science can be characterized as an attempt at a critical discussion where the goal is to solve a difference of opinion by reasonable means.
References:
Boudry, M., & Braeckman, J. (2011). Immunizing Strategies and Epistemic Defense Mechanisms. Philosophia, 39(1), 145–161.
Derksen, A. A. (1993). The Seven Sins of Pseudo-Science. Journal for General Philosophy of Science, 24(1), 17–42.
Popper, K. (1959/2005). The Logic of Scientific Discovery. Routledge.
Toulmin, S. E. (1958/2003). The Uses of Argument. Cambridge University Press.
van Eemeren, F. H., & Grootendorst, R. (2004). A Systematic Theory of Argumentation: The pragma-dialectical approach. Cambridge University Press.
Michael Sidiropoulos (Member of Canadian Society for the History and Philosophy of Science, Canada)
Philosophical and Demarcation Aspects of Global Warming Theory
ABSTRACT. In their effort to explain a phenomenon, scientists use a variety of methods collectively known as “the scientific method”. They include multiple observations and the formulation of a hypothesis, as well as the testing of the hypothesis through inductive and deductive reasoning and experimental testing. Rigorous skepticism and refinement or elimination of the hypothesis are also part of the process. This work presents an updated concept of the scientific method with the inclusion of two additional steps: demarcation criteria and scientific community consensus. Demarcation criteria such as hypothesis testing and falsifiability lead to a proposed “Popper Test”. The method is applied to fundamental aspects of Global Warming theory (GW).
David Hume’s “problem of induction” concerns the justification of inductive inferences from the observed to the unobserved. It is shown that this issue is crucial to GW, which claims that temperature observations of the last 100 years reveal a new pattern of systematic warming caused by human activity. We use the term “Global Warming” rather than “Climate Change” as it is more amenable to falsification and is therefore a stronger scientific theory.
Natural phenomena can have multiple causes, effects and mitigating factors. Many interrelationships among these are not well understood and are often proxied by statistical correlations. Certain statistical correlations exist because of a causal relationship and others exist in the absence of causal relationships. Statistical correlations can lead to the formulation of a theory but do not constitute proof of causality. This must be provided by theoretical and experimental science. Trial and error leads to model enhancement as, for example, climate models have recently been modified to include the effect of forests, an important missing variable in prior models.
Tests comprising the proposed method are applied on fundamental assumptions and findings of both parts of GW theory: (1) Rising global temperatures, (2) Anthropogenesis of rising global temperatures. Several premises of the theory are found falsifiable within the means of current technology and are therefore scientific theories. Certain other premises are found to be unfalsifiable and cannot be included in a scientific theory. The latter must be eliminated or be substituted by alternative falsifiable proposals.
References
1. Popper, Karl, 1959, The Logic of Scientific Discovery, Routledge.
2. Hume, David, 1739, A Treatise of Human Nature, Oxford: Oxford University Press.
On the definitions of social science and why they matter
ABSTRACT. What sort of category is ‘social science’? Is it theoretical, that is, reflecting a genuine specialness of social sciences’ subject matter or method? Or merely institutional, that is, denoting the activities and the body of knowledge of those recognised as practicing economics, sociology, anthropology, etc.? The field of philosophy of social science has traditionally assumed the former and sought to articulate ways in which social sciences are unlike the natural ones. I trace the history and the motivations behind this exceptionalism and evaluate its viability in this age of interdisciplinarity and data-driven methods.
Numbers as properties: dissolving Benacerraf’s tension
ABSTRACT. Generations of mathematicians and philosophers have been intrigued by the question, What are arithmetic propositions about? I defend a Platonist answer: they're about numbers, and numbers are plural properties.
I start with the seminal "Mathematical Truth" (1973), where Benacerraf showed that if numbers exist, there is a tension between their metaphysical and epistemological statuses. Even as Benacerraf's particular assumptions have been challenged, this tension has reappeared. I bring it out with two Benacerrafian requirements:
Epistemic requirement. Any account of mathematics must explain how we can have mathematical knowledge.
Semantic requirement. Any semantics for mathematical language must be homogeneous with a plausible semantics for natural language.
Each of the prominent views of mathematical truth fails one of these requirements.
If numbers are abstract objects, as the standard Platonist says, how is mathematical knowledge possible? Not by one common source of knowledge: causal contact. Field (1989) argues that the epistemological problem extends further: if numbers are abstract objects, we cannot verify the reliability of our mathematical belief-forming processes, even in principle.
If mathematical truth amounts to provability in a system, as the combinatorialist says, the semantics for mathematical language is unlike those semantics normally given for natural language ('snow is white' is true iff snow is white, vs. '2 + 2 = 4' is true iff certain syntactic facts hold).
I argue that numbers are properties.
Epistemic requirement. We're in causal contact with properties, so we're in causal contact with numbers. More generally, because a good epistemology must account for knowledge of properties, any such theory should account for mathematical knowledge.
Semantic requirement. Just as 'dog' refers to the property doghood, '2' refers to the property being two. Just as 'dogs are mammals' is true iff a certain relation holds between doghood and mammalhood, '2 + 2 = 4' is true iff a certain relation holds between being two and being four.
Specifically, I say that numbers are what I call pure plural properties. A plural property is instantiated by a plurality of things. Take the fact that Thelma and Louise cooperate. The property cooperate doesn't have two argument places: one for Thelma, and one for Louise. Rather, it has a single argument place: here it takes the plurality, Thelma and Louise. Consider another property instantiated by this plurality: being two women. This plural property is impure because it does not concern only numbers, but we can construct it out of two other properties, womanhood and being two. This latter plural property is pure. It is the number two.
Famously, number terms are used in two ways: referentially ('two is the smallest prime') and attributively ('I have two apples'). If numbers are objects, the attributive use is confounding (Hofweber 2005). If they're properties, there is no problem: other property terms share this dual use ('red is my favorite color' vs. 'the apple is red').
The standard Platonist posits objects that are notoriously mysterious. While the nature of properties may be contentious, my numbers-as-properties view is not committed to anything so strange.
The development of epistemic objects in mathematical practice: Shaping the infinite realm driven by analogies from finite mathematics in the area of Combinatorics.
ABSTRACT. We offer a case study of mathematical theory building via analogous reasoning. We analyse the conceptualisation of basic notions of (topological) infinite graph theory, mostly exemplified by the notion of infinite cycles. We show to what extent different definitions of “infinite cycle” were evaluated against results from finite graph theory. There were (at least) three competing formalisations of “infinite cycles”, focusing on different aspects of finite ones. For instance, we might observe that in a finite cycle every vertex has degree two. If we take this as the essential feature of cycles, we can get to a theory of infinite cycles. A key reason for the rejection of this approach is that some results from finite graph theory do not extend when we syntactically change “finite graph”, “finite cycle”, etc. to “infinite graph”, “infinite cycle”, etc.
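As a minimal illustration of the degree-two characterisation mentioned above (the adjacency-dictionary representation and example graphs are my own, not part of the abstract), a finite graph is a cycle exactly when it is connected and every vertex has degree two:

```python
from collections import deque

def is_finite_cycle(adj):
    """A finite graph is a cycle iff it is connected and every vertex has degree 2."""
    if not adj:
        return False
    if any(len(neighbours) != 2 for neighbours in adj.values()):
        return False
    # Breadth-first search to check connectedness.
    start = next(iter(adj))
    seen = {start}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(adj)

triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}   # a 3-cycle
path = {0: [1], 1: [0, 2], 2: [1]}             # not a cycle: the endpoints have degree 1
print(is_finite_cycle(triangle), is_finite_cycle(path))  # True False
```

Carried over verbatim to infinite graphs, the same two conditions also admit, for example, a two-way infinite path, so whether such objects should count as “infinite cycles” is exactly the kind of conceptual choice the abstract describes.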
The activity of axiomatising a field is not a purely mathematical one; it cannot be settled by proof but only by philosophical reflection. This might sound trivial but is often neglected due to an oversimplified aprioristic picture of mathematical research. We must craft our formal counterparts in mathematics guided by our intuitions of the abstract concepts/objects. While we normally think of a mathematical argument as the prototype of deductive reasoning, there are inductive elements in at least three senses:
1. In the heuristics of developing a theory.
2. In the process of axiomatisation, while
2a. we test the adequacy of an axiomatisation.
2b. we are looking for new axioms to extend a current axiomatic system.
We want to focus on 2a and especially on the role of analogies. Nash-Williams (1992, p. 1) observed that “the majority of combinatorialists seem to have concentrated on finite combinatorics, to the extent that it has almost seemed an eccentricity to think that graphs and other combinatorial structures can be either finite or infinite”. This observation is still true, but more recently a growing group of combinatorialists has been working on infinite structures. We want to analyse the heuristics of theory development in this growing area.
There are vocabularies from finite graph theory for which it is not clear which infinite counterpart might be the best concept to work with. And for theorems making use of them it is also unclear whether they can or should have an extension to infinite graphs. This area is very suitable for philosophical discourse, since the concepts used are quite intuitive and involve only a little background from topology and graph theory.
We argue that mathematical concepts are less fixed and eternal than they might seem, and we shift the focus from the sole discussion of theorems, which is overrepresented in the reflections of philosophers of mathematics, towards the interplay of definitions and theorems. While theorems are a very important (and probably even constitutive) part of the practice of mathematicians, we should not forget that mathematical concepts, in the sense of concepts used in mathematical discourses, develop over time. We can only prove / state / comprehend with fitting vocabulary, which we develop in a process of iterative refinement.
Prerequisite for Employing Intelligent Machines as Human Surrogate
ABSTRACT. This paper discusses qualifying conditions for employing and utilizing intelligent machines as human surrogates. It thereby invites us to philosophical reflection on the rationality of our own act of designing, manufacturing and utilizing such artifacts. So long as the conditions discussed here are not realized, recruitment and use of such systems should be regarded as unreasonable, and thus unacceptable.
Machines equipped with higher levels of AI will take over the human role in an ever-increasing range of domains. Deciding the extent and appropriate mode of such surrogacy is an important societal task. For this purpose we should first analyze the question of what the prerequisite conditions for acceptable robotic surrogacy are.
This paper discusses the primary condition intelligent machines must meet in order to be justified in acting as surrogates for human agents. Seen from the viewpoint of the analysis of this paper, it will be hard for a machine to satisfy the requirement. This suggests that our societies should proceed more carefully with such intelligent artifacts than they do now.
This paper discusses the case of autonomous vehicles, which are being incorporated into our transportation system, as its primary example. An essential condition that a driver-AI should fulfill in order to legitimately take the driver role in our traffic system is that it assume and apply a perceptual taxonomy that is sufficiently close to that of competent human drivers, so that it eventually tends to classify all the objects encountered while driving in the same way human drivers do.
How can we make intelligent artifacts share human systematics? We may consider two paths. One is the method of inputting into the program the classification system we adopt and apply (top-down inscription), and the other is to let them interact with human beings in the real world and thereby learn it bit by bit (bottom-up learning). But neither way seems to be effective for realizing the goal. I will show why.
I have suggested an ontological stance that interprets AI as a form of “externalized mind” (Ko 2012). Externalized mind is a boundary type of extended mind. The former differs from the latter in that 1) it does not belong to a specific cognitive agent, even though it might owe its origin to a certain agent, and 2) it does not necessarily have a center that the extended resources of cognition serve and supplement.
The AI that operates a networked system of unmanned vehicles, for instance, can be interpreted as the externalized collective mind of the decision-makers about the system, the engineers who work out its detailed design, the engineers who manufacture and implement it, and the administrators of the system. This way of ontological interpretation stands against the philosophical stance that grants the status of independent agents to smart robots (e.g., in terms of “electronic personhood”). I argue that it is a philosophical scheme adequate for addressing the issue of responsibility and contribution allocation associated with the application of intelligent machines. From this perspective, I will stand with J. Bryson in the debate on the socio-ethical status of robots (Gunkel 2018 & Bryson 2018).
References
Bryson, J., (2018), “Patiency is not a virtue: the design of intelligent systems and systems of ethics”, Ethics and Information Technology 20.
Gunkel, D. (2018), Robot Rights, MIT Press.
Ko, I. (2012), “Can robots be accountable agents?”, Journal of the New Korean Philosophical Association 67/1.
Bennett Holman (Underwood International College, Yonsei University, South Korea)
Dr. Watson: The Impending Automation of Diagnosis and Treatment
ABSTRACT. Last year may be remembered as the pivotal point for artificial “deep learning” and medicine. A large number of different labs have used artificial intelligence (AI) to augment some portion of medical practice, most notably in diagnosis and prognosis. I will first review the recent accomplishments of deep-learning AI in the medical field, including: the landmark work of Esteva et al. (2017), which showed that AI could learn to diagnose skin cancer better than a dermatologist; extensions of similar projects into detecting breast cancer (Liu et al., 2017); Oakden-Rayner et al.’s (2017) work showing AI could create its own ontological categories for patient risk; and work showing that, by analyzing tumor DNA, AI could identify more possible sites for intervention (Wrzeszczynski et al., 2017).
I will next argue that a foreseeable progression of this technology is to begin automating treatment decisions. Whether this development is positive or negative depends on the specific details of who develops this technology and how it is used. I will not attempt to predict the future, but I will follow some emerging trends to their logical conclusions and identify some possible pitfalls of the gradual elimination of human judgment from medical practice.
In particular, some problems could become significantly worse. It is the essence of deep-learning AI that the reasons for its outputs are opaque. Many researchers have shown that industry has been adept at causing confusion by advancing alternative narratives (e.g. Oreskes and Conway, 2010), but at the very least with traditional research there were assumptions that could, in principle, be assessed. With deep-learning AI there are no such luxuries. On the other hand, I will argue that properly implemented deep learning solves a number of pernicious problems with both the technical and the social hindrances to reliable medical judgments (e.g. the end to a necessary reliance on industry data). Given the multiple possible routes that such technology could take, I argue that consideration of how medical AI should develop is an issue that will not wait and thus demands the immediate critical attention of philosophy of science in practice.
Esteva, Andre, Brett Kuprel, Roberto A. Novoa, Justin Ko, Susan M. Swetter, Helen M. Blau, and Sebastian Thrun. "Dermatologist-level classification of skin cancer with deep neural networks." Nature 542, no. 7639 (2017): 115-118.
Liu, Y., Gadepalli, K., Norouzi, M., Dahl, G. E., Kohlberger, T., Boyko, A., ... & Hipp, J. D. (2017). Detecting cancer metastases on gigapixel pathology images. arXiv preprint arXiv:1703.02442.
Oakden-Rayner, L., Carneiro, G., Bessen, T., Nascimento, J. C., Bradley, A. P., & Palmer, L. J. (2017). Precision Radiology: Predicting longevity using feature engineering and deep learning methods in a radiomics framework. Scientific Reports, 7(1), 1648
Oreskes, N. and E. Conway (2010), Merchants of Doubt. New York: Bloomsbury Press
Wrzeszczynski, K. O., Frank, M. O., Koyama, T., Rhrissorrakrai, K., Robine, N., Utro, F., ... & Vacic, V. (2017). Comparing sequencing assays and human-machine analyses in actionable genomics for glioblastoma. Neurology Genetics, 3(4), e164.
Nathalie Gontier (Applied Evolutionary Epistemology Lab, Center for Philosophy of Science, University of Lisbon, Portugal)
Time, causality and the transition from tree to network diagrams in the life sciences
ABSTRACT. How we conceptualize and depict living entities has changed throughout the ages in relation to changing worldviews. Wheels of time originally depicted cycles of life or the cyclic return of the seasons, and were associated with circular notions of time. Aristotle introduced the concept of a great chain of being that became foundational for Judeo-Christian theorizing on the scala naturae, which was originally associated with a-historic depictions of nature and later with linear timescales. Scales of nature in turn formed the basis for phylogenetic trees as they were introduced by the natural history scholars of the 19th century. And these trees are set in two-dimensional Cartesian coordinate systems where living entities are tracked across space and time. Today, the various disciplines that make up the evolutionary sciences often abandon tree typologies in favor of network diagrams. Many of these networks remain historically unrooted and they depict non-linear causal dynamics such as horizontal information exchange. We investigate these transitions and home in on how networks introduce new concepts of causality. We end by delineating how a reconstruction of the genealogy of these diagrams bears larger consequences for how scientific revolutions come about.
Evolving theories and scientific controversies: a carrier-trait approach
ABSTRACT. It is not without problems to account for the evolution of scientific theories. Reconstruction of theoretical content is typically carried out using static and atemporal conceptual and modelling spaces, but many of the historically important scientific theories are far from being easily delineable entities, and a scientist’s theoretical position can respond to new data, literature, and the criticisms received. Especially challenging are scientific controversies, where the debated issues are complex, the exchange involves several participants, and extends over long periods. Famous examples include the Methodenstreit, the Hering-Helmholtz controversy or the debates over Newton’s or Darwin’s views. In these cases controversies lasted for several generations, and polarisation is a recurring trait of the exchanges. The reconstructions and evaluations of the exchanges also exhibit heterogeneity and polarisation. Cultures of reading, representing, interpreting, and evaluating the theory suggest that some scientific theories are manifolds.
What are suitable frameworks for studying theories, theory-acceptance and the often co-occurring process of opinion-polarization? The talk offers a permissivist carrier-trait framework to study theories, and an artefact-human-artefact knowledge-mobilization process. The theory picked for analysis is Newton’s optical theory, a highly successful scientific theory, but one that cannot be easily reduced to equations or formulas, and one that gave rise to opinion polarization.
Instead of assuming some type of content (a propositional structure, a conceptual space, or a mathematical object) to reconstruct the theory, and thus provide a paraphrase to stand for the theory, I look at traits that are delineable when studying the carriers of a theory. In a deliberately broad definition, carriers are scientific representations, parts thereof, or composites of them, targets of an interpretation-process. A carrier is an external (non-mental) representation, akin to some speech act, yet it can be a whole book, or just a part of a diagram or sentence. A trait is a distinctive or distinguishable feature, corresponding to some act of making distinctions between carriers.
The reconstruction focuses on innovative aspects (novel traits) of theories that become conventionalized: items introduced to the lexicon (neologisms), some of the mathematical idealizations, and novel diagrammatic traits of the theory. The perspective helps to map strands of uptake (including polarisation of opinions), and trait-analysis can show that multiple readability (ambiguity) of carriers facilitated heterogeneous uptake and the spread of competing views.
ABSTRACT. The frame model was developed in cognitive psychology (Barsalou 1992) and imported into the philosophy of science in order to provide representations of scientific concepts and conceptual change (Andersen and Nersessian 2000; Andersen et al. 2006; Chen and Barker 2000; Chen 2003; Barker et al. 2003; Votsis and Schurz 2012; Votsis and Schurz 2014). The aim of my talk is to show that, besides the representation of scientific concepts, the frame model is an efficient instrument to represent and analyze scientific theories. That is, I aim to establish the frame model as a representation tool for the structure of theories within the philosophy of science.
In order to do so, in the first section of my talk, I will briefly introduce the frame model and develop the notion of theory frames as an extension of it. Further, I will distinguish between theory frames for qualitative theories, in which scientific measurement is based on nominal scales, and theory frames for quantitative theories, in which measurement is based on ratio scales. In two case studies, I will apply the notion of theory frames to a linguistic and a physical theory.
Section 2 contains a diachronic analysis of a qualitative theory by applying the notion of a theory frame to the pro drop theory of generative linguistics. In section 3, I will provide a frame-based representation of electrostatics, the laws of which contain quantitative theoretical concepts.
Based on the two case studies, I will argue that the frame model is a powerful instrument to analyze the laws of scientific theories, the determination of theoretical concepts, the explanatory role of theoretical concepts, the abductive introduction of a new theoretical concept, the diachronic development of a theory, and the distinction between qualitative and quantitative scientific concepts. I will show that due to its graphical character the frame model provides a clear and intuitive representation of the structure of a theory as opposed to other models of theory representation like, for instance, the structuralist view of theories.
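A minimal sketch of the attribute-value structure underlying the frame model, with a toy electrostatics-flavoured example; the particular class design, attributes and values are illustrative assumptions, not the representations used in the case studies:

```python
from dataclasses import dataclass, field
from typing import Dict, Union

Value = Union[str, float, "Frame"]

@dataclass
class Frame:
    """A concept represented as a recursive attribute-value structure."""
    concept: str
    attributes: Dict[str, Value] = field(default_factory=dict)

# Toy frame for the theoretical concept 'point charge'.
point_charge = Frame(
    concept="point charge",
    attributes={
        "charge": Frame("quantity", {"unit": "coulomb", "scale": "ratio"}),
        "position": Frame("quantity", {"unit": "metre", "scale": "ratio"}),
    },
)

def attribute_paths(frame, prefix=""):
    """List attribute paths such as 'charge.unit = coulomb' to inspect a frame's structure."""
    for name, value in frame.attributes.items():
        path = f"{prefix}{name}"
        if isinstance(value, Frame):
            yield from attribute_paths(value, path + ".")
        else:
            yield f"{path} = {value}"

print(list(attribute_paths(point_charge)))
```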
Literature
Andersen, Hanne, and Nancy J. Nersessian. 2000. “Nomic Concepts, Frames, and Conceptual Change.” Philosophy of Science 67 (Proceedings): S224-S241.
Andersen, Hanne, Peter Barker, and Xiang Chen. 2006. The Cognitive Structure of Scientific Revolutions. Cambridge: University Press.
Barker, Peter, Xiang Chen, and Hanne Andersen. 2003. “Kuhn on Concepts and Categorization.” In Thomas Kuhn, ed. Thomas Nickles, 212-245. Cambridge: University Press.
Barsalou, Lawrence W. 1992. “Frames, concepts, and conceptual fields.” In Frames, fields, and contrasts, ed. Adrienne Lehrer, and Eva F. Kittay, 21–74. Hillsdale: Lawrence Erlbaum Associates.
Chen, Xiang. 2003. “Object and Event Concepts. A Cognitive Mechanism of Incommensurability.” Philosophy of Science 70: 962-974.
Chen, Xiang, and Peter Barker. 2000. “Continuity through Revolutions: A Frame-Based Account of Conceptual Change During Scientific Revolutions.” Philosophy of Science 67:208-223.
Votsis, I., and Schurz, G. 2012. “A Frame-Theoretic Analysis of Two Rival Conceptions of Heat.” Studies in History and Philosophy of Science, 43(1): 105-114.
Votsis, I., and Schurz, G. 2014. “Reconstructing Scientific Theory Change by Means of Frames.” In Concept Types and Frames. Application in Language, Cognition, and Science, ed. T. Gamerschlag, D. Gerland, R. Osswald, W. Petersen, 93-110. New York: Springer.
ABSTRACT. Abstraction is ubiquitous in scientific model construction. It is generally understood to be synonymous with the omission of features of target systems, which means that something is left out from a description and something else is retained. Such an operation could be interpreted so as to involve the act of subtracting something and keeping what is left, but it could also be interpreted so as to involve the act of extracting something and discarding the remainder. The first interpretation entails that modelers act as if they possess a list containing all the features of a particular physical system and begin to subtract in the sense of scratching off items from the list. Let us call this the omission-as-subtraction view. According to the second interpretation, a particular set of features of a physical system is chosen and conceptually removed from the totality of features the actual physical system may have. Let us call the latter the omission-as-extraction view.
If abstraction consists in the cognitive act of omission-as-subtraction this would entail that scientists know what has been subtracted from the model description and thus would know what should be added back into the model in order to turn it into a more realistic description of its target. This idea, most of the time, conflicts with actual scientific modeling, where a significant amount of labor and inventiveness is put into discovering what should be added back into a model. In other words, the practice of science provides evidence that scientists, more often than not, operate without any such knowledge. One, thus, is justified in questioning whether scientists actually know what they are subtracting in the first case. Since it is hard to visualize how modelers can abstract, in the sense of omission-as-subtraction, without knowing what they are subtracting, one is justified in questioning whether a process of omission-as-subtraction is at work.
In this paper we focus in particular on theory-driven models and phenomenological models in order to show that, for different modeling practices, what is involved in the model-building process is the act of extracting certain features of physical systems, conceptually isolating them and focusing on them. This is the sense of omission-as-extraction, which we argue is more suitable for understanding how scientific model-building takes place before the scientist moves on to the question of how to make the required adjustments to the model in order to meet the representational goals of the task at hand. Furthermore, we show that abstraction-as-extraction can be understood as a form of selective attention and as such can be distinguished from idealization.
Laszlo E. Szabo (Institute of Philosophy, Eotvos Lorand University Budapest, Hungary)
Intrinsic, extrinsic, and the constitutive a priori
ABSTRACT. On the basis of what I call physico-formalist philosophy of mathematics, I will develop an amended account of the Kantian–Reichenbachian conception of constitutive a priori. It will be shown that the features (attributes, qualities, properties) attributed to a real object are not possessed by the object as a “thing-in-itself”; they require a physical theory by means of which these features are constituted. It will be seen that the existence of such a physical theory implies that a physical object can possess a property only if other contingently existing physical objects exist; therefore, the intrinsic–extrinsic distinction is flawed.
The Spanish Society of Logic, Methodology and Philosophy of Science (SLMFCE in its Spanish acronym) is a scientific association formed by specialists working in these and other closely related fields. Its aims and scope also cover those of analytic philosophy in a broad sense and of argumentation theory. It is worth mentioning that among its priorities is the support and promotion of young researchers. To this end, the Society has developed a policy of grants and awards for its younger members.
The objectives of the SLMFCE are to encourage, promote and disseminate study and research in the fields mentioned above, as well as to foster contacts and interrelations among specialists and with other similar societies and institutions. The symposium is intended to present the work carried out by prominent researchers and research groups linked to the Society. It will include four contributions in different subfields of specialization, allowing the audience at CLMPST 2019 to form an idea of the plural research interests and relevant outcomes of our members.
On revision-theoretic semantics for special classes of circular definitions
ABSTRACT. Circular definitions are definitions that contain the definiendum in the definiens. The revision theory of circular definitions, created by Anil Gupta, shows that it is possible to give content to circular definitions and to use them to solve the semantic paradoxes. Let us consider definitions of the form Gx := A(x,G), where A is a first-order formula in which G itself can occur. Given a model M for the language, possible extensions for the predicate G are given by subsets of the domain of the model (which are called hypotheses). Given a hypothesis h, the revision of h (denoted D(h)) is the set of all the elements of the domain which satisfy the definiens in the model M+h (i.e., the model M with the hypothesis that the extension of G is h). Revision can be iterated, generating the sequence of revision: h, D(h), D(D(h))… which is represented as D^0(h), D^1(h), D^2(h)… Roughly speaking, the key idea of revision theory is that one can categorically assert that an object is G when, for every hypothesis h, the object eventually stabilises in the sequence of revision that starts with h, i.e., it belongs to all the hypotheses in the sequence after a certain ordinal.
Gupta (in “On Circular Concepts”, in Gupta and Chapuis, “Circularity, Definition and Truth”, Indian Council for Philosophical Research, 2000) defined a special type of definitions, called finite definitions, and proved that this class of definitions has nice formal properties, for instance, there is a natural deduction calculus sound and complete for their validities.
The aim of the talk is to introduce several generalizations of finite definitions that still preserve many of their good properties. Given a type of hypotheses T, we will define the following four classes of special circular definitions:
(i) A definition is a T-definition iff for each model M, there is a hypothesis of type T.
(ii) A definition is a uniformly T-definition iff for each model M and each hypothesis h, there is n such that D^n(h) is of type T.
(iii) A definition is a finitely T-definition iff for each M there is n such that, for each h, D^n(h) is of type T.
(iv) A definition is a bounded T-definition iff there is n such that for every M and h, D^n(h) is of type T.
A finite definition is, in this notation, a finitely T-definition, where T is the type of reflexive hypotheses (i.e., those that generate cycles in the directed graph that connects h to D(h)). We will analyze the relations among the different classes of definitions, focusing on the types of reflexive hypotheses and descending hypotheses (i.e. hypotheses which belong to Z-chains in the directed graph).
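A minimal computational sketch of the revision apparatus just described, over a toy finite model; the domain, the definiens, and the crude finite surrogate for “eventually stabilises” are my own illustrative choices, and the sketch of course ignores the transfinite stages of full revision theory:

```python
from itertools import chain, combinations

DOMAIN = {0, 1, 2}

def satisfies(x, h):
    """Toy definiens A(x, G): 'x = 0 or x is not G'."""
    return x == 0 or x not in h

def revise(h):
    """The revision D(h): all elements satisfying the definiens when the extension of G is h."""
    return frozenset(x for x in DOMAIN if satisfies(x, h))

def all_hypotheses(domain):
    """Every subset of the domain, i.e. every hypothesis about the extension of G."""
    elems = list(domain)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(elems, r) for r in range(len(elems) + 1))]

def revision_sequence(h, length):
    """The finite initial segment h, D(h), D(D(h)), ... of the revision sequence."""
    seq = [h]
    for _ in range(length):
        seq.append(revise(seq[-1]))
    return seq

def categorically_G(x, length=16):
    """x counts (in this finite approximation) as categorically G if it eventually stays
    in the revision sequence, whichever hypothesis we start from."""
    return all(all(x in k for k in revision_sequence(h, length)[length // 2:])
               for h in all_hypotheses(DOMAIN))

for h in all_hypotheses(DOMAIN):
    print(set(h), "->", set(revise(h)))   # the directed graph h -> D(h); its cycles
                                          # are the reflexive hypotheses
print("categorically G:", [x for x in DOMAIN if categorically_G(x)])   # here: [0]
```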
Common solutions to several paradoxes. What are they? When should they be expected?
ABSTRACT. In this paper I will examine what a common solution to more than one paradox is and why, in general, such a solution should be expected. In particular, I will explore why a common solution to the Liar and the Sorites should be expected. Traditionally, the Sorites and the Liar have been considered to be unrelated. Nevertheless, there have been several attempts to uniformly cope with them. I will discuss some of these attempts in the light of the previous discussion.
Lavinia Marin (Delft University of Technology, Netherlands)
Online misinformation as a problem of embodied cognition
ABSTRACT. This paper argues that the creation and propagation of misinformation in online environments, particularly in social media, is confronted with specific challenges which are not to be found in offline communication. Starting from the widely accepted definition of misinformation as 'deliberate production and distribution of misleading information' (Floridi, 2013), which we designate as the semantic view of misinformation, we aim to provide a different definition of misinformation based primarily on the pragmatics of communication and on the role of the technological environment. While misinformation in online environments is also false and misleading, its main characteristic is the truncated way in which it is perceived and re-interpreted and, we will argue, this way of processing information belongs foremost to the online environment as such rather than to a defective way of information-processing on the side of the epistemic agent. From this pragmatic perspective, sometimes misinformation is true information which is interpreted and propagated in a biased way. One of the major features of online environments which makes them a medium prone to misinterpretation and bias is that they lead to impoverished sensory information processing. Assuming an embodied cognition view, in its compatibilist version (see Varela et al., 1991; Clark, 1997), the environment in which we exercise our cognitive abilities has a deciding role for our ability to function as epistemic agents, because through our bodies we acquire cognitive states dependent on the environment to which our bodies are exposed. Following this embodied cognition assumption, the online environment presents itself as a challenge through the ways in which it prioritises certain senses while obliterating others: the visual senses are primordial to the detriment of other senses such as touch, smell, and even hearing; moreover, we interact with others in online environments through text messages which favor explicit meanings, while tacit communication and other pragmatic aspects of communication relying on body language and non-verbal signs are lost.
This presentation will describe the constellation of aspects which characterise the pragmatics of communication in online environments and then show why this kind of communicational situation is biased, leading to what we will call an 'incomplete pragmatics' of communication. In online environments, we will argue, misunderstandings are the rule and not the exception, because of the dis-embodied and text-biased forms of communication. We will illustrate our theory of the incomplete pragmatics of online communication with several case studies of online misinformation based on factually true information which is systematically misunderstood.
References
Clark, A. (1997). Being There: Putting Brain Body and World Together Again. Cambridge, MA: MIT Press.
Clark, A. and Chalmers, D. (1998). The extended mind. Analysis, 58, 7-19.
Floridi, L. (2013). The Philosophy of Information. Oxford: Oxford University Press.
Varela, F., Thompson, E., Rosch, E. (1991). The Embodied Mind. Cambridge, MA: MIT Press.
The Role of Cognitive and Behavioral Research on Implicit Attitudes in Ethics
ABSTRACT. Empirical research and philosophy have both recently been interested in what implicit attitudes and biases are and in how they can predict prejudicial behavior towards certain groups (Brownstein 2017; Brownstein, Saul 2017: 1–19). The assumption on the basis of such data is that human beings are sensitive to their own and others’ group identities. The research I focus on shows an effect that mostly goes under the radar: even subjects that have non-racist explicit attitudes can be, and often are, biased by implicit stereotypes and prejudices concerning group identities.
The aim of this talk is to look at this set of data from the perspective of moral philosophy. Thus, on the one hand, I will be interested in analyzing if and to what extent implicit attitudes have an impact on abilities that are crucial for moral judgment and for moral decision-making (the descriptive goal) and, on the other, I will consider whether this influence should bear any normative significance for moral theory (the normative goal). I will deal with whether these implicit attitudes have an impact on some of the key components constituting the basis of our moral abilities, regardless of whether one can be deemed responsible or not. I will thus consider the effects, if any, that implicit attitudes have on empathy and trust, understanding both as related to our way of judging and behaving morally (for discussion on this, e.g. Faulkner, Simpson 2017; Stueber 2013). I will focus on the effects on empathy and trust on the basis of the widely shared, though not universally accepted, assumption that they both play a relevant, though not exclusive, role in morality, and given that experimental data on implicit attitudes seem to suggest at least an unconscious proclivity towards empathizing with and trusting in-groups more than out-groups.
My main claims will be:
(a) The descriptive claim: Implicit in-group bias directly modulates empathy and automatic trust, while it has only a derivative influence on sympathy and deliberated trust.
(b) The normative claim: If moral duties or standards are meant to guide human behavior, then knowing about our implicit biases towards in-groups restricts the set of moral theories that can be prescribed (according to a criterion of psychological realizability). And yet this is not tantamount to claiming that we cannot and should not take action against our implicit attitudes once we have recognized their malevolent influence upon us.
References (selection)
Brownstein, M. 2017, "Implicit Bias", The Stanford Encyclopedia of Philosophy (Spring 2017 Edition), E. N. Zalta (ed.), https://plato.stanford.edu/archives/spr2017/entries/implicit-bias/.
Brownstein, M., and Saul, J. (eds.) 2017, Implicit Bias and Philosophy, Volume 1: Metaphysics and Epistemology, New York: Oxford University Press.
Faulkner, P., and Simpson, T. 2017, The Philosophy of Trust, Oxford: Oxford University Press.
Simpson, T. W. 2012, “What Is Trust?”, Pacific Philosophical Quarterly, 93, pp. 550–569.
Stueber, K. 2013, “Empathy”, The Stanford Encyclopedia of Philosophy (Fall 2016 Edition), E. N. Zalta (ed.), http://plato.stanford.edu/archives/fall2016/entries/empathy/.
Semiotic analysis of Dedekind’s arithmetical strategies
ABSTRACT. In this talk, I will present a case study, which uses close reading of [Dedekind 1871] to study semiotic processes of mathematical text. I focus on analyzing from a semiotic perspective what Haffner [2017] describes as ‘strategical uses of arithmetic’ employed by Richard Dedekind (1831-1916) and Heinrich Weber (1842-1913) in their joint work on function theory [Dedekind and Weber 1882]. My analysis of Dedekind’s representations shows that neither word-for-word correspondences with other texts (e.g. texts on number theory), nor correspondences between words and stable referents, fully capture Dedekind’s “strategic use of arithmetic” in [Dedekind 1871]. This use is thus the product of a textual practice, not a structural correspondence to which the text simply refers.
An important line of argument in [Haffner 2017] is that a mathematical theory (be it function theory as in [Dedekind and Weber 1882] or ideal theory as in [Dedekind 1871]) becomes arithmetical by introducing concepts that are ‘similar’ to number theoretical ones, and by transferring formulations from number theory. Haffner’s claim only emphasizes why we need a better understanding of the production of analogy as a semiotic process. Since the definitions and theorems of [Dedekind 1871] do not correspond word-for-word to number theoretical definitions and theorems, simply saying that two concepts or formulations are ’similar’ neglects to describe the signs that make us see the similarities. Thus, appealing to similarity cannot account for the semiotic processes of the practice that produces analogy of ideal theory to number theory. The case study aims to unfold Haffner’s appeals to similarity through detailed descriptions of representations that Dedekind constructs and uses in [1871].
Dedekind is often pointed to as a key influence in shaping present mathematical textual practices and a considerable part of this influence stems from his development of ideal theory, of which [Dedekind 1871] is the first published version. Therefore, apart from being interesting in their own right, a better understanding of the semiotic processes of this text could contribute to our views on both present mathematical textual practices and late-modern history of mathematics.
References:
Dedekind, R. (1871). Über die Komposition der binären quadratischen Formen. In [Dedekind 1969 [1932]], vol. III. 223-262.
Dedekind, R. (1969) [1932]. Gesammelte mathematische Werke. Vol I-III. Fricke R., Noether E. & Ore O. (Eds.). Bronx, N.Y: Chelsea.
Dedekind, R. and Weber, H. (1882). Theorie der algebraischen Funktionen einer Veränderlichen. J. Reine Angew. Math. 92, 181–290. In [Dedekind 1969 [1932]], vol. I. 238–351.
Haffner, E. (2017). Strategical Use(s) of Arithmetic in Richard Dedekind and Heinrich Weber's Theorie Der Algebraischen Funktionen Einer Veränderlichen. Historia Mathematica, vol. 44, no. 1, 31–69.
Text-driven variation as a vehicle for generalisation, abstraction, proofs and refutations: an example about tilings and Escher within mathematical education.
ABSTRACT. In this talk we want to investigate to what extent mathematical theory building can be understood (or rationally reconstructed) on a text-level approach. This is admittedly only a first approximation to the heuristics actually used in mathematical practice, but it already delivers useful insights.
As a first model we show how simple syntactical variation of statements can yield new propositions to study. We shall show to what extent this mechanism can be used in mathematical education to develop a more open, i.e. research-oriented, experience for participating students.
Not all such variations yield fruitful fields of study, and several of them are most likely not even meaningful. We develop a quasi-evolutionary account to explain why this variational approach can help to develop an understanding of how new definitions replace older ones and how mathematicians choose the axiomatisations and theories they study.
We shall give a case study within the subject of ‘tilings’. There we begin with the basic question of which regular (convex) polygons can be used to construct a tiling of the plane; a question in principle accessible with high school mathematics. Small variations of this problem quickly lead to new sensible fields of study. For example, allowing combinations of different regular (convex) polygons yields the Archimedean tilings of the plane, while introducing the notion of ‘periodicity’ paves the way for questions related to so-called Penrose tilings. It is easy to get from a high school problem to open mathematical research by introducing only a few notions and syntactical variations of the proposed questions.
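As a minimal illustration of the opening question (my own sketch, not taken from the talk), the following checks which regular n-gons admit an edge-to-edge tiling of the plane by testing whether the interior angle divides 360 degrees:

```python
# A small sketch of the starting question: a regular n-gon tiles the plane
# edge-to-edge exactly when copies of it can meet around a vertex with no gap,
# i.e. when its interior angle divides 360 degrees.
from fractions import Fraction

def tiles_plane(n: int) -> bool:
    """True iff the regular n-gon admits a monohedral edge-to-edge tiling."""
    interior = Fraction((n - 2) * 180, n)      # interior angle in degrees
    return Fraction(360) % interior == 0

print([n for n in range(3, 13) if tiles_plane(n)])   # [3, 4, 6]
```

Only the triangle, square and hexagon survive; the syntactic variations mentioned above (mixing polygon types, adding periodicity) then open the richer fields of study.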
Additionally, we shall offer a toy model of the heuristics used in actual mathematical practice, obtained by structuring a mathematical question together with variations of its parts on a syntactical level. This first step is accompanied by a semantic check to avoid category mistakes. On a quasi-evolutionary account, the most fruitful questions get studied, which leads to the development of new mathematical concepts.
If time permits, we will show that this model can also be applied to newly emerging fields of mathematical research.
This talk is based on work used for enrichment programs for mathematically gifted children and on observations from working mathematicians.
Interactive Turing-complete logic via game-theoretic semantics
ABSTRACT. We define a simple extension of first-order logic via introducing self-referentiality operators and domain extension quantifiers. The new extension quantifiers allow us to insert new points to model domains and also to modify relations by adding individual tuples. The self-referentiality operators are variables ranging over subformulas of the same formula where they are used, and they can be given a simple interpretation via game-theoretic semantics.
We analyse the conceptual properties of this logic, especially the way it links games and computation in a one-to-one fashion. We prove that this simple extension of first-order logic is Turing-complete in the sense of descriptive complexity: it exactly captures the expressive power of Turing machines, so that for every Turing machine there exists an equivalent formula, and vice versa.
We also discuss how this logic can describe classical compass and straightedge constructions of geometry in a natural way. In classical geometry, the mechanisms of modifying constructions are analogous to the model modification steps realizable in the Turing-complete logic. Also the self-referentiality operators lead to recursive processes omnipresent in everyday mathematics.
The logic has a very simple translation to natural language which we also discuss.
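As a point of reference (and not the authors' system, which adds the extension quantifiers and self-referentiality operators described above), here is a minimal sketch of game-theoretic semantics for plain first-order logic over a finite model; the verifier has a winning strategy exactly when the formula is true:

```python
# A toy evaluator: formulas are nested tuples, e.g.
# ("forall", "x", ("exists", "y", ("R", "x", "y"))). Negation swaps the roles
# of verifier and falsifier; quantifier moves are choices of domain elements.

def verifier_wins(formula, domain, relations, assignment=None, verifier=True):
    """True iff the initial verifier has a winning strategy for `formula`."""
    assignment = assignment or {}
    kind = formula[0]
    if kind == "not":                      # the players swap roles
        return verifier_wins(formula[1], domain, relations, assignment, not verifier)
    if kind in ("or", "and"):              # verifier picks disjuncts, falsifier conjuncts
        branches = [verifier_wins(f, domain, relations, assignment, verifier)
                    for f in formula[1:]]
        verifier_moves = (kind == "or")
        return any(branches) if verifier == verifier_moves else all(branches)
    if kind in ("exists", "forall"):       # verifier picks witnesses, falsifier challenges
        _, var, body = formula
        picks = [verifier_wins(body, domain, relations, {**assignment, var: a}, verifier)
                 for a in domain]
        verifier_moves = (kind == "exists")
        return any(picks) if verifier == verifier_moves else all(picks)
    rel, *vars_ = formula                  # atomic formula ends the game
    holds = tuple(assignment[v] for v in vars_) in relations[rel]
    return holds if verifier else not holds

# Example: "every point has an R-successor" on a 3-element cycle.
domain = {0, 1, 2}
relations = {"R": {(0, 1), (1, 2), (2, 0)}}
print(verifier_wins(("forall", "x", ("exists", "y", ("R", "x", "y"))), domain, relations))
```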
ABSTRACT. We analyse so-called pure win-lose coordination games (WLC games) in which all players receive the same payoff, either 1 ("win") or 0 ("lose"), after every round. We assume that the players cannot communicate with each other and thus, in order to reach their common goal, they must make their choices based on rational reasoning only.
We study various principles of rationality that can be applied in these games. We say that a WLC game G is solvable with a principle P if winning G is guaranteed when all players follow P. We observe that there are many natural WLC games which are not solvable in a single round by any principle of rationality, but which become solvable in the repeated setting, where the game can be played several times until the coordination succeeds.
Based on our analysis of WLC games, we argue that it is very hard to characterize which principles are "purely rational" - in the sense that all rational players should follow such principles in every WLC game.
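As a minimal sketch of the solvability notion (my own illustration, not the authors' formalism), the following represents a two-player WLC game as a shared 0/1 payoff matrix and checks whether a candidate principle guarantees a win in a single round; the principle and the example games are invented for illustration:

```python
from itertools import product

def solves(principle, game):
    """True iff every combination of choices permitted by `principle` wins.
    game[a][b] is the common 0/1 payoff when player 1 picks a and player 2 picks b."""
    choices_1 = principle(game, player=0)
    choices_2 = principle(game, player=1)
    return all(game[a][b] == 1 for a, b in product(choices_1, choices_2))

def keep_best_actions(game, player):
    """A candidate principle: keep only the actions with the most winning combinations."""
    n_rows, n_cols = len(game), len(game[0])
    if player == 0:
        score = {a: sum(game[a][b] for b in range(n_cols)) for a in range(n_rows)}
    else:
        score = {b: sum(game[a][b] for a in range(n_rows)) for b in range(n_cols)}
    best = max(score.values())
    return {act for act, s in score.items() if s == best}

solvable   = [[0, 1],      # action 1 wins against everything for both players,
              [1, 1]]      # so the principle singles it out and coordination succeeds
unsolvable = [[1, 0],      # two symmetric winning cells: nothing in the payoff
              [0, 1]]      # structure breaks the tie in a single round

print(solves(keep_best_actions, solvable))     # True
print(solves(keep_best_actions, unsolvable))   # False
```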
Empirical Identity as an Indicator of Theory Choice
ABSTRACT. There are many accounts of theory choice in the philosophy of science, but no indicator of scientific theories has been precisely defined, let alone an index system. Using empirical identity as an example, I shall show that a range of scientific indicators for deciding theory choice can be precisely defined from a few basic concepts. I think that these indicators can provide us with a better description of the principles of philosophy of science. The pursuit of theories' empirical identity and novelty leads to the cumulative view of scientific progress; under non-cumulative circumstances it remains entirely practicable to judge a theory's empirical identity as well as its empirical novelty; and empirical identity underdetermines the acceptance of a particular theory. It is possible that all the principles of philosophy of science could be explained anew through such an index system of theory choice, and thus a more rigorous theory of philosophy of science could be established.
18:30
Cristin Chall (University of South Carolina, United States)
Abandoning Models: When Non-Empirical Theory Assessment Ends
ABSTRACT. The standard model of physics has several conceptual problems (for example, it has no explanation for the three almost identical particle generations) and explanatory gaps (for example, it has no dark matter candidate). These issues have motivated particle physicists to develop and test new models which go beyond the standard model (BSM). Currently, none of the unique predictions of any BSM model has met with experimental success. The Large Hadron Collider, the most powerful particle accelerator in the world, has reached unprecedented energies, but it has found no compelling evidence for the new physics which researchers are convinced we will eventually find. Despite these experimental failures, physicists continue to pursue various BSM projects. The various groups of BSM models receive different degrees of confidence from physicists, which can be roughly tracked by observing the number of preprints and publications detailing them and the way they are discussed in the summary talks of large physics conferences. From this, we can see that the core ideas of these BSM models persist, even as various concrete predictions stemming from those core ideas fail. This persistence can be explained using classic schemes of theory assessment. For example, once suitably modified to accommodate models alongside theories, Lakatosian research programmes and Laudanian research traditions offer compelling explanations for this phenomenon: in Lakatos the hard cores of BSM projects are shielded from contrary experimental results, while in Laudan BSM projects can be understood as solving problems, despite their experimental failings.
However, by invoking such explanations, a new problem arises. With the next phase of particle physics uncertain, since there is no consensus on the plans for the next generation of particle accelerators, it is unclear how the various BSM models are properly discriminated without empirical findings to determine success and failure. Non-empirical justifications can be given for the continued pursuit of these kinds of models and theories (those which make predictions we lack the capacity to test), but we must also analyse the non-empirical justifications for abandoning a line of research. I will argue that particle physicists lose confidence in, and eventually abandon, models because of two related factors. First, although a framework like Lakatos's or Laudan's can insulate BSM models from the experimental failures for a time, as the prospects of finding evidence for new physics at the LHC diminish, there is an equivalent depreciation of confidence in these models as a consequence of this lack of fruitfulness. Second, changes in the degree of support for the problems and solutions motivating BSM models cause similar changes in support for the models (for instance, with less confidence in naturalness as a guiding principle, models built to be more natural fall out of favour). These two factors lead to increasing disfavour in these models, and eventually their abandonment. Once we have established this non-empirical reasoning behind giving up models and theories, we can add it to the non-empirical assessment criteria at play in cases where theory extends beyond experiment.
19:00
In-Rae Cho (Seoul National University, South Korea)
Toward a Coevolutionary Model of Scientific Change
ABSTRACT. In this work, I attempt to develop a coevolutionary model of scientific change, which affords a more balanced view of both the continuous and discontinuous aspects of scientific change. Supposing that scientific inquiry is typically goal-directed, I am led to propose that scientific goals, methods and theories constitute the main components of scientific inquiry, and to investigate the relationships among these components and their changing patterns. In doing so, first of all, I identify explanatory power and empirical adequacy as primary goals of science. But, facing what I call the problem of historical contingency, according to which those primary scientific goals cannot be justified because they are historically contingent, I explore the possibility of evaluating scientific goals and suggest that several well-motivated measures of evaluating scientific goals allow us to alleviate the problem of historical contingency. Then I try to bring out the major features of how those main components of science are related to each other. One major feature is that they mutually constrain each other, and thus each main component operates as a selective force on the other components. Another major feature is that the main components of science are induced to change reciprocally, but with certain intervals. Considering these features together, I suggest that scientific change is evolutionary (rather than revolutionary), as well as coevolutionary. Further, I claim that there are other important features which deserve our serious attention: the modes and tempos of change in the main components of scientific inquiry. Firstly, the modes of change in the main components of scientific inquiry are not homogeneous. That is to say, unlike what has happened to scientific methods and theories throughout the history of scientific inquiry, what I take to be the primary goals of science seem to have experienced a sort of strong convergence. Secondly, the tempos of change in the main components of scientific inquiry are also not homogeneous. In particular, the tempo of change in the primary goals of science seems to have been much slower than that in methods or theories. So I come to conclude that, despite the mutually constraining relationships among these main components, what really anchors scientific activities is goals rather than methods or theories. Finally, I argue that this coevolutionary model of scientific change does not yield to what I call the problems of circularity and scientific progress. The problem of circularity is that the evaluation process occurring in the coevolutionary model of scientific change is structurally circular. I argue, however, that the changes resulting from evaluating one of the main components of scientific inquiry under the constraints of the other components are not viciously circular but usually self-corrective. The problem of scientific progress, in turn, results from the observation that my coevolutionary model seems quite similar to Laudan's reticulated model of scientific change. While admitting that there are significant similarities between the two models, I claim that the mode of scientific progress in my coevolutionary model is not transient, as it is in Laudan's reticulated model, but transitive.
Dazhou Wang (University of Chinese Academy of Sciences, China)
A Phenomenological Analysis of Technological Innovations
ABSTRACT. Inspired by Martin Heidegger, phenomenological analysis has become one of the mainstream approaches in the philosophy of technology and has greatly advanced our understanding of the place of technology in society. So far, however, such studies have been largely static in character, and there is virtually no phenomenological analysis of technological innovations. For instance, the central questions in post-phenomenologist Don Ihde's investigations are what role technology plays in everyday human experience and how technological artifacts affect people's existence and their relation with the world. According to him, there are four typical relations between humans and artifacts, i.e., the embodiment relation, the hermeneutic relation, the alterity relation and the background relation. These are relations between a "given" user and a "given" artifact within a "given" lifeworld, taking no account of the creation of artifacts and the dynamic interaction among the user, new artifacts and the world. It is true that Heidegger himself discussed the "present-at-hand" state, which could lead to further "thematic investigation", the very beginning of innovative practices. In this way, his analysis certainly suggests the emergence of innovative practices. However, his focus was mainly on the structure of routine practice, and the very essence of innovative practice was largely set aside.
In this paper, the author attempts to develop a phenomenology of technological innovations to make up for these shortcomings. It is argued that, ontologically, technological innovations drive, and lie within, the existential cycle of human beings. Technological innovations stem from ruptures of the lifeworld and the "circular economy". With such ruptures, all the basic elements of social practice, i.e. the human, the natural, the institutional and the technological, are brought to light, and innovation is the very process of transactions among various stakeholders who dispute with each other and try to resolve these disputes. In this process, through a series of mangles and experiments, technological problems are first defined and then gradually resolved, until eventually a new collective of humans and nonhumans is formed and the lifeworld is renewed. Therefore, if technology is the root of human existence, technological innovation, building on the related traditions as ready-made things, is the dangerously explorative journey of creating new possibilities for human existence. The essence of human existence lies in technology (being) as well as in innovations (becoming), especially technological innovations. To break the solidification of society, the fundamental means is to conduct technological innovations. So it is absurd simply to reject technologies and innovations, and the only choice is to pursue technological innovations proactively and responsibly. Responsible innovation means not so much letting more prescient and responsible people into the innovation process to supervise the less prescient and irresponsible innovators, but rather helping to broaden the innovation vision and to share such increasingly weighty responsibilities through public participation and equal dialogue.
18:30
Paulina Wiejak (Universita Politecnica delle Marche, Italy)
On Engineering Design. A Philosophical Inquiry
ABSTRACT. In my work, I would like to explore the area of engineering design through the philosopher's glass. First, I look at the whole process of engineering design - as described by Pahl and Beitz [4] - as a perfect combination of ancient techne and episteme [7], that is, of art or craft and of theoretical knowledge. In this part, I will try to build a bridge between Aristotelian thought and contemporary discourse in engineering design. Second, focusing on so-called conceptual and embodiment design, I would like to explore the notion of representation. In particular, I would like to use the work of Roman Ingarden on works of art and apply his analysis to name and interpret elements of what engineers often call 3D models or 3D representations. Third, I would like to recognize the usefulness of the idea of 'directions of fit' [1,6] in the area of manufacturing, as it is presented in [2], and try to apply this idea to the area of Computer-Aided Design.
Bibliography
[1] John Searle. Intentionality. Oxford University Press, 1983.
[2] Michael Poznic. “Modeling Organs with Organs on Chips: Scientific Representation and Engineering Design as Modeling Relations”. In: Philosophy and Technology 29.4 (2016), pp. 357–371.
[3] Tarja Knuuttila. “Modelling and Representing: An Artefactual Approach to Model-Based Representation”. In: Studies in History and Philosophy of Science Part A 42.2 (2011), pp. 262–271.
[4] Gerhard Pahl, W. Beitz, Jörg Feldhusen, Karl-Heinrich Grote. Engineering Design. A Systematic Approach. Springer-Verlag London, 2007.
[5] Mieke Boon and Tarja Knuuttila. “Models as Epistemic Tools in Engineering Sciences”. In: Philosophy of Technology and Engineering Sciences. Ed. by Anthonie Meijers. Handbook of the Philosophy of Science. Amsterdam: North-Holland, 2009, pp. 693–726.
[6] G. E. M. Anscombe. Intention. Harvard University Press, 1957.
[7] Parry, Richard, "Episteme and Techne", The Stanford Encyclopedia of Philosophy (Fall 2014 Edition), Edward N. Zalta (ed.).
19:00
Aleksandr Fursov (M.V. Lomonosov Moscow State Univercity, Russia)
The anthropic technological principle
ABSTRACT. The idea that humankind is something transient is widely discussed in modern philosophy. The nightmare of both humanistic and trans-humanistic philosophy is that humanity will inevitably face insuperable limits to further development or, even worse, may turn out to be an evolutionary dead end. In public consciousness this idea is often represented by scenarios in which artificial intelligence will "conquer the world".
K. Popper said that there is only one step from the amoeba to Einstein. He meant this rhetorical statement to have a concrete methodological significance, but the significance can also be ontological. Approximately 3 billion years ago cyanobacteria started to produce oxygen. They simply needed energy, so they obtained it from water and filled the Earth's atmosphere with oxygen. These bacteria did not have the "aim" of producing oxygen; they were satisfying their needs.
Nowadays homo sapiens produces a wide range of technical devices, including artificial intelligence, in order to satisfy its needs in communication, safety, energy supply and political power. As in the case of cyanobacterial oxygen production, technical development is not the aim but just an instrument for humanity. If we continue the analogy between cyanobacteria and homo sapiens from the "Universe point of view", we can suppose that the technical sphere, as a co-product of human activity, may be the precondition for the genesis of a new form of being, just as the oxygen atmosphere was the precondition for the genesis of more complex aerobic organisms. But in this case the transience of humankind must be fixed ontologically.
So, we can develop the anthropic technological principle if we attempt to preserve the special ontological status of homo sapiens. It claims that technological development must be controlled in order to prevent, in principle, the complete elimination of humans. Every technical device must be constructed in a way that makes its full-fledged functioning, including self-reproduction, impossible without human sanction. This principle is based on the strong supposition that we really can control technical development.
The anthropic technological principle, in contrast with the anthropic cosmological principle, is purely normative. We cannot use it in an explanatory scheme. It should be understood as an evolutionary defensive mechanism of homo sapiens.
Modeling Biological Possibilities in Multiple Modalities
ABSTRACT. A noticeable feature of contemporary modeling practice is the employment of multiple models in the study of the same phenomena. Following Levins’ (1966) original insight, philosophers of science have studied multiple modeling through the notions of triangulation and robustness. Among the different philosophical notions of robustness one can discern, first, those that focus on robust results achieved by triangulating independent epistemic means and, second, those that target the variation of the assumptions of a group of related mathematical models (Knuuttila and Loettgers 2011). The discussion of modeling has concentrated on the latter kind of robustness, as the models being triangulated are typically not independent (cf. Orzack and Sober 1993). Yet the problem with robust results derived from related mathematical models sharing a common “causal” core is that they may all be prone to the same systematic error (Wimsatt 2007). One compelling strategy to safeguard against systematic errors is the construction of related models in different material modalities. This paper considers such multiple modeling practices through the cases of synthetic genetic circuits and minimal cells. While the various incarnations of these model systems are not independent by design, they utilize independent media – mathematical, digital and material – thus mitigating the errors prone to arise from using only one kind of modeling framework. Moreover, the combination of a related model design and independent representational media also addresses worries concerning inconsistent and discordant evidence (e.g. Stegenga 2009). The cases studied also highlight another important dimension of multiple modeling: the study of what is possible. While much of scientific modeling can be understood as an inquiry into various kinds of possibilities, the research practice of synthetic biology takes modeling beyond mere theoretical conceivability toward material actualizability.
References:
Knuuttila, T., & Loettgers, A. (2011). Causal isolation robustness analysis: The combinatorial strategy of circadian clock research. Biology and Philosophy, 26(5), 773-791.
Levins, R. (1966). The strategy of model building in population biology. American Scientist, 54(4), 421-431.
Orzack, S. H., & Sober, E. (1993). A Critical assessment of Levins’s The strategy of model building in population biology (1966). The Quarterly Review of Biology, 68(4), 533-546.
Stegenga, J. (2009). Robustness, Discordance and Relevance. Philosophy of Science 76, 650-661.
Wimsatt, W.C. (2007). Re-engineering philosophy for limited beings: Approximations to reality. Harvard University Press, Cambridge.
Jens Kipper (University of Rochester, United States)
Intuition, Intelligence, Data Compression
ABSTRACT. The main goal of this paper is to argue that data compression is a necessary condition for intelligence. One key motivation for this proposal stems from a paradox about intuition and intelligence. For the purposes of this paper, it will be useful to consider playing board games—such as chess and Go—as a paradigm of problem solving and cognition, and computer programs as a model of human cognition. I first describe the basic components of computer programs that play board games, namely value functions and search functions. A search function performs a lookahead search by calculating possible game continuations from a given board position. Since many games are too complex to be exhausted by search, game-playing programs also need value functions, which provide a static evaluation of a given position based on certain criteria (for example, space, mobility, or material). As I argue, value functions both play the same role as intuition in humans (which roughly corresponds to what is often called 'System 1 processing') and work in essentially the same way. Increasingly sophisticated value functions take more features of a given position into account, which allows them to provide more accurate estimates about game outcomes, because they can take into account more relevant information. The limit of such increasingly sophisticated value functions is a function that takes all of the features of a given position into account and determines as many equivalence classes of positions as there are possible positions, thereby achieving perfect accuracy. Such a function is just a complete database that stores the game-theoretic values of all possible positions. Following Ned Block (1981), there is widespread consensus that a system that solves a problem by consulting a database—or, as in Block’s example, a 'lookup table'—does not exhibit intelligence.
This raises our paradox, since reliance on intuition—both inside and outside the domain of board games—is usually considered as manifesting intelligence, whereas usage of a lookup table is not. I therefore introduce another condition for intelligence that is related to data compression. According to my account, for a judgment or action to be intelligent, it has to be based on a process that surpasses both a certain threshold of accuracy and a certain threshold of compression. In developing this account, I draw on complexity measures from algorithmic complexity theory (e.g. Kolmogorov 1963). My proposal allows that reliance on a lookup table—even if it is perfectly accurate—can be nonintelligent, while retaining that reliance on intuition can be highly intelligent. As I explain, and as has been pointed out by researchers in computer science (e.g. Hernández-Orallo 2017), there are strong theoretical reasons to assume that cognition and intelligence involve data compression. Moreover, my account also captures a crucial empirical constraint. This is because all agents with limited resources that are able to solve complex problems—and hence, all cognitive systems—need to compress data.
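As a rough, self-contained illustration of the compression condition (my own toy example, not the paper's formal account), the following contrasts a lookup table with a short rule computing the same toy "value function", using zlib output size as a crude, computable stand-in for Kolmogorov complexity, which is itself uncomputable:

```python
import json
import zlib

# Toy domain: positions are integers 0..999; the "game-theoretic value" is 1
# (win) when the position is divisible by 3, else 0 (loss). Purely illustrative.
positions = range(1000)
lookup_table = {p: int(p % 3 == 0) for p in positions}

# A "value function" capturing the same mapping as a short rule.
value_rule_source = "def value(p): return int(p % 3 == 0)"

def description_length(text: str) -> int:
    """Compressed size in bytes: a crude proxy for descriptive complexity."""
    return len(zlib.compress(text.encode("utf-8")))

table_len = description_length(json.dumps(lookup_table))
rule_len = description_length(value_rule_source)
print(f"lookup table ~{table_len} bytes, rule ~{rule_len} bytes")
# Both are perfectly accurate on this domain, but only the rule is highly
# compressed -- the kind of asymmetry the compression condition is meant to track.
```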
References
Block, Ned (1981). Psychologism and Behaviorism. Philosophical Review 90, 5–43.
Hernández-Orallo, José (2017). The Measure of All Minds. CUP.
Kolmogorov, Andrey (1963). On Tables of Random Numbers. Sankhyā Ser. A. 25, 369–375.
ABSTRACT. How to establish whether a sentence, a statement, or a theory is true has become a problem of public relevance. The claim that scientific knowledge is not a privileged way of understanding reality and that, therefore, there are no good reasons for using science as the basis for certain decisions has become a widespread argument. Even prevalent relativistic conceptions of science, like Fuller’s, defend the acceptance of post-truth: “a post-truth world is the inevitable outcome of greater epistemic democracy. (…) once the instruments of knowledge production are made generally available—and they have been shown to work—they will end up working for anyone with access to them. This in turn will remove the relatively esoteric and hierarchical basis on which knowledge has traditionally acted as a force for stability and often domination.” (Fuller, 2016: 2-3). However, epistemic democracy does not necessarily lead to the acceptance of that position. As the editor of Social Studies of Science has pointed out: “Embracing epistemic democratization does not mean a wholesale cheapening of technoscientific knowledge in the process. (...) the construction of knowledge (...) requires infrastructure, effort, ingenuity and validation structures.” (Sismondo, 2016: 3). Post-truth, defined by the claim that “what I want to be true is true” in a post-truth culture (Wilber, 2017, p. 25), rests on a voluntaristic notion of truth, and there is nothing democratic in individualistic and whimsical decisions about the truthfulness of a statement. For radical relativism, scientific consensus is reached by the same kind of mechanisms as in other social institutions, i.e. by networks that manufacture the “facts” using political negotiation, or by other forms of domination.
However, the notion of truth that relativists (for instance Zackariasson, 2018: 3) are attacking is actually a straw man: the “God’s eye point of view” that very few philosophers or scientists still defend. We suggest that an alternative to post-truth arguments, one that at the same time suggests mechanisms for developing a real epistemic democracy, is the pragmatist notion of truth. This could seem controversial if we have in mind the debunked and popularized version of pragmatism, according to which the usefulness of an assertion is the only thing that counts in favour of its being true. Nevertheless, whether for classical pragmatists such as Dewey or for neo-pragmatists (e.g. Kitcher, 2001), the construction of scientific knowledge, with all its limitations, is the best way of reaching, if not the Truth, then at least partial and rectifiable, yet reliable and well-built, knowledge.
REFERENCES
Fuller, S. (2016). Embrace the inner fox: Post-truth as the STS symmetry principle universalized. Social Epistemology Review and Reply Collective.
Kitcher, P. (2001). Science, democracy and truth. Oxford U.P.
Sismondo, S. (2016). Post-truth? Social Studies of Science Vol. 47(1) 3–6.
Wilber, K. (2017). Trump and a Post-Truth World. Shambhala.
Zackariasson, U. (2018). Introduction: Engaging Relativism and Post-Truth. In M. Stenmark et al. (eds.), Relativism and Post-Truth in Contemporary Society. Springer.
18:30
Nataliia Viatkina (Institute of Philosophy of National Academy of Sciences of Ukraine, Ukraine)
Deference as Analytic Technique and Pragmatic Process
ABSTRACT. The goal of the paper is to consider what determines deference in cases of inaccurate and untested knowledge. A deferential concept is a sort of concept in play when people use a public word without fully understanding what it typically expresses (Recanati, F., "Modes of Presentation: Perceptual vs. Deferential", 2001). What happens at the level of the everyday use of language?
There is a link between the social stimulation of certain uses of words, social learning, and the various "encouragements for objectivity" that lead to the correction of everything that is not consistent with the generally accepted use of words and meanings (Quine, Word and Object, 1960), on the one hand, and deference, on the other, which reveals the chain of these uses, distortions and refinements leading back to some problematic first use of the term.
When a philosopher performing a conceptual analysis affirms a causal relationship but does not care to analyse the cause itself, relying instead on specialists, we say that such a philosopher applies the analytic technique of 'Gricean deference' (Cooper, W. E., "Gricean Deference", 1976). This technique allows the philosopher to be free from any responsibility for explaining the nature of the causes. From this point of view, the philosopher at a certain point in her analysis defers to a specialist in the relevant science, competent to talk about the causal relationships. 'Deferentially' means relying on someone else's thought, opinion, or knowledge.
The well-known post-truth phenomenon is interpreted as a result of a deferential attitude to information, knowledge and various data concerning reality.
Along with linguistic and epistemic deference and their forms of default and intentional deference (Woodfield, A., Reference and Deference, 2000; Stojanovic, De Brabanter, Fernandez, Nicolas, Deferential Utterances, 2005), the so-called "backfire effect" will be considered. The "backfire effect" names the phenomenon in which "misinformed people, when confronted with mistakes, cling even more fiercely to their incorrect beliefs" (Tristan Bridges, Why People Are So Averse to Facts, The Society Pages, http://thesocietypages.org).
A problem remains as to the approach within which one could correlate instances of falsity-by-misunderstanding with cases in which a speaker openly prefers to use expressions as someone else from the nearest linguistic community does, following custom or authority.
Being a pragmatic process, deference is responsible for the lack of transparency in meta-representations (Recanati, op. cit.). So what determines deference lies in the basic concepts of the theory of opacity, in meta-representations, and in the mechanism of deference in connection with opacity and meta-representations. Last, but not least, in this sequence is the application of the mechanism of deference to problems of imperfect mastery and unconsciously deferential thoughts (Fernandez, N. V., Deferential Concepts and Opacity).
Karl Popper, prehistoric technology and cognitive evolution
ABSTRACT. More than a century and a half after Darwin it is almost a commonplace that the human species is the outcome of an evolutionary process going back to the origin of life. Just as the human brain and body have been shaped by evolutionary pressures operating in our ancestral past, the biological structures and mechanisms underlying human cognition might also have been selected for, and it probably took millions of years for the distinctive cognitive faculties to evolve.
One way to find possible evolutionary explanations of these cognitive abilities is to explore the domain of prehistoric stone tool technology. Scholarly interest in the evolutionary impacts of Lower Paleolithic stone tool making (and use) on the initial emergence of hominin cognitive behaviour is growing steadily. The most controversial questions include, for example: how, in the evolutionary process, did cognitive abilities (or consciousness) emerge in a world hitherto purely physical in its attributes? Or, what role did these early stone tools play in the evolution of the human (or hominin) cognitive system? Stone tools are typically described in the archaeological literature as mere products of hominin cognition. Evidently the causal arrow assumed in this standard perception is one-way: from cognition to tools or artefacts.
Since the late 1990s several interesting approaches to cognition have emerged that challenge this simplistic one-way-causal-arrow view. Cognitive processes are increasingly interpreted not as something happening entirely inside our heads but as extended and distributed processes (e.g., Clark & Chalmers 1998; Hutchins 2008). It is interesting to note that Karl Popper's theory of the emergence of consciousness (or cognition) posed a serious challenge to the one-way-causal-arrow view decades before the appearance of this beyond-the-body conception of human cognition. Reinterpreting Darwin's views on the biological function of mental phenomena, Popper's psycho-physical interactionist (though not dualist-interactionist) theory (Popper and Eccles 1977; Popper 1978) not only questioned the merely epiphenomenal status of tools or artefacts but placed great emphasis on the role of such extra-somatic environmental resources in transforming and augmenting human cognitive capacities. What is more, Popper's conjectures about the emergence of consciousness seem strongly convergent with current experimental-archaeological research on early stone tool making and cognitive evolution (e.g., Jeffares 2010; Malafouris 2013).
The present paper seeks to synthesize the critical insights of Popper with those of the experimental-archaeologists to see if some fresh light could be thrown on the problem of hominin cognitive evolution.
References:
1. Clark, A. & Chalmers D. 1998. The Extended Mind. Analysis 58 (1): 7-19.
2. Hutchins, E. 2008. The Role of Cultural Practices in the Emergence of Modern Human Intelligence. Philosophical Transactions of the Royal Society 363: 2011-2019.
3. Jeffares, B. 2010. The Co-evolution of Tools and Minds: Cognition and Material Culture in the Hominin Lineage. Phenomenology and Cognitive Science 09: 503-520.
4. Malafouris, L. 2013. How Things Shape the Mind. Cambridge Mass: The MIT Press.
5. Popper, K. R. & Eccles, J. C. 1977. The Self and Its Brain. Berlin: Springer-Verlag.
6. Popper, K. R. 1978. Natural Selection and the Emergence of the Mind. Dialectica 32 (3-4): 339-355.
CANCELLED: Peirce on the Logic of Science – Induction and Hypothesis
ABSTRACT. In his Harvard Lectures on the Logic of Science from 1865 Peirce for the first time presented his logical theory of induction and hypothesis as the two fundamental forms of scientific reasoning. His study of the logic of science seems to have been initiated by the claim of William Hamilton and Henry L. Mansel that “material inferences”, which Peirce calls a posteriori and inductive inferences, are to be considered “extralogical”. In consequence they regarded the principles of science, which Kant maintained to be valid a priori, as axioms that are “not logically proved”. In opposition to this view, Peirce in his Harvard Lectures seeks to establish, first, that deduction, induction and hypothesis are three irreducible forms of reasoning which can be analysed, with reference to Aristotle’s three figures of syllogism, as the inference of a result, the inference of a rule and the inference of a case respectively. Second, with reference to Kant’s doctrine that a synthetic “inference is involved in every cognition” and Whewell’s distinction of fact and theory, he seeks to establish that “every elementary conception implies hypothesis and every judgment induction”, that we can therefore never compare theory with facts but only one theory with another, and that consequently the universal principles of science, for instance the principle of causation, as conditions of every experience understood as a theory inferred from facts, can never be falsified and are therefore valid a priori. Peirce develops his position by examining the theories of induction of Aristotle, Bacon, Kant, Whewell, Comte, Mill and Boole. The paper will first reconstruct the main points of Peirce’s discussion of the theories of these authors and give a critical account of his arguments and motives, and second analyse his syllogistic and transcendental solution of the problem of a logical theory of scientific reasoning. This second part will be supplemented by an account of the significance of later revisions of the logic of science by Peirce in his Lowell Lectures (1866), the American Academy Series (1867), the Cognition Series (1868/69), the Illustrations of the Logic of Science (1877/78) and How to Reason (1893). The main focus of the paper will be Peirce’s reformulation of Kant’s conception of transcendental logic as a logic of science, and therefore of synthetic reasoning, with reference to a reinterpretation of Aristotle’s Syllogistic.
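For readers unfamiliar with the three inference forms mentioned above, the following small sketch (an illustration added here, using Peirce's familiar bean example from the Illustrations of the Logic of Science) shows how deduction, induction and hypothesis permute rule, case and result:

```python
# Peirce's bean example: each inference form takes two of {rule, case, result}
# as premises and infers the remaining one.

RULE   = "All the beans from this bag are white."
CASE   = "These beans are from this bag."
RESULT = "These beans are white."

inferences = {
    "deduction":  ((RULE, CASE), RESULT),    # infers the result
    "induction":  ((CASE, RESULT), RULE),    # infers the rule
    "hypothesis": ((RULE, RESULT), CASE),    # infers the case
}

for name, (premises, conclusion) in inferences.items():
    print(f"{name}: {premises[0]} + {premises[1]}  =>  {conclusion}")
```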
ABSTRACT. I document the usage and meaning of the term 'hypothesis' with original quotations from its earliest origins to contemporary sources. My study is based on some 100 authors from all fields of knowledge and academic cultures, including first-hand reflections by scientists, from which I shall present a selection. I focus on explicit methodological statements by these authors and do not investigate concrete hypotheses.
The interpretations of the term hypothesis developed over time, and conflict and disagreement often resulted from not speaking a common language. Philologically, no single definition captures all of its meanings, and usage is in fact often contradictory. The purposes of this exercise are several: (1) To give meaning to expressions such as more-than-hypothesis, or pejoratives like mere-hypothesis. (2) To provide a lexical overview of the term. (3) To elaborate on the different kinds of epistemes. (4) To classify hypotheses (phenomenological postulates, models, instruments, and imaginary cases). (5) To trace the origins of evidence-based hypothetico-deductive epistemology (Bellarmine - Du Châtelet - Whewell - Peirce - Einstein - Popper). (6) To demarcate the term from several related ones (theory, thesis, principle, fact). Notwithstanding personal preferences, “hypothesis” shall remain a term with multiple, even mutually exclusive, connotations; what counts is giving exemplars of use (Kuhn!).
For purposes of illustration, let me quote from a table of my finished manuscript with the principal interpretations of the term hypothetical: not demonstrated, unproven but credible, capable of proof if asked for one, presumption; an element of a theory, système, a subordinate thesis, proposal, assumption, paradigm, presupposition; a kind of syllogism (if – then), conditional certitude; a statement expressing diverse degrees of probability (morally certain, probable, fallible, falsifiable, reformable, tentative, provisional, unconsolidated, subjective, speculative, fictitious, imaginary, illegitimate assumption, unspecified); pejorative uses, diminutive; ex suppositione – to save the phenomena (instrumentalism), mathematical contrivances, ex hypothesi – why?, a model, mutual base of a discourse, reconciles reason with experience; suppositio – postulate, rule, prediction, evidence-based hypothetico-deductive, that which is abducted, guess; a third category besides ideas and reality, a blueprint for change, free inventions of the mind. Hypothesis, supposition and conjecture are roughly synonyms.
A full length manuscript on the subject of my conference presentation is available for inspection by those interested.
Incompleteness-based formal models for the epistemology of complex systems
ABSTRACT. The thesis I intend to argue is that formal approaches to epistemology deriving from Gödel's incompleteness theorems, as developed for instance by Chaitin, Doria and da Costa (see [3]), even if originally conceived to solve decision problems in the physical and social sciences (e.g. the decision problem for chaotic systems), can also be used to address problems regarding the consistency and incompleteness of sets of beliefs, and to define formal models for the epistemology of complex systems and for the “classical” systemic-relational epistemology of psychology, such as Gregory Bateson’s epistemology (see [2]) and Piaget’s Genetic Epistemology (see for instance [4]). More specifically, following the systemic epistemology of psychology, there are two different classes of learning and change processes for cognitive systems: “quantitative” learning (the cognitive system acquires information without changing its rules of reasoning) and “qualitative” learning (an adaptation process which leads the system to a re-organization). Therefore, just as in the incompleteness theorems the emergence of an undecidable sentence in a formal logical system leads to the definition of a chain of formal systems, obtained by adjoining as axioms propositions that are undecidable at previous levels, in the same way the emergence of an undecidable situation for a cognitive system could lead to the emergence of “new ways of thinking”. Thus, a (systemic) process of change (a process of “deuterolearning”) could be interpreted as a process that leads the cognitive organization of the subject to a different level of complexity by the creation of a hierarchy of abstract relations between concepts, or by the creation of new sets of rules of reasoning and behaving (where the process of learning is represented by a sequence of learning stages, e.g. by sequences of type-theoretically ordered sets, representing information/propositions and rules of reasoning/rules of inference).
I will propose two formal models for qualitative change processes in cognitive systems and complex systems:
• The first, from set theory, is based on Barwise’s notion of partial model and models of Liar-like sentences (see [1]).
• The second, from proof theory and algebraic logic, is based on the idea that a psychological change process (the development of new epistemic strategies) is a process starting from a cognitive state s_0 and arriving at a cognitive state s_n, possibly passing through intermediate cognitive states s_1, . . . , s_(n−1): developing some research contained in [5] and [6], I will propose a model of these processes based on the notion of a paraconsistent consequence operator.
I will show that these two different formal models are deeply connected and mutually translatable.
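To make the idea of stage-wise reorganization concrete, here is a minimal toy sketch (my own illustration, not either of the two models above): a monotone consequence operator over Horn rules, with a sentence that is undecided at stage s_0 adopted as a new axiom at stage s_1, mirroring how undecidable propositions are adjoined to form a chain of systems:

```python
# All atoms and rules below are invented for illustration.

def consequences(facts, rules):
    """Closure of `facts` under Horn rules; each rule is (premises, conclusion)."""
    closed = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= closed and conclusion not in closed:
                closed.add(conclusion)
                changed = True
    return closed

rules = [({"a"}, "b"), ({"b", "c"}, "d")]

s0 = consequences({"a"}, rules)           # stage s_0: "c" (hence "d") is undecided
print("d" in s0)                          # False: the system cannot settle "d"

s1 = consequences(s0 | {"c"}, rules)      # stage s_1: adopt "c" as a new axiom
print("d" in s1)                          # True: the reorganized state decides "d"
```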
References
[1] Barwise, J. and Moss, L., Vicious circles: on the mathematics of non-well-founded phenomena, CSLI Lecture Notes, 60, Stanford, 1993.
[2] Bateson, G., Steps to an ecology of mind, Paladin Book, New York, 1972.
[3] Chaitin, G., Doria, F.A. and da Costa, N.C., Goedel’s Way: Exploits into an undecidable world, CRC Press, Boca Raton, 2011.
[4] Piaget, J. and Garcia, R., Toward a logic of meaning, Lawrence Erlbaum Associates, Hillsdale, 1991.
[5] Van Lambalgen, M. and Hamm, F., The Proper Treatment of Events, Blackwell, London, 2004.
[6] Van Lambalgen, M. and Stenning, K., Human Reasoning and Cognitive Science, MIT Press, Cambridge, 2008.
18:30
Maria Dimarogkona (National Technical University of Athens, Greece)
Petros Stefaneas (National Technical University of Athens, Greece)
A Meta-Logical Framework for Philosophy of Science
ABSTRACT. In the meta-theoretic study of science we can observe today a tendency towards logical abstraction based on the use of abstract model theory [1], where logical abstraction is understood as independence from any specific logical system. David Pearce’s idea of an abstract semantic system in 1985 [2] was characterised by this tendency, and so was the idea of translation between semantic systems, which is directly linked to reduction between theories [3]. A further step towards logical abstraction was the categorical approach to scientific theories suggested by Halvorson and Tsementzis [4]. Following the same direction we argue for the use of institution theory - a categorical variant of abstract model theory developed in computer science [5] - as a logico-mathematical modeling tool in formal philosophy of science. Institutions offer the highest level of abstraction currently available: a powerful meta-theory formalising a logical system relative to a whole category of signatures, or vocabularies, while subscribing to an abstract Tarskian understanding of truth (truth is invariant under change of notation). In this way it achieves maximum language-independence. A theory is always defined over some institution in this setting, and we also define the category of all theories over any institution I. Appropriate functors allow for the translation of a theory over I to a corresponding theory over J. Thus we get maximum logic-independence, while the theory remains at all times yoked to some particular logic and vocabulary.
To clarify our point we present an institutional approach to the resurgent debate between supporters of the syntactic and the semantic view of scientific theory structure, which currently focuses on theoretical equivalence. If the two views are formalized using institutions, it can be proven that the syntactic and the (liberal) semantic categories of theories are equivalent [6][7]. This formal proof supports the philosophical claim that the liberal semantic view of theories is no real alternative to the syntactic view; a claim which is commonly made - or assumed to be true. But it can also - as a meta-logical equivalence - support another view, namely that there is no real tension between the two approaches, provided there is an indispensable semantic component in the syntactic account.
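As an informal illustration of the institutional setting (a toy sketch of my own, not the framework of [5] or the constructions of [6][7]), the following instantiates the signature/sentence/model/satisfaction skeleton for propositional logic and checks the satisfaction condition, M' |= sigma(phi) iff reduct(M') |= phi, for one signature morphism:

```python
from itertools import product

# Signatures are finite sets of propositional letters; a signature morphism sigma
# maps letters to letters. Sentences are nested tuples; models are truth assignments.

def translate(sigma, phi):
    """Sen(sigma): rename the letters occurring in a sentence."""
    if isinstance(phi, str):
        return sigma[phi]
    op, *args = phi
    return (op, *(translate(sigma, a) for a in args))

def reduct(sigma, model_prime):
    """Mod(sigma): a Sigma'-model induces a Sigma-model by pre-composition."""
    return {p: model_prime[sigma[p]] for p in sigma}

def satisfies(model, phi):
    if isinstance(phi, str):
        return model[phi]
    op, *args = phi
    if op == "not":
        return not satisfies(model, args[0])
    if op == "and":
        return all(satisfies(model, a) for a in args)
    if op == "or":
        return any(satisfies(model, a) for a in args)
    raise ValueError(op)

sigma = {"p": "a", "q": "b"}                 # morphism {p, q} -> {a, b, c}
phi = ("or", "p", ("not", "q"))

ok = all(
    satisfies(m_prime, translate(sigma, phi)) == satisfies(reduct(sigma, m_prime), phi)
    for vals in product([False, True], repeat=3)
    for m_prime in [dict(zip("abc", vals))]
)
print(ok)   # True: truth is invariant under change of notation
```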
[1] Boddy Vos (2017). Abstract Model Theory for Logical Metascience. Master’s Thesis. Utrecht University.
[2] Pearce David (1985). Translation, Reduction and Equivalence: Some Topics in Inter-theory Relations. Frankfurt: Verlag Peter Lang GmbH.
[3] Pearce David, and Veikko Rantala (1983a). New Foundations for Metascience Synthese 56(1): pp. 1–26.
[4] Halvorson Hans, and Tsementzis Dimitris (2016). Categories of scientific theories. Preprint. PhilSci Archive: http://philsciarchive.pitt.edu/11923/2/Cats.Sci.Theo.pdf
[5] Goguen Joseph, and Burstall Rod (1992). Institutions: abstract model theory for specification and programming. J. ACM 39(1), pp. 95-146.
[6] Angius Nicola, Dimarogkona Maria, and Stefaneas Petros (2015). Building and Integrating Semantic Theories over Institutions. Workshop Thales: Algebraic Modeling of Topological and Computational Structures and Applications, pp. 363-374. Springer.
[7] Dimarogkona Maria, Stefaneas Petros, and Angius Nicola. Syntactic and Semantic Theories in Abstract Model Theory. In progress.
19:00
Vladimir Lobovikov (Ural Federal University, Laboratory for Applied System Research, Russia)
A formal axiomatic epistemology theory and the controversy between Otto Neurath and Karl Popper about philosophy of science
ABSTRACT. The controversy mentioned in the title was related exclusively to science understood as empirical cognition of the world as the totality of facts. Obviously, verifiability of knowledge implies that it is scientific. Popper developed an alternative to verificationism, namely falsificationism, emphasizing that falsifiability of knowledge implies that it is scientific. Neurath criticized Popper for fixing exclusively on falsifiability of knowledge as the criterion of its scientific character. Neurath insisted that there was a variety of qualitatively different forms of empirical knowledge, and that this variety was not reducible to falsifiable knowledge. In my opinion the discrepancy between Popper's and Neurath's philosophies of science is well modeled by the axiomatic epistemology theory, as according to this theory it is possible that knowledge is neither verifiable nor falsifiable but nevertheless empirical. The notion “empirical knowledge” is precisely defined by the axiom system considered, for instance, in [XXXXXXXXX XXXX]. The symbols Kq, Aq, Eq stand, respectively, for: “agent knows that q”; “agent a priori knows that q”; “agent has experiential knowledge that q”. The epistemic modality “agent empirically knows that q” is defined by axiom 4 given below. In this axiom, Sq represents the verifiability principle; q represents the falsifiability one; (q Pq) represents an alternative meant by Neurath but missed by Popper. The symbol Pq stands for “it is provable that q”. Thus, according to Gödel's theorems, arithmetic-as-a-whole is empirical knowledge. The theory is consistent. A proof of its consistency is the following. Let the meta-symbols of the theory be substituted by the object-variable q, and let q also be substituted by Pq. In this case the axiom schemes are represented by the following axioms, respectively.
1: Aq (q q).
2: Aq ((q q) (q q)).
3: Aq (Kq & (q & Sq & (q Pq))).
4: Eq (Kq & (q Sq (q Pq))).
The interpretation is defined as follows.
ω = ω for any formulae ω. (ω π) = (ω π) for any formulae ω and π, and for any classical logic binary connective .
q = false. Aq = false. Kq = true. Eq = true. q = true. Sq = true.
(q q) = true. Pq = true. (q Pq) = false (according to Gödel's theorems). Under this interpretation all the axioms are true. Hence the interpretation is a model of the theory, and the theory is consistent.
Natural analogy: A Hessean Approach to Analogical Reasoning in Theorizing
ABSTRACT. This paper aims to explore the use of analogy in scientific theorizing via Mary Hesse’s original understanding of analogical reasoning. The approach is thus Hessean. I revise Hesse’s interpretation and symbolic schema of analogy to develop a new framework that can be used to analyze the structure and cognitive process of analogy.
I take the Hessean approach for two main reasons: (1) On the basis of a preliminary comparison with the probabilistic, the cognitive, and the computational approaches, I think that the Hessean approach is more suitable for investigating the use of analogical reasoning in theorizing than the other approaches are. I will defend this claim by comparing my approach with cognitive approaches such as structural mapping theory (SMT). (2) Hesse's approach is more natural than the others. The adjective “natural” is understood in the following sense: relative to SMT, the Hessean approach preserves “pretheoretic similarity” in our natural languages as a necessary element of analogy. Moreover, Hesse's symbolic schema of “dyadic items” best reflects the comparative and contrastive character of the analogical reasoning that naturally emerges in our minds. Therefore, I would like to call the framework developed via the Hessean approach “natural analogy” – a concept similar to “natural deduction.”
My framework of natural analogy revises Hesse’s original in the following two ways: (1) Hesse follows logical empiricists’ distinction between the formal type and the material type of analogy. In this paper, I will argue that analogy, in the field of scientific theorizing, is both formal and material. To mark this revision of Hesse’s framework, I will use a new contrast between “structural/theoretical” and “conceptual/pretheoretical” as two aspects or elements of analogical reasoning to replace the old dichotomy of “formal” and “material” types. The meanings of the new pair of concepts will be elaborated. As a consequence, my framework does not only consider the conceptual/pretheoretical similarities, but also tracks the structural correspondence between two analogues. (2) I modify and expand Hesse’s original symbolic schema of dyadic items to build up three new schemas and use them to analyze the role of analogical reasoning plays in scientific theorizing in historical cases. Two symbolic schemas for representing the structure of analogy and the third schema for simulating the cognitive operations of analogical reasoning have been proposed. Those schemas we introduce step by step lead us to suggest that the use of analogy in theorizing can be analyzed into four cognitive operations: abstraction, projection, incorporation, and fitting. I will use the scheme to analyze the process in which Coulomb’s law was proposed by analogizing to Newton’s law of gravitation, a famous case which is usually deemed as an exemplar of formal analogy, to illustrate the schemas of natural analogy.
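As a small, purely illustrative sketch of the Newton/Coulomb case mentioned above (my own toy rendering, not the author's schemas), dyadic items can be represented as source-target pairs, with a projection step that carries the structural form of the source law into the target vocabulary:

```python
# Horizontal (pretheoretic) similarities between the two domains.
dyadic_items = {
    "G": "k_e",           # gravitational constant  ~  Coulomb constant
    "m1": "q1",           # mass of body 1          ~  charge of body 1
    "m2": "q2",           # mass of body 2          ~  charge of body 2
}

source_law = "F = G * m1 * m2 / r**2"      # structural/theoretical element (Newton)

def project(law, items):
    """Rewrite the source law in the target vocabulary, preserving its structure."""
    tokens = law.split()
    return " ".join(items.get(tok, tok) for tok in tokens)

print(project(source_law, dyadic_items))   # F = k_e * q1 * q2 / r**2
```

The other cognitive operations named in the paper (abstraction, incorporation, fitting) are not modelled here; the sketch only shows how a dyadic pairing plus structural projection can generate the target law.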
ABSTRACT. Rethinking the transformation of classical science into technoscience: ontological, epistemological and institutional shifts
The key tendency in the development of science in present-day society is that the scientific knowledge produced in academia by scientists and the academic community is losing its privileged position; moreover, science as an institution is losing its monopoly on the production of knowledge that is considered powerful, valuable and effective. This process of deep transformation has been partially reflected in such concepts as technoscience, post-academic science and transdisciplinarity, and can be observed in such practices as the deprofessionalization of scientific knowledge, citizen science (expertise), and informal science exchange in social media. In our presentation we aim to put forward for further consideration some ideas concerning not so much the causes as the purposes and effects of this transformation – epistemological, institutional and social. In particular, we will focus on the new subject (entity, person) of knowledge and its position in society, and on the crucial change in the mechanisms of scientific knowledge production that may lead to the replacement of scientific knowledge by technologies (complex machines, techniques, skills, tools, methods) and innovations.
The key theses we will develop in our presentation:
1. The concepts of technoscience, post-academic science and transdisciplinarity register various aspects of the transformation of science into something new, which we continue to call "science" only for institutional and cultural reasons. Science is a project of modernity, one that was artificially extended by the historicization of scientific rationality, and it has apparently come to an end, as every historical formation does. "Technoscience" is probably the best general term (though subject to a number of restrictions) for what we still call "science" but what, in fact, is no longer science, even though it is consistently taking over the place and position of science in society.
2. The term "technoscience" emphasizes the entanglement of science and technology; it was mainly introduced to distinguish a "new" type of scientific activity from "traditional" ones, one with a different epistemic interest that produces different objects with a different ontological status.
Yet for us it is important that the concept enables us to capture the drastic changes in the means of producing knowledge and in its organization. We claim that scientific knowledge is gradually turning into a component of innovative development, which means that scientific knowledge and academic science as an institution are being made to conform to the principles and rules of functioning of other social spheres: economics, finance, and industry. Government, business and society require science to produce not true knowledge that understands and explains the (real) world, but information and efficient "knowledge" for creating a world, a specific environment, and artefacts.
Indeed, we can see that the key functions of the natural sciences change crucially as they become part of the circular flow of capital: the main task is the production of potentially commercializable findings that can be constantly reinvested for the purpose of generating innovation. At the same time, "innovation" itself has shifted: from a new idea embodied in a device, to the provision of more effective products (technologies) available to markets, to a cycle of extended capital development that uses new technology as a permanent resource for its growth.
3. The development of scientific knowledge will apparently move in the direction of further synthesis with technologies, the strengthening of the technological component, and the partial substitution of scientists by machines and of scientific knowledge by technologies in two forms: producing artefacts (artificial beings and substances) and machines, and producing skills (techniques, algorithms) for working with information or for presenting it. We can already see this clearly in the explosion of emerging technosciences (e.g. artificial intelligence, nanotechnology, biomedicine, systems biology and synthetic biology), or in the intervention of neuroscience, based on the wide use of fMRI brain scans, into various areas of human practice and cognition, which results in the formation of the so-called "brain culture".
In general, the transformation of science into "technoscience" implies that the production of information, technologies and innovations becomes its key goal. Thus, we can claim that the task of science is narrowing considerably, since this implies the loss of scientific knowledge's key function of providing the dominant world-view (Weltanschauung). This loss may provoke other significant transformations.
What should a normative theory of argumentation look like?
ABSTRACT. What makes argumentation reasonable, rational or justified? I address this question by considering two ways of thinking about the relationship between argumentation and reasonableness/rationality/justification, which mirror two very different conceptions of what a theory of argumentation should look like. As argumentation theorists, we can either aim at providing criteria for saying that a target-claim is justified, reasonable or rational, or at characterizing justification, rationality or reasonableness from the point of view of the practice of arguing.
For the former group of theorists, the main question would be: "should we accept this claim on the basis of those reasons?" In turn, for those interested in "characterizing" what good argumentation is, the main question is: "does this piece of argumentation count as good argumentation, given the conception of good argumentation that underlies the practice of arguing?"
Both conceptions of Argumentation Theory assimilate the goals of a normative theory of argumentation to the goals of a theory of justification, but the former focuses on the conditions for considering that a target-claim is justified, whereas the latter tries to characterize the very concept of justification from the point of view of the practice of arguing. In this paper, I analyze the rewards and shortcomings of both epistemological conceptions of Argumentation Theory and their corresponding criteriological and transcendental accounts of the sort of objectivity that good argumentation is able to provide.
Issues at the intersection between metaphysics and biology
ABSTRACT. Recent work in Metaphysics and in Philosophy of Science, and in particular in Philosophy of Biology, shows a revival of interest in issues that can be considered either metaphysical issues that biological cases help to elucidate, or metaphysical consequences of certain advances in Biology. In some cases, applying metaphysical notions to classical debates in Philosophy of Biology helps to clarify what is at stake and to resolve some misunderstandings in the discussion. The interactions that can take place between Metaphysics and Biology are therefore of different kinds. In my contribution, I will present some examples of such interactions and explore the ways in which they take place. In general, I will present interactions between Evolutionary Biology, Genetics and Developmental Biology, on the one hand, and metaphysical notions such as dispositions, identity and persistence, and teleology, on the other.
Although I will present several examples, I will focus in particular on one or two of them: the interaction between the metaphysics of dispositions and Genetics, on the one hand, and the interaction between theories of persistence and the species-concept issue in Philosophy of Biology, on the other.
I will review the dispositionalist theory of causation recently proposed by Mumford and Anjum (2011) and evaluate its explanatory potential and difficulties when it is applied to causal analysis in Biology. My main concern is the application of their theory to Genetics, which they offer as an illustration of their proposal in chapter 10 of their book. I will try to spell out further the advantages and disadvantages of a dispositionalist conception of genes. After introducing some crucial features of their approach, I will review the advantages of their conception for accounting for complex biological phenomena, and its potential to overcome the dispute between gene-centrism and developmentalism. However, I will raise a difficulty for the dispositionalist, namely the difficulty of defending the simultaneity of cause and effect (essential to their proposal) when epigenetic processes are taken into account. I will focus on a particular phenomenon, the mechanism of alternative RNA splicing, and will explore some ways out of the difficulty.
Secondly, I will address the question of whether the persistence of biological species raises any difficulty for the thesis of the metaphysical equivalence between three-dimensionalism (3D) and four-dimensionalism (4D). I will try to show that, even if one assumes that ‘species’ is a homonymous term referring to two kinds of entities (evolverons, or synchronic species, and phylons, or diachronic ones), the 3D/4D metaphysical equivalence still holds. My argument consists in challenging the strong association between a synchronic view of species and a 3D theory of persistence, and between a diachronic view of species and a 4D theory of persistence.
In the last part of my contribution, I will try to characterize the way in which Metaphysics and Philosophy of Biology interact on these issues.
How Can We Make Sense of the Relationship between Adaptive Thinking and Heuristics in Evolutionary Psychology?
ABSTRACT. Evolutionary psychology was initiated by its pioneers as a discipline that would reverse-engineer human psychological mechanisms largely by forward-looking deductive inference, but it has subsequently shifted toward a positivism-oriented discipline based on heuristics and empirical testing. Along the way, however, the very characteristic that initially defined the methodological advantage of the discipline seems to have been lost: namely, the prospect of predicting the human mental constitution from the vantage point of the ancient selection pressures imposed on our ancestors. This is what was supposed to let the discipline claim a methodological advantage over both sociobiology and contemporary cognitive psychology, by providing testable predictions about our psychological makeup grounded in its deeper roots in our evolutionary past. However, with the subsequent trend of emphasizing its heuristic aspect, the role played by such adaptive thinking has gradually been set aside.
According to Rellihan (2012), the type of adaptive thinking typical of evolutionary psychology is in fact what can be termed 'strong adaptationism': the idea that the force of natural selection is so powerful, overwhelming any obstacles, that the destination of adaptive evolution is uniquely predictable no matter what phenotypes a given population started with in the distant past. This is a much stronger version than the one evolutionary psychologists typically take themselves to be committed to; thus, the role played by adaptive thinking is more decisive than is normally perceived. If this is true, how can we make sense of the relationship between adaptive thinking and the heuristic aspect of evolutionary psychology?
In this talk, I will build on Rellihan's observation that "Heuristics are simply less reliable inference strategies and inference strategies are simply more reliable heuristics" (Rellihan 2012) and argue that the distinction between heuristic and adaptive inference may be merely a matter of expedience. If heuristics are not largely based on adaptive thinking that makes evolutionary sense, they will not generate meaningful hypotheses that deserve to be called evolutionary. Evolutionary psychologists make it a rule to name comparative studies, hunter-gatherer studies, or archaeology, and not just evolutionary theory, as the sources of inspiration for their hypothesis generation (e.g., Machery, forthcoming). Still, if adaptive thinking does not constitute an integral part, evolutionary psychology will end up as a mere hodgepodge of heterogeneous bodies of knowledge, which makes us wonder why the whole enterprise ought to be called evolutionary.
In another line of defense, some (e.g., Goldfinch 2015) argue that the task of evolutionary psychology as a heuristic program can end with proposing interesting hypotheses, at which point the task of actually confirming them passes to other relevant adjacent disciplines. This 'division of labor' view of the confirmation strategy works to some extent, but it may ultimately risk giving up the discipline's integration.
References
Rellihan, M. (2012) “Adaptationism and Adaptive Thinking in Evolutionary Psychology,” Philosophical Psychology 25(2): 245-277.
Machery, E. (forthcoming in J. Prinz ed., Oxford Handbook of Philosophy of Psychology) “Discovery and Confirmation in Evolutionary Psychology”.
Goldfinch, A. (2015) Rethinking Evolutionary Psychology, Palgrave Macmillan.
Music cognition and transposition heuristics: a peculiar case of mirror neurons
ABSTRACT. The aim of my presentation is to analyse how models are constructed in contemporary embodied music cognition research. I introduce and discuss the idea of a "transposition heuristic" in cognitive science (of music). In using this heuristic, researchers in cognitive science tend to copy ways of thinking about particular concepts from general cognitive science and apply them to their own (sub)field of research. Unless done with proper caution, however, such transposition may lead to problems.
I will illustrate the use of the transposition heuristic with reference to contemporary work in embodied music cognition (e.g., Schiavio et al., 2015; Matyja, 2015). I will show how music cognition researchers tend to take particular concepts (e.g., imagination or simulation) from general cognitive science and apply them to their own field of research (e.g., by introducing the rather ambiguous concepts of musical imagination or musical simulation). Often, music cognition researchers do not see the need to specify those concepts; they do, however, construct models on the basis of these unspecified concepts.
In my presentation I argue that the transposition heuristic employed in constructing models in embodied music cognition is often fallible. Initially, such transpositions may be inspiring; they are not enough, however, to provide exhaustive models (the "how-actually" explanations) of how musical processing and musical imagination are embodied. I conclude that transpositions from general cognitive science to its subdisciplines should be performed with proper caution.
The talk will be structured in the following way.
(1) I begin by introducing the general ideas behind the embodied music cognition research paradigm in cognitive science (e.g., Maes et al., 2014) and its relations to the hypothesized simulative function of musical imagination (e.g., Molnar-Szakacs & Overy, 2006; Matyja, 2015).
(2) I will show that, in addition to the research heuristics in cognitive science already discussed in the literature (Bechtel & Richardson, 2010; Craver & Darden, 2013), a careful analysis of recent developments in music cognition research brings out what I dub the "transposition heuristic".
(3) I will show that research heuristics are by their nature fallible, sometimes leading to inadequate formulations both of research problems and of the corresponding theories. To illustrate this problem, I return to the previously discussed case studies from music cognition research.
(4) I discuss the mechanistic criteria for complete and adequate explanations and show how they relate to my case studies. In particular, I show that contemporary models in embodied music cognition lack an account of how the body and its physical and spatial components (e.g., physical responses to music) shape musical processing.
(5) In the light of what has been discussed, I conclude that transpositions from general cognitive science to its particular sub-disciplines should be performed with proper caution.
Bibliography
Bechtel, W. & Richardson, R. (2010). Discovering Complexity: Decomposition and Localization as Strategies in Scientific Research. MIT Press.
Craver, C. & Darden, L. (2013). In Search of Mechanisms: Discoveries Across the Life Sciences. University of Chicago Press.
Maes, P.-J., Leman, M., Palmer, C., & Wanderley, M. M. (2014). Action-based effects on music perception. Frontiers in Psychology, 4(January), 1–14. http://doi.org/10.3389/fpsyg.2013.01008
Matyja, J. R. (2015). The next step: mirror neurons, music, and mechanistic explanation. Frontiers in Psychology, 6(April), 1–3. http://doi.org/10.3389/fpsyg.2015.00409
Molnar-Szakacs, I., & Overy, K. (2006). Music and mirror neurons: from motion to “e”motion. Social Cognitive and Affective Neuroscience, 1(3), 235–41. http://doi.org/10.1093/scan/nsl029
Schiavio, A., Menin, D., & Matyja, J. (2015). Music in the Flesh: Embodied Simulation in Musical Understanding. Psychomusicology: Music, Mind and Brain, advance online publication. http://dx.doi.org/10.1037/pmu0
ABSTRACT. Within contemporary philosophy of perception, it is commonly claimed that flavour experiences are paradigmatic examples of multimodal perceptual experiences (Smith 2013; Stevenson 2014). Typically, the phenomenal character of a flavour experience is determined by the activities of various sensory systems processing, inter alia, gustatory, olfactory, tactile, thermal, and trigeminal data. In fact, virtually all sensory systems, including vision and audition, are believed to influence how we experience flavours. However, there is a strong intuition that not all of these sensory systems make an equal contribution to the phenomenology of flavour experiences. More specifically, it seems that the activities of some sensory systems are constitutive for flavour perception while others merely influence how we experience flavours (see Prescott 2015; Spence 2015).
From the philosophical perspective, addressing the above issue requires explicating what it means to say that some factors are ‘constitutive’ for flavour perception and providing a criterion for distinguishing constitutive and non-constitutive factors. My presentation aims to address this theoretical question in a twofold way. First, a theoretical framework is developed which defines the stronger and weaker senses in which the activities of sensory systems may be constitutive for flavour perception. Second, relying on empirical results in flavour science (e.g., Delwiche 2004; Spence et al. 2014), the constitutive status of activities related to distinct sensory systems in the context of flavour perception is investigated.
In particular, I start by providing a notion of minimal constitutivity, developed by relying on considerations presented in work in analytic metaphysics (Wilson 2007) and philosophy of science (Couch 2011; Craver 2007). The main intuition behind my conceptualization of constitutiveness is that being constitutive is closely connected to being necessary. From this perspective, the activities of a sensory system S are minimally constitutive for flavour perception if there is a way of obtaining a flavour experience F such that this way of obtaining it requires the presence of an activity of system S. Subsequently, stronger notions of constitutivity are defined, and I explicate how they can be applied in considerations about flavour perception. Finally, I consider the constitutive status of activities associated with the functioning of selected sensory systems relevant to flavour perception: olfactory, gustatory, tactile, auditory, and visual. I argue that the activities of all these systems, except the visual one, are at least minimally constitutive for flavour perception.
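For readers who prefer the quantificational structure of this minimal notion made explicit, one hedged rendering is sketched below; the predicate labels (Way, Requires, Act) are illustrative choices of mine, not the author's.

% S: a sensory system; F: a flavour experience; w: a way of obtaining F
% MC(S, F): the activities of S are minimally constitutive for the perception of F
\[ \mathrm{MC}(S,F) \;\leftrightarrow\; \exists w \,\bigl(\mathrm{Way}(w,F) \,\wedge\, \mathrm{Requires}(w,\mathrm{Act}(S))\bigr) \]
% read: there is at least one way of obtaining F whose realization requires the presence of an activity of S;
% stronger notions of constitutivity could, for instance, replace the existential quantifier over ways with a universal one.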
Couch, M. B. (2011). Mechanisms and constitutive relevance. Synthese, 183, 375–388.
Craver, C. (2007). Constitutive Explanatory Relevance. Journal of Philosophical Research, 32, 3-20.
Delwiche, J. (2004). The impact of perceptual interactions on perceived flavor. Food Quality and Preference, 15, 137–146.
Prescott, J. (2015). Multisensory processes in flavour perception and their influence on food choice. Current Opinion in Food Science, 3, 47–52.
Smith, B. C. (2013). The nature of sensory experience: the case of taste and tasting. Phenomenology and Mind, 4, 212-227.
Spence, C. (2015). Multisensory Flavor Perception. Cell, 161, 24-35.
Spence, C., Smith, B. & Auvray, M. (2014). Confusing tastes with flavours. In D. Stokes, M. Matthen, S. Briggs (Eds.), Perception and Its Modalities, Oxford: Oxford University Press, 247-276.
Stevenson, R. J. (2014). Object Concepts in the Chemical Senses. Cognitive Science, 38, 1360–1383.
Wilson, R. A. (2007). A Puzzle about Material Constitution & How to Solve it: Enriching Constitution Views in Metaphysics. Philosophers’ Imprint, 7(5), 1-20.