
09:00-12:30 Session S35: Philosophy of Science 3
Location: BUCH D313
Why Consensus Matters: A Peircean Revision of Longino
DISCUSSANT: Kinley Gillette

ABSTRACT. Both Helen Longino and C. S. Peirce argue for the uptake of a community of inquirers to overcome the pitfalls of epistemic individualism in scientific investigation. However, in “Subjects, Power, and Knowledge,” Longino acknowledges that her view needs to overcome the “dilemma of pluralism” to remain inclusive of a diversity of critical oppositional positions. To solve this dilemma, she turns away from consensus as an aim of inquiry. I argue that a Peircean revision of Longino’s communal framework can re-describe the function of diversity in scientific investigation to solve the “dilemma of pluralism” without giving up on consensus.

Laws, Worlds and Material Reasoning
DISCUSSANT: Eileen Nutting

ABSTRACT. This paper aims at two goals: first, to show how Sellars’s material-reasoning perspective allows us to capture the necessity often attributed to laws of nature, and second, to explain how laws, treated as material inference rules, constrain our descriptions of possible circumstances, including, at least in principle, possible worlds as maximal possible circumstances. Along the way we draw on modal logic and on Wilfrid Sellars’s concept of material implication (Sellars, 1968); a corollary to Dana Scott’s proof that any Scott consequence relation (a reflexive, monotonic, and transitive relation on a set of sentences) has a 1/0 semantics makes the connection between the two.
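As a gloss on the logical terminology invoked in this abstract, the three conditions it places on a Scott consequence relation ⊢ can be rendered as follows (a standard single-conclusion formulation; the notation is ours, not the authors’):

```latex
\begin{align*}
\text{Reflexivity:}\quad & \Gamma \vdash \varphi \ \text{ whenever } \varphi \in \Gamma \\
\text{Monotonicity:}\quad & \text{if } \Gamma \vdash \varphi \text{ and } \Gamma \subseteq \Delta,\ \text{then } \Delta \vdash \varphi \\
\text{Transitivity (Cut):}\quad & \text{if } \Gamma \vdash \varphi \text{ and } \Delta \cup \{\varphi\} \vdash \psi,\ \text{then } \Gamma \cup \Delta \vdash \psi
\end{align*}
```

On this reading, a “1/0 semantics” is an assignment of classical truth values (1 or 0) to sentences that respects the relation: whenever every member of Γ is assigned 1, so is φ.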

What Counts as Scientific Practice?

ABSTRACT. The practice turn in the philosophy of science is the growing tendency of philosophers of science to analyze the entire practice of scientists rather than only scientific theories. Some have analyzed the reasoning methods of model development (Cartwright 1995); some consider the role certain concepts play as tools in reasoning and draw metaphysical conclusions from these reasoning methods (Waters 2017). There have been a few attempts at providing a taxonomy of the tools of scientific practice given the specific aims of sets of scientists. Hacking’s (1992, 44-50) list of elements that enter into experimental practice, Chang’s (2011, 206) list of mental and physical activities, and Waters’ (2014, 6-7; 2018 CSHPS lecture) investigative matrix all allude to the tools that scientists have at their disposal. Roughly, these tools include theories, procedural know-how, and investigative and modeling strategies. Given the practice turn, we should consider the entire matrix of scientific practice for metaphysical analysis. For instance, rather than asking unconstrained metaphysical questions like “what is a gene?” and expecting an answer based on theories of genetics, we should instead ask “what do scientists use the concept of gene for?” (Waters 2017), since answering the latter requires reference to the investigative strategies and procedures of investigation rather than placing importance on theories. The practice turn has led philosophers of science to favour research on non-theory-based scientific methods. The reasoning or investigative methods and aims of scientists, however, change over time and across disciplines in many ways. In this presentation, I begin to outline how what counts as practice can change over time and across disciplines. For instance, technological developments like computer simulations can highlight the importance of theories (Winsberg 2010) where previously scientists may not have focused on theories.
It is also the case that phenomena treated as theory in one discipline are regarded as solely a tool in another. Consider the emergence of quantum computing and its use in scientific research. Physicists and chemists may consider the quantum computer as a tool to model and simulate complex structures (Lanyon 2012), but for computer scientists, quantum computing is a paradigm theory that demands its own tools like new programming languages and hardware architecture. Many current philosophers of science in the practice-oriented tradition have taken a static approach to answering what practice is, without accounting for these types of changes. For instance, Waters considers a specific part of genetics in his version of the investigative matrix, and Hacking only considers experimental sciences in his list of investigative elements. While Chang provides us with a way to extend this analysis, he does not do so himself. Chang argues for an account of science as epistemic activities (2011, 212), on which all scientific work consists of activities. Since different activities take place in different circumstances, the norms that govern scientific activities in one research project or time-frame can change across disciplines or over time. This means that practice-oriented philosophers of science ought to account for the changes in what counts as practice across these contexts.

09:00-12:30 Session S55: Moral Psychology 2
Location: BUCH D205
Are moral character judgments unreliable? The epistemic implications of the Mixed Traits Theory

ABSTRACT. Attributions of moral character traits – i.e. virtues and vices, such as honesty, cruelty, and greed – are pervasive in our everyday moral cognition. Indeed, the role of moral character attributions is so widespread that some moral psychologists have described us as “intuitive virtue ethicists” [1–3]. This paper is about the reliability of these intuitive moral trait attributions, and how often we get them right.

Generally, philosophers have been pessimistic about this question – especially proponents of situationism about moral character [4–7]. However, even philosophers who are generally sanguine about trait psychology have argued that most of our intuitions about moral character are probably in error. Surveying a massive body of empirical literature on the determinants of moral behavior, Christian Miller has argued that although people do possess stable moral traits of a sort, the traditional virtues and vices are statistically rare. Rather, most of us possess what he calls “Mixed Traits,” which are moral character traits that fall somewhere in between virtue and vice, and do not correspond to our ordinary trait words or concepts [8]. On this view, it follows that most of our moral character judgments actually miscategorize people as virtuous and vicious, when in fact they are most likely neither. Miller suggests that this is due to correspondence bias [9,10], which leads us to mistakenly infer the existence of a trait whenever we observe trait-consistent behavior [11].

Because Mixed Traits lead us to behave in morally inconsistent fashion, Miller’s Mixed Traits Theory (MTT) predicts that we should observe widespread disagreement in ordinary virtue and vice judgments: while one observer might infer that a particular individual is compassionate, another observer will be just as likely to conclude that that person is callous. This prediction, however, is not borne out by the data. A range of studies on agreement about moral character show that inter-observer correlations in moral trait judgments are reliably robust [12–14]. What’s more, observers’ moral trait attributions are predictive of real life outcomes, from behavior in certain moral judgment tasks to academic achievement [15–17]. In short, the evidence suggests that our moral character judgments are in fact quite reliable. This is not what widespread error is supposed to look like.

I suggest that rather than falsely categorizing people as virtuous and vicious, ordinary folk are making correct virtue and vice attributions, albeit using different standards than the MTT. This is possible because moral trait terms are gradable adjectives, and thus have the same semantic properties as terms like “tall” or “heavy”. This means that the standards for virtue and vice terms are context sensitive: a sentence like “John is compassionate” may be true in some contexts, but false in others [18]. By focusing on moral behaviors in anonymous experimental situations (wherein moral motivation is most fragile), the MTT tacitly adopts a higher than normal standard for virtue and vice. For ordinary folk, the relevant standards for moral character traits are based on moral actions within existing interpersonal relationships, where moral motivation is more naturally robust.

The Fair Opportunity to Avoid Ignorance
DISCUSSANT: Logan Wigglesworth

ABSTRACT. This paper proposes to apply the fair opportunity to avoid wrongdoing model of moral responsibility to epistemic responsibility. While it is generally acknowledged that mental incapacity can excuse ignorance, ignorance due to diminished opportunity has received far less attention. By drawing on insights from an analysis of duress, I argue that some cases of ignorance, including moral ignorance, may be non-culpable in virtue of being practically justified. This has the upshot that disputes about controversial cases, such as the Ancient Slaveholder, may come down to a disagreement in first-order moral theory over the limits of reasonable partiality.

“Love Alters Not”: A Study of Unrequited Love

ABSTRACT. One-sided romantic love is a puzzle for those who believe that love is strictly rational. Love understood as rational entails that we love for reasons; symmetrically, then, we also refrain from loving for reasons. Plausibly, among these reasons may very well be whether or not our love is reciprocated. If not having one’s love returned is a reason not to love, however, then to love unrequitedly is to do so irrationally. A purely rationalist account of love must therefore take the view that romantic love nearly always begins in error. This is because, if unrequited love is irrational, then romantic love nearly always begins irrationally (for surely it is the case that one person typically loves first—and therefore unrequitedly—even in the case of ultimately reciprocal love). In this paper, I argue that this gives us good reason to doubt that love (broadly understood) is reasons-responsive. If we understand instead that love is actually arational (not reasons-responsive), then why we should love those who do not love us back becomes less of a mystery: we love unrequitedly because the condition that our love is not returned is not sufficient to cause us not to love. This being the case, I defend the following: that though the unrequited romantic lover may have higher-order (specifically prudential) reasons to want not to love, whether they love or not is simply not up to them. I begin in §1 by arguing that a purely rationalist account of love is inadequate because, among other things, it must deny that romantic love can begin in the first place without having begun in error. That romantic love often does begin one-sidedly suggests that whether or not one is loved in return does not play a role in whether or not one comes to love, and it is this feature that I believe plays the crucial role in the possible continuation of unrequited love (what I shall call its “unconditional” nature).
In §2 I claim that, just as whether or not one is loved in return does not factor into whether or not one comes to love, it similarly does not factor into whether or not one continues to love; it strikes me that this is how unrequited love is able to persist, even sometimes against the wishes of the lover. This being so, what the unrequited lover ought to do, I believe, is embrace their love, adopting an attitude of affirmation. §3 is an interlude in which I am concerned with the phenomenology of unrequited love, and in §4 I defend the (perhaps initially unpalatable) claim from §2, wherein I take a vaguely Kantian position: love (romantic or otherwise) is sublime, whether it is returned or not. Finally, in §5 I entertain a number of objections before concluding with some optimistic closing remarks.

09:00-12:30 Session S60: Feminist Philosophy 2
Location: BUCH D201
Gender Identity Invalidation and Identification

ABSTRACT. Katharine Jenkins proposes a norm-relevancy account of gender identity, according to which to have a gender identity is to feel the norms associated with a gender class as relevant to oneself, and where facts about gender identity determine gender-kind membership. Jenkins defends her theory on the grounds that it includes trans people in the analyses of their identified gender kinds. Jenkins further argues that her theory upholds a norm of first-person epistemic authority: that trans people know what their gender identity, and so their gender, is. I argue that Jenkins’ theory of gender identity fails to uphold this norm. For many, maybe most, trans people, the best Jenkins’ theory can do is treat them as if they were the genders they are, rather than in fact including them. I trace this failure to the notion of felt-relevance itself, by drawing on Matthew Andler’s charge that cisnormativity slips into Jenkins’ account because she theorizes gender identity through the frame of experiences of social positioning typical of cis people. Jenkins holds that her account does not inappropriately centre cis experiences, but that gender identity nevertheless originates in forms of causal contact with dominant gender ideology, ideology which itself presumes cis identity. Though reasonable, Jenkins’ response does not rescue her theory. Drawing on a variety of experiences from trans people, I show that Jenkins’ account of gender identity is unable to contend with the way social reality itself can be invalidating of them. Trans people are often subjected to identity invalidation, which can make such a person feel as relevant the norms associated with genders other than the gender they are. Jenkins’ theory, based upon causal contact with dominant gender ideology, allows these responses to be constitutive of the person’s gender identity.
By defining gender identities as responses to norms associated with the dominantly constructed gender classes, Jenkins’ theory delivers the result that most trans people will have mixed gender identities, rather than the gender identities they say they have. Consequently, because knowledge is factive, trans people do not know what their gender identities are. To treat trans people as if they are their identified genders is not to in fact include them in their identified gender-kinds. Jenkins’ theory thus fails to fix the inclusion problem. Though the failures of Jenkins’ account are largely inherited from the failures of social reality itself, there is a question whether, given social reality’s failures, a theory of gender identity can be built upon a norm-relevancy account. In closing, I draw from more testimony from trans people to suggest that a theory of gender identity must contend with how subjects navigate themselves, psychologically, materially, and socially. In this vein I consider reasons in support of Talia Mae Bettcher’s self-identification account of gender identity, and resist Jenkins’ accusation that it is deflationary by considering the significance not (just) of having a gender identity but of identifying as this or that gender.

Evil and Feminism: Using Feminist Ethics to Modify Card's Theory of Evil

ABSTRACT. Feminist scholars have been more willing than most to use the language of evil, and some have made significant contributions to our understanding of the concept. However, while feminist scholars are apt to use the term “evil” and discuss specific examples of evils that women suffer, few attempt to offer systematic analyses of the concept. Claudia Card is a notable exception, and her work will be a focus of this paper. One thing I find interesting about Card’s work is that, while she is a prominent feminist, and many feminists focus on the centrality of relationships to morality, Card does not seem to consider the importance of relationships to evil actions. I argue that this deficit leads to serious problems for her view. In 2002, Card developed a very influential theory of evil in her book The Atrocity Paradigm. According to this theory, “evils are foreseeable intolerable harms produced by culpable wrongdoing.” Although very plausible and helpful in many respects, this theory implies that evil actions are implausibly prevalent, i.e. that a great many of us perform evil actions on a regular basis, and this implication does not cohere well with the idea that evil is the greatest form of moral condemnation. For instance, on this theory it would be evil to buy clothing made in sweatshops rather than fairly traded products, because doing so foreseeably produces (by supporting and maintaining) intolerable harm suffered by sweatshop workers. I call this objection the implausible prevalence objection. In her 2010 book, Confronting Evils, Card modifies her theory in a way that “narrows the scope of evils.” On the modified version, “evils are reasonably foreseeable intolerable harms produced by inexcusable wrongs.” However, like her initial theory, her modified view is susceptible to the implausible prevalence objection. For most of us have no excuse for supporting and maintaining harms suffered by sweatshop workers when fairly traded products are available.
I argue that to respond to the implausible prevalence objection, Cardians should append a relational component to their theories of evil. According to the relational component, to perform an evil action we must be in a sufficiently close relationship with our victim. The term “close relationship” is used here in a technical sense to capture several different, yet related, ideas. One way that a relationship can be close is in terms of physical proximity. Another way is in terms of social connections. Nel Noddings’s feminist ethics provides a theoretical basis to explain the moral relevance of close relationships. According to Noddings, we have strong intrinsically valuable natural impulses to care for people who are close to us. Furthermore, Noddings argues that the purpose of morality is to establish, and maintain, reciprocal caring relationships with particular others. When we produce intolerable harm to victims with whom we are closely related, we ignore, or suppress, strong intrinsically valuable natural impulses to care, and we severely hamper our ability to establish or maintain reciprocal caring relationships with related others. This makes our actions evil rather than merely wrong.

Decommodification as Exploitation

ABSTRACT. The concepts of commodification and exploitation are frequently called upon to explain what's wrong with sales of the human body. Commercial transactions involving body parts and intimate services are typically deemed harmful because something of value is thought to be degraded in the assignment of a price tag to the human body and/or because one party to the arrangement is taken advantage of in a manner marked by coercion and injustice. These are powerful arguments, and philosophical contributions to thinking about commodification and exploitation have been enormously influential in explaining what might be wrong with everything from prostitution and surrogacy, to blood, gamete, and kidney sales, to paid human subject research. The concepts of exploitation and commodification are appealed to so commonly in unison by those who indict body sales, however, that they are often presented as mutually constitutive. The commodification of the human body is problematic, so it is argued, because it is exploitative; and those involved in such sales are exploited because their bodies are being commodified. The upshot of these arguments is that the commodification of the body - and specifically the offer of compensation or the payment of a wage to an individual who sells a body part or intimate service - is inevitably exploitative. But arguments of this sort are extremely problematic, or so I will argue here. The denial of compensation to donors of body parts and providers of intimate services by those who benefit from their contributions, short of preventing their exploitation, quite conversely fosters it. In making this case, I will borrow the very criteria of an exploitative practice offered by anti-commodification theorists themselves. It is thus the anti-commodification theorist's own account of exploitation that is taken as the standard in my analysis, and which I will show to be violated by policies that criminalize compensation in the realm of body sales. 
I will explore, specifically, regulative policies regarding commercial surrogacy, gamete sales, tissue donation, and blood donation to advance the conclusion that we best prevent exploitation by compensating surrogates, and blood, tissue, and gamete donors fairly rather than by de-commodifying the valuable practices their contributions enable.

09:00-12:30 Session S64: Epistemic Normativity in Groups

In the past fifteen years, epistemologists have tried to determine what groups are epistemically permitted or required to believe, or what agents qua members of a group are epistemically permitted or required to believe. Ontological commitments influence the kind of response we can provide to issues in collective epistemology. Hence, contributions to the debate on epistemic normativity in groups come from various fields such as epistemology, social philosophy, and social ontology. This roundtable aims at paving the way for a fruitful dialogue between scholars working in the aforementioned fields.

Location: BUCH D204
The (Virtue) Epistemology of Political Ignorance

ABSTRACT. One typical aim of responsibilist virtue epistemology is to employ the notion of intellectual virtue in pursuit of an ameliorative epistemology. This paper focuses on “political inquiry” as a case study for examining the ameliorative value of intellectual virtue. Researchers in political science and public choice theory have amassed a great deal of evidence suggesting that citizens of large representative democracies are decidedly not very good at acquiring political knowledge. I examine the extent to which improving individual intellectual character can be a way of promoting knowledge acquisition in this context. I find reasons for pessimism in empirical work on the nature of cognitive bias, and considerations about the role institutional factors play in knowledge acquisition. My main claim is that the case of political inquiry threatens to expose responsibilist virtue epistemology in a general way as focusing too narrowly on the role of individual intellectual character traits in attempting to improve our epistemic practices.

Coordination, Expectations, and the Bindingness of Truth

ABSTRACT. Epistemology involves normative-sounding things like norms and values. For many philosophers, these distinctly epistemic norms and values are categorically binding. Unlike less important domains like etiquette and games, epistemology is necessarily authoritative or reason-giving. While many accept this, few agree about how exactly to explain, ground, or vindicate categorical epistemic normativity. In virtue of what is there always a good reason to conform to epistemic norms? Recently, epistemologists such as Stephen Grimm, Peter Graham, Baron Reed, and Sanford Goldberg have suggested a social answer to that question. There is good reason to conform to epistemic norms, very roughly, because doing so is socially valuable, useful, or necessary. My goal is to clarify and reject this kind of strategy. Epistemic normativity, I argue, won’t come from the social.

A Puzzle About the Optimization of Collective Reliability

ABSTRACT. In this paper, I unpack and analyze a new puzzle according to which optimizing individual reliability can conflict with optimizing collective reliability. In order to improve the group’s reliability, an agent could sometimes be required to “sacrifice herself” and take standards that are unreliable at the individual level. The paper provides a novel confirmation of the view according to which reliable groups can include some unreliable agents (and vice versa). It also raises various questions concerning the epistemic toleration of individually unreliable agents, epistemic partiality, epistemic coordination within epistemic communities, and so forth.

Epistemic Norms and Collective Responsibility

ABSTRACT. One reason to think that individual agents should follow epistemic norms is that beliefs play a crucial part in the determination of our actions. Being a rational agent implies, at least to some extent, being a rational epistemic agent. Recent evidence in psychology, however, shows that individual human agents are, because of factors that are mostly out of their conscious control, not so good at following the norms that most epistemologists think they should follow. But evidence in psychology also suggests that this deficit in individual rationality could be reduced through collectively instituted devices. In this paper, I argue that if epistemic norms require something of individual agents, they also require something of some collectivities of which individual agents are members. In other words, I argue that epistemic norms require some collective actions.

Group Belief, Deliberation, and Justification

ABSTRACT. There is an assumption in the collective epistemology literature that group epistemic phenomena are necessarily derivative of more fundamental individual epistemic phenomena. I argue that this is not the case. I consider empirical research into the role and function of group deliberation, in particular, Hugo Mercier and Dan Sperber’s interactionist theory of conscious, reflective reasoning. Analysis of this research alongside the question of group justification leads me to argue that we not only rely on our epistemic colleagues’ reliability (i.e. reliable testimony), but also their dialogical engagement. Moreover, I argue that collective epistemic phenomena need to be evaluated in a manner that is distinct from epistemic evaluations of a collective’s constituent members.

09:00-12:30 Session S65: Innovative Methods in the Teaching of Logic and Critical Thinking
Location: BUCH D213
Retrieving, Interleaving, and Growing: Small Changes in Teaching for Better Learning
The Carnap Proof Assistant: A Case Study in Open Source Software for Logic Pedagogy
09:00-12:30 Session S66: Ontology of Fiction
Location: BUCH D315
Game of Thrones, Hank Williams Jr., and Aesthetic Viscerality
Creature Features
PRESENTER: Chris Tillman
Fregean Theories of Empty Names
Possibilism about Fictional Objects and the Puzzle of Imaginative Resistance
Fiction and Indeterminate Identity
Exploding Stories and the Limits of Fiction
09:00-12:30 Session S70: Metaethics 2
Location: BUCH D304
Moore on the Unreality of Agent-Relative Value (Winner of the CPA Student Essay Prize)
DISCUSSANT: Graham Moore

ABSTRACT. In his Principia Ethica, G.E. Moore presents an argument against ethical egoism, the view that each agent ought only pursue their own happiness. We believe that this argument has some striking similarities to another argument that was influential at the time in which Moore wrote the Principia Ethica: J. M. E. McTaggart’s argument against the reality of time. The purpose of this paper is to explore this parallel, both in order to gain a better understanding of the structure of Moore’s argument, and to illuminate possible avenues for response that have not been considered in the literature.

Acting as a Reason

ABSTRACT. Practical knowledge is thought to be necessary for intentional action, non-evidential, and the cause of what it understands. The dominant explanations of these features from cognitivists (like Kieran Setiya) and non-cognitivists (like Sarah Paul) suffer some well-known problems: the former make forming an intention look irrational and the latter explain too much away. In this paper, I argue that intentional action is not acting for a reason but as a reason. I show how this theory can give us an explanation of practical knowledge that avoids the above problems. According to this explanation, practical knowledge is not knowledge gained by reasoning practically but general knowledge about how to act used in practical reasoning.

Moral Deference, Ground Projects, and the Moral Web of Belief

ABSTRACT. Sarah McGrath has pressed moral realism for its difficulty in accounting for the problematic nature of moral deference. This paper argues for an explanation on behalf of the realist. My explanation points to the structure of moral judgments utilizing the ‘web of belief’ analogy and their close connection to ground projects. I argue that the reason we find moral deference problematic is that it indicates that the person might not have any moral beliefs of their own, from which we will begin to suspect that they lack ground projects, have very abnormal ones, or are ill-equipped to carry them out.

09:00-12:30 Session S74: Contemporary European Philosophy / Philosophie Européenne Contemporaine
Location: BUCH D216
Reason as Cunning: Self-Deception, Self-Reflection and Self-Awareness in Dialectic of Enlightenment

ABSTRACT. The conception of rationality that Horkheimer and Adorno work out in Dialectic of Enlightenment remains controversial to this day. At the heart of the controversy lies their claim that rationality is an instrument that inextricably serves both the purpose of emancipation and that of domination. In support of this claim, they work out an original, if highly peculiar, interpretation of Homer’s Odyssey, and propose that we take Odysseus’s cunning, understood as a capacity to deceive, as a model for understanding human rationality. More precisely, they defend the following three claims: first, Homer’s hero, Odysseus, is the “prototype [Urbild]”, or model, of the modern rational self (DE, 35). Second, they claim that cunning is the “organ [Organ]” (DE, 39), or capacity, that enables Odysseus to trick the gods, to survive his adventures, and to turn himself into a rational self in the process. The third claim is the most singular one: they take the practice of sacrifice, characteristic of the universe of magic and myth, as a blueprint for understanding the workings of Odysseus’s cunning. More precisely, their claim is that “the moment of deception in sacrifice is the prototype of Odyssean cunning” (DE, 40). On their reading, then, the cunning that Odysseus displays follows the pattern of a strange ritual where the hero “acts as both victim and priest” (DE, 40): he sacrifices himself (his compulsions, needs, and passions) to save himself (or what remains of himself). This combination of priesthood and victimhood raises two concerns: first, modeling rationality on cunning, understood as a capacity to deceive, seems to limit dramatically the scope of its emancipatory promise. This makes enlightenment (or the progress of rationality) into an ambivalent process, now emancipating, now deceiving – or worse: doing both at the same time.
The second point is more troubling still: on this reading, Odysseus is the agent of his own deception, which means that his emancipation is paid for with self-deception. This leaves the reader with a set of questions: Is enlightenment doomed? If not, what remains of its promise? And what can a book like Dialectic of Enlightenment actually claim to achieve? In this paper, I want to address these questions through a reassessment of their interpretation of Odysseus’s cunning. In particular, I want to argue that one can read Horkheimer and Adorno’s take on Odysseus’s cunning as an attempt to work out a new model of reflectivity, one that specifically promotes rational self-awareness. To do so, I will focus my examination on two claims that, in my view, have received too little attention: first, the claim that the prototypical self “throws himself away to save himself” (DE, 38); second, that he “wrest himself from dissolution in blind nature” (DE, 42).

Abbreviations: DE: Horkheimer and Adorno. Dialectic of Enlightenment: Philosophical Fragments, transl. E. Jephcott. Stanford: Stanford University Press, 2002.

What Would Be Different: Adorno and Lukács
DISCUSSANT: Stephanie Yu

ABSTRACT. This paper discusses Adorno’s notion of possibility or “what would be different” on the basis of a contrast with Georg Lukács’s concept of objective possibility, inspired by Max Weber and endowed with sociological validity. Somewhat surprisingly, perhaps, Adorno takes a hard line against Lukács, invoking a notion of blocked social possibility against Lukács’s invocation of a higher actuality that informs historical development. Unlike Lukács, and unlike Hegel before him, Adorno thinks that negativity should have the last word against any higher actuality to which society is called upon to conform.

The Authority of Discourse and the Discourse of Authority: Lenin’s Philosophical Writings, One Hundred Years after the Revolution

ABSTRACT. It was Lenin, and not Georg Lukács, who first set in motion what would later be called Western Marxism. In fact, the renewed interest that philosophical thought accorded to Hegel during the interwar period has its origins in a handwritten note that Lenin scribbled in the margin of one of his reading notebooks – one must understand Hegel in order to understand Marx, he dogmatically pontificates. Hegelians – Jacques D’Hondt, for example – have always prided themselves on Lenin’s proclamation, which long sufficed to justify their interest in Hegel; but, a century after the October Revolution, the time seems to have come to ask what Lenin actually wrote and what he hoped to accomplish in writing it.

09:00-12:30 Session S8: Philosophy of Perception and Phenomenology
Location: BUCH D307
From Constitution to Institution: Merleau-Ponty's Radical Concept of Synthesis

ABSTRACT. Phenomenology is often criticized as a philosophy that mistakenly privileges consciousness and subjectivity. If phenomenology presupposes subjectivity, then it must be either supplemented or supplanted by a more comprehensive ontology. In this paper, I argue that such a bifurcation of phenomenological and ontological methods is itself ultimately premised on a kind of mind-body dualism, or the idea that consciousness is a world-constituting activity. By drawing on Merleau-Ponty’s 1954-5 lectures on Institution, I develop the resources to explain how consciousness is not a meaning-constituting activity but rather an instituting-instituted synthesis, that is, the expressive inheritance and transformation of different developmental institutions of meaning. This developmental past of subjectivity is not itself a tacit consciousness or constituting activity, but is an emergence of sense within developmental structures of movement. Phenomenology requires an investigation of structures of sense in their becoming.

I will work out the temporality of institution, as both a radical sense of the past as a continually generative call for expression and a sense of expression as a retrograde restructuring of this past through the expression of the present. I will explore institution at three different, but successively interrelated, levels: the evolving emergence of living sensitivity, the self-articulation of individuality through expression and learning, and inter-bodily and symbolic social institutions. Self-consciousness is merely one such institution of sense that takes up and gives new expression to these institutions of nature and culture. The institution of sense is broader, and older, than consciousness. Phenomenology is more than a reduction of meaning to activities of consciousness, because it calls for radical investigations of these developmental institutions. I conclude by pointing to a few broader consequences this new concept of institution has for our understanding of subjectivity in light of Merleau-Ponty’s idea of problematic or “pathological institution”, particularly at a social level.

Dreaming perceptual experience as a problematic for Merleau-Ponty's Phenomenology of Perception.

ABSTRACT. Jennifer Windt’s (2015) synthesis of empirical and philosophical research on dreaming leads her to a number of fascinating, empirically grounded hypotheses about dream cogitation and dream cognition. Her conclusions suggest that dreaming cannot be categorized as either purely quasi-perceptual (viz. Descartes) or imaginative (viz. Sartre), but as sitting somewhere in between, such that dreaming should be seen as both quasi-perceptual and imaginative mental activity (290-292). Though a largely (but not entirely) automatic sub-personal cognitive activity (Metzinger 2013a, Jackendoff 2012), dreams nonetheless duplicate much of waking cognition and waking experience in that there is a sense of self-consciousness (the “dream self”) that is (weakly) embodied and that has been empirically shown to hold beliefs about the dream world in which it is immersed. The dream-self experiences itself in a perceptual world of unified objects which it experiences as mind-independent, even (to varying degrees) in lucid dreams (Windt, 440-450). Dreaming is nonetheless not identical to waking thought: rational thought and critical reflection are slowed and more difficult. The distinction between what is experienced as perceptual and what is thought is blurred: dream reports frequently assign identity to people or places even when they do not match what is experienced as perceptual data, and thought processes are “externalized” into the dream world so that thoughts are experienced as perceptual and mind-independent (Kahn & Hobson, 2005). The following paper will compare these findings to Merleau-Ponty’s arguments on the role perception plays for consciousness in Phenomenology of Perception. It will argue that dream research seems to support at the very least the phenomenologist’s proposal that all consciousness is consciousness of something, and that the thinking self always finds itself already immersed as being-in-the-world.
What dream research seems to problematize, however, is the primacy Merleau-Ponty assigns to the body and reflection as conditions for perception. Though dream research has a story to tell about how the mind integrates bodily sense data, perception as Merleau-Ponty understands it, and reflective thought, I argue that it nonetheless suggests that these processes are not as integrated as Merleau-Ponty claims they are as a condition for consciousness.

Is Perceptual Experience Temporally Transparent?

ABSTRACT. Philosophers of perception tend to accept the transparency thesis about perceptual experience. In its general form, the thesis says that when one introspectively attends to one’s perceptual experience, it seems that one can successfully attend to the items one apparently perceives and their properties, and that one cannot successfully attend to the experience itself or its properties (see, for example, Harman 1990). According to the thesis, attention seems to be exclusively drawn to ‘outer’ perceived items and events rather than any private ‘inner’ analogues for them. This paper considers whether the transparency thesis should be accepted as a claim about the temporal features we perceive (like duration, order and simultaneity) and how the temporal version of the transparency thesis should be understood. Some philosophers (e.g., Tye 2003, Hoerl 2018) argue that we should accept a strong version of the transparency thesis for temporal features: that when introspecting one’s perceptual experience one seems to become aware of the temporal properties of perceived events but not of temporal properties of one’s experiences. But other philosophers (e.g., Phillips 2008, Soteriou 2013) reject this interpretation of the temporal transparency thesis, and argue for an interpretation on which we do seem to have introspective access to the temporal features of our experiences, but perceptual experience is nonetheless ‘transparent’ in the sense that we can become aware of such features only by focusing our attention outwards on the events we seem to perceive. These philosophers think that we can become introspectively aware of an apparent ‘match’ between the temporal features of what we seem to perceive and the temporal features of perceptual experience itself. I will defend the strong version of the transparency thesis for temporal features. 
But I also want to explain why this disagreement occurs: why some philosophers think that conscious experience is temporally self-intimating and others think that it isn’t. A satisfying solution to the question about transparency and time should explain what it could be about temporal phenomenology that gives rise to this disagreement. My answer will be that although we cannot directly introspectively attend to the temporal features of perceptual experience, we have a kind of indirect introspective access to those features—a kind of access that does not exist for other perceptual qualities like colour or shape. This is because there are mental events (including acts of perceptual judgment or directing one's attention) that are not temporally transparent, and which seem to causally determine and be determined by one’s stream of perceptual experience. Since such mental states are not temporally transparent, they constitute a kind of indirect measure of the temporal order of one’s perceptual experiences. I'll argue that this indirect measure does seem to make us aware of a match between experienced temporal features and temporal features of experience, but because the measure is imperfect, only a rough match.

09:00-12:30 Session S82: Book Symposium: Jacqueline Feke's Ptolemy's Philosophy: Mathematics as a Way of Life

This panel will examine the central arguments of Ptolemy’s Philosophy: Mathematics as a Way of Life, the groundbreaking book by Jacqueline Feke (Assistant Professor, University of Waterloo). Just published in October 2018, Feke’s book is the first systematic study of this second-century mathematician’s philosophy. The most significant achievement of this book is Feke’s reconstruction of Ptolemy’s unique and robust philosophical system. This reconstruction yields surprising insights about what Feke terms the radical and subversive nature of Ptolemy’s philosophical thought. Feke’s book deepens our understanding of Ptolemy as a scholar in two important respects. The first is that Ptolemy was knowledgeable about contemporary philosophical discourses and synthesized ideas from several traditions to construct his own system. The second is that ethical concerns motivated Ptolemy’s mathematical work.

Sylvia Berryman (University of British Columbia)
Jacqueline Feke (University of Waterloo)
Daryn Lehoux (Queen's University)
James L. Zainaldin (Harvard University)

Location: BUCH D221
09:00-12:30 Session S87: The Political Philosophy of Harm Reduction

The Political Philosophy of Harm Reduction
An important task for political philosophy is to identify principles, modes of reasoning, and institutional forms that can contribute in normatively acceptable ways to minimizing the harms associated with non-ideal aspects of social and political life. It would be better, for example, if there were no wars, but international legal instruments such as the Geneva Conventions seek to regulate the conduct of war so as to bar its most morally egregious potential consequences. Within the domain of public health, interventions that seek to mitigate harmful effects rather than to lower the prevalence of the underlying controversial behaviours are often termed “Harm Reduction.” Our panel aims to investigate the grounds, the scope, and the limits of harm reduction policies.
Harm Reduction first emerged in the 1980s as a grassroots movement aimed both at improving downstream conditions (in particular, susceptibility to HIV/AIDS) for vulnerable communities of drug users and (in many cases) at advocating for upstream social and political reform. Examples of harm reduction in this context include needle exchanges, overdose prevention (or safe consumption) sites, and Good Samaritan laws that protect drug users who report overdoses. Each of these examples aims to reduce the harms that accompany drug use without necessarily aiming to reduce the prevalence of drug use itself. Harm reduction has since been adopted as a policy framework by politicians, courts, and public health officials in order to address policy challenges that had heretofore been addressed primarily through the criminal law. Similar modes of reasoning, and of policy development based on such reasoning, have emerged in a number of policy areas involving behaviour that is widely seen as problematic, but which is difficult to eradicate through prohibitionist policies, or about which reasonable disagreement exists within a pluralistic society. Thus, three decades after its emergence, Harm Reduction is now applied in domains ranging from drug policy and policy related to sex work, to the constitutional regulation of secessionist politics, and many others besides, including female genital mutilation, polygamy, and abortion.
However, philosophers have been slow to take up scholarly inquiry into harm reduction. In this bilingual panel, we offer avenues for philosophical engagement with one of the most influential public and health policy approaches of the last three decades. Since it has been under-theorized by philosophers, the potential avenues for engagement are diverse. Our panel will consider such questions as: is harm reduction a morally distinct approach, justified by its own set of normative principles, or should it simply be understood as an application of consequentialist cost/benefit analysis? If it is a morally distinct approach, to what extent can the principles that animate it be extended and applied to cases beyond public health? Are there moral limits to harm reduction; are some activities (murder, torture) harmful in a way that makes a harm reduction framework morally inappropriate? Does engaging in harm reduction entail complicity with the harms, and is this a reason to avoid it? What is the connection between harm reduction and stigma; does it apply only to stigmatized activities, or can its reach be extended? How should we understand and count the relevant harms that harm reduction aims to reduce? What is the connection between harm reduction and recovery? Is harm reduction an essentially emancipatory practice, or does it undermine the potential for more radical or liberatory approaches?
Harm reduction is not simply a theoretical model; it is an on-the-ground set of context-specific practices. Successful philosophical theorizing about harm reduction should therefore be informed by close engagement with the empirical details of those practices. In addition to philosophers, our panel will include scholars from law and the social studies of medicine, as well as public health practitioners who work in harm reduction organizations.

Tony Mercer, Public Health England, How Philosophy can inform Harm Reduction policy

Ingrid Olsson, Rain City Housing, 'The Vivian’ and multi-layered Harm Reduction

Mathieu Doucet, University of Waterloo, Harm Reduction is a strategy, not a policy: the case of tobacco control

Nicholas King, McGill University, Harm Reduction is Neither

Lindsay Porter, University of Sheffield, Harm Reduction and Moral Desert

Samantha Brennan, University of Guelph, The Moral Limits of Harm Reduction

Daniel Weinstock, McGill University

Location: BUCH D323
14:00-17:00 Session S61: Social/Political Philosophy 4
Location: BUCH D201
Another Lockean Argument for Basic Income

ABSTRACT. In a recent debate on whether or not a Lockean argument for a basic income can be gleaned from Locke’s political writings, Daniel Layman rejects the view forwarded by Daniel Moseley (2011), who, drawing on Hillel Steiner (1994) and Henry George (1997), argues that original collective ownership over the world’s resources entitles persons to a right to equal shares of those resources. According to Moseley’s reading of Locke, once natural resources have been fully appropriated, people may have a right to a basic income as recompense for their original entitlements to equal shares. Layman rejects the left-libertarian contention that persons have a claim to an equal share of natural resources, what he, following A. John Simmons, calls the “Divisible Positive Community” reading of Locke’s claim that the world and its resources were originally held in common. Layman thus also rejects the correlative claim that in a world in which natural resources have been fully appropriated, governments have a duty to provide a basic income if they are not to run afoul of natural law. Instead, Layman defends the “Inclusive Positive Community” reading on the grounds that 1) it better addresses the fundamental tension that motivates Locke’s project, and 2) it makes better sense of Locke’s provisos. On this view, though the world is originally held in common, what we have an original title to is not equal shares of resources, but the equal opportunity to exercise exclusive control over natural resources for the comfort and preservation of oneself and one’s dependents within the bounds of the Natural Law. And on Layman’s understanding of Locke’s proviso that we leave “enough and as good” for others, persons have a right to a sufficient amount of resources that are as good as others’ for providing for their needs.
Consequently, governments have a duty to provide citizens with the amount of economic support that brings them up to the sufficiency level, and other forms of means-tested welfare may serve this purpose perfectly well.

This paper is an intervention in this debate. While I offer some support in favor of a version of the “Inclusive Positive Community” reading, as well as of the sufficiency requirement, I argue that Layman’s interpretation fails on the very grounds he identifies as lending support to it. I offer an alternative interpretation that meets the desiderata Layman himself sets out. On this interpretation, the nearly universal appropriation of natural resources coupled with the emerging complex of challenges posed by increased automation, changing labor markets, and both the unpredictable and predictable consequences of environmental degradation, increasingly put governments in the position of protecting regimes of private property that run afoul of Locke’s proviso and fail to protect natural rights. While the provision of means-tested welfare support is not sufficient to meet their obligations, I argue that a basic income, in addition to other policies, may hold better promise.

Political Instrumentalism and Power as a Trust

ABSTRACT. In this paper, I introduce two broad approaches to political power: the public trust and the private good views. I argue that instrumentalist accounts of political justification can be usefully interpreted as strict versions of the public trust view. Conceived in this way, political instrumentalism faces a serious challenge: it must give a principled basis for its rejection of the private good view of political power. I consider three prominent accounts of political instrumentalism, and argue that none successfully meets this challenge. Ultimately, my aim is to elucidate the character and significance of the justificatory hurdle facing political instrumentalism.

The Dangers of Meritocracy

ABSTRACT. Although the discursive significance of meritocracy has a history going back at least as far as Plato’s Republic, it formally enters the political lexicon in Michael Young’s 1958 The Rise of the Meritocracy. It comes as a surprise to many that Young’s fictional offering is a dystopic novel. He offers a description of a nightmare world where a new class has mastered the springs of the body politic: where “the eminent know that their success is just reward for their own capacity, for their own efforts and for their own undeniable achievement. They deserve to belong to a superior class.”

Young, who drafted the UK Labour Party’s successful 1945 election manifesto, lived through the early drift of his party away from egalitarianism toward meritocracy. His fear that that shift represented a grave danger to progressivism was made concrete with Tony Blair’s New Labour championing of meritocracy as the primary goal of progressive left politics. That the modern UK Conservative Party now echoes similar sentiments would not have surprised Young. Similar shifts are evident in Canada and the United States.

The concept of meritocracy has become so embedded in current conceptions of social justice that it can be difficult to sequester this element for analysis. Even egalitarian theorists such as Elizabeth Anderson often fail to fully engage the implications of the social entrenchment of a meritocracy as a potential threat to egalitarian social arrangements. Recently, the philosopher Kwame Anthony Appiah, and popular social commentators such as Ross Douthat, have drawn public attention to the value of Young’s original casting of the problem.

Drawing on Young’s analysis, I suggest that in the absence of a strong commitment to material equality, the pursuit of meritocracy as an ideal is both unstable and, ultimately, self-defeating. I argue, further, that the attempt to deliver a progressive politics along meritocratic lines is particularly toxic because it rationalizes inequality in a unique way: effectively anaesthetizing the otherwise enabling moral sentiments of shame and natural justice with a seductive alternative pairing of economic efficiency and corresponding social stratification. In brief, the patchy but nonetheless real successes realized in pursuing a commitment to meritocracy have created a new ruling class that has effectively inoculated itself against even the gentle tug of noblesse oblige that conservative thought has [nominally] championed historically.

Many progressives advocate a true meritocracy premised on the belief that the political right’s advocacy of meritocracy is disingenuous rhetoric operating merely to camouflage the skewed concentration of arbitrary social power. We are told that only progressives can really deliver on the promise of meritocracy. I argue that this is a mistake. In fact, it is increasingly clear that it is the success of a [partially] realized meritocracy that now powers the mechanisms for the transmission and reproduction of a new, shameless, form of class privilege.

Intellectual Humility: Situating the Virtue in the Context of Academic Philosophy
14:00-17:30 Session S68: The Authority of Normativity
Location: BUCH D313
Explaining The Normativity of Normative Thought

ABSTRACT. Normative thought has a special mode of presentation: It presents considerations as inherently, authoritatively guiding. Take thoughts about reasons, like the thought *that this action would avoid agony is a reason to do it*. Attending to the mode of presentation, it seems as though some stance-independent fact—that this action would avoid agony—counts in favor of performing the action, where counting in favor of is one way to be inherently, authoritatively guiding. This paper asks some neglected questions about this mode of presentation. Why do some thoughts present like this? Can we explain why normative thoughts present like this? And instead of assuming that this mode of presentation is a window onto a sui generis reality of inherent, authoritatively guiding relations, I argue that we can partially explain the normative mode of presentation with a subjectivist metaethic.

Rousseauian Metaethical Constructivism and the Contingent Grip of Moral Thought

ABSTRACT. According to Humean metaethical constructivism, the reasons that apply to any agent are those and only those harbored by that agent’s practical viewpoint, and it is a contingent matter whether any particular agent’s practical viewpoint harbors what most of us would recognize as moral reasons. One common worry about this Humean view is that recognition of the contingency of one's moral reasons undermines such reasons' motivational efficacy, particularly in the face of countervailing impulses. A second is that if Humean constructivism is correct, then the justification of holding any agent accountable for what we would recognize as immoral actions implausibly turns on whether the putatively accountable agent's practical viewpoint happens to harbor reasons forbidding those actions. In response to these concerns, I argue that once we clarify key details of our practical viewpoints and accountability practices, we will see that (1) our practical viewpoints generate formidable motivational commitments to the reasons they harbor, (2) holding agents who do not share our moral reasons accountable for flouting them is coherent and contingently permissible (as dictated by the content of our practical viewpoints), and (3) our accountability practices play a key role in bolstering agents' motivational commitments to reasons with a recognizably moral form, but the widespread grip of practical viewpoints that harbor reasons with an egalitarian moral content is a contingent and fragile social achievement. Because much of the paper's argument relies on Rousseau's idea that a fundamental, pervasive, and potent human drive is the need to be esteemed by others, be it as an equal or a superior, I refer to the resulting variant of the Humean view as Rousseauian metaethical constructivism.  


The Metaethics of Insanity and the Normative Authority of Law
Making Sense of Reasons for Gratitude: A Phenomenological Approach

ABSTRACT. There is a puzzle over the idea of a debt of gratitude: if to incur a debt is to incur a duty, then how can there be a debt of gratitude? After all, a genuine act of beneficence is one in which a benefactor confers a benefit upon a recipient with “no strings attached;” it represents a gift. In this way, it creates a special relationship between benefactor and beneficiary that is unlike a lender-borrower relationship, in which the borrower is duty-bound to repay her financial debt to the lender. But then the very idea of a debt or duty of gratitude seems to cast the benefactor-beneficiary relationship as a kind of lender-borrower relationship, contrary to how gratitude is typically experienced. As Claudia Card (1988) once quipped: “A duty of gratitude sounds like a joke.”

Philosophical reactions to this puzzle vary from attempts to defend the idea of a debt of gratitude (Manela 2015) to rejecting the idea of any such debt (Wellman 1999). We argue that (i) this alleged puzzle and the various philosophical reactions to it result from philosophers working with a limited set of philosophical resources, and that (ii) careful reflection on the phenomenology of ordinary instances of gratitude reveals that there is a distinctive role for reasons that is normative for gratitude—reasons of reciprocity—which preserves and makes sense of there being debts of gratitude without those debts being duties.

14:00-17:30 Session S71: Applied Ethics 4
Location: BUCH D315
What Do Climate Change Winners Owe?
DISCUSSANT: Gregory Andres

ABSTRACT. A complete treatment of the issue of climate justice must explicitly account for the fact that there are some who end up benefiting overall from the changing climate, whom we can call “climate change winners”. This means that the positive climate externalities affecting these groups outweigh the negative externalities, where the externalities are effects of emitting greenhouse gases which are unpriced by the emitters. To our knowledge, climate change winners have not been discussed in the climate ethics literature. We argue for a surprising claim about these winners: they do owe compensation for their gains from climate change, but to the emitters, not to the losers. This is because the emitters should compensate the losers in full, and that is of first priority; but insofar as the emitters generate positive externalities, they are also entitled to compensation from the climate change winners.

Our first claim is that climate net winners should not be able simply to enjoy their gains. We consider two arguments for the claim that they should. First, our intuitions about gains and losses are often asymmetrical. For instance, we sometimes think that those who do damage to others owe compensation, but that those who benefit others through their actions do not deserve anything. In response, we point out that asymmetric intuitions do not imply asymmetric normative claims, and also that it would be more efficient for society if both the positive and negative externalities were treated symmetrically and subject to compensation accordingly. Second, one might think that the winners deserve or have merited their gains by actively responding to climate conditions. In response, we say that distinguishing between active and passive responses to climate change is very difficult, but insofar as this distinction can be motivated, we grant to the objectors that active winners may be less subject to obligations than passive winners.

Our second claim is that the climate net winners owe their winnings to the emitters, not to the climate net losers. We appeal to what we call a hierarchy of transfers: there is priority in redressing the losses of the net losers, but those losses are the responsibility of the emitters, even if the cost of redressing them exceeds the initial benefits of emission. First, an account which treats gains and losses symmetrically has theoretical virtues in terms of simplicity, explanatory power, and applicability. Second, from a social point of view, we should reward those who generate positive externalities in line with their contributions to society.

The Case of the Interpretation of the Notion of “Ethics” in the Work of the IPCC

ABSTRACT. The assessment reports of the Intergovernmental Panel on Climate Change (IPCC) map out the knowledge of the global research community in climate science and climate policy. In 2014, Working Group III, “Mitigation of Climate Change”, with contributions from several experts, including a few philosophers (John Broome, Lukas Meyer, etc.), produced the IPCC’s first chapter devoted to ethics. The intention of this chapter (IPCC, 2014, ch. 3), on which we will focus our attention, is not to answer directly moral problems of the form: why act in the face of climate change? Or: what are our moral responsibilities toward future generations with respect to climate change? Rather, this IPCC chapter aims to provide decision-makers with ethical tools (concepts, principles, arguments, methods) to support decision-making. The two main dimensions of climate ethics raised in this IPCC report, namely questions of justice and questions of values, lead us to weigh a variety of arguments that differ in their aims. On the one hand, ethical issues are considered in terms of rights, moral responsibility, historical responsibility, intragenerational justice, and intergenerational justice. One thinks here of the discussions surrounding the GHG emission thresholds (and compensations) deemed just and equitable in view of the asymmetrical damages between developed and developing countries; between affluent populations and economically and socially disadvantaged populations; and between present and future generations. On the other hand, emphasis is placed on the values at play in political discussions, or on those driving concrete actions in the management of climate risks.
In this paper, we will argue, first, that the IPCC documents dealing with the notion of ethics neglect certain recent developments in climate ethics, such as the debates surrounding non-ideal and ideal theories of climate justice (Gajevic Sayegh, 2016; Heyward and Roser, 2016). Second, we will criticize a narrow conception of climate ethics divided between theories of justice and theories of values. The ethical category of climate change is much broader than the domain of climate justice alone. Moreover, we know that in the English-language literature (Gardiner et al., 2010; Shue, 2014; Roser and Seidel, 2017), the field of climate justice dominates climate change ethics, which can certainly also be understood from other theoretical anchorings (virtue ethics, the ethics of care, pragmatist ethics, ecofeminism, land ethics, etc.). The critical and empirical interpretation of the notion of ethics in the work of the IPCC supports this line of reasoning.

The More Speech Objection: Mill, Langton, and Antifa

ABSTRACT. The recent realization that there are anti-racist activists who are willing to punch Nazis has led to an intense debate on the norms of liberal societies. One of the main objections to responding to hate speech with violence is the more speech objection, which states that the only appropriate way to respond to hate speech is with more speech. More speech advocates generally understand speech to mean assessing the veridicality of the claims made, meaning that they are making a much wider objection than one simply against violence; instead, they are objecting to the diversity-of-tactics approach to combating hate speech. In order to better understand this objection, I will trace its roots from John Stuart Mill's harm principle to its use by contemporary political pundits. I will then assess the objection by examining whether more speech is sufficient to combat hate speech. Drawing on Rae Langton's application of speech act theory to pornography and Lynne Tirrell's inferentialist analysis of the genocidal rhetoric in Rwanda, I will argue that hate speech is harmful in ways that make it particularly difficult to counter with more speech. I point to two pragmatic functions of hate speech that cannot be countered by more speech alone: epistemic diminishment and illocutionary hijacking. Epistemic diminishment is the way in which the inferential links between racist speech and the essentializing hierarchies of classical racialism undermine the speech acts of its targets. Illocutionary hijacking is the way in which hate speech can recruit for support the very speech acts that are meant to counter it. The existence of these two phenomena shows that more speech alone is not sufficient to counter hate speech.
While this does not provide an answer to the question of whether violence is an appropriate response, it does defuse one of the major objections to antifascist action and shows that countering hate speech will require a wider set of tactics than merely more speech.

Answering for the Past

ABSTRACT. In September 2018, the Senate Judiciary Committee held public hearings on Justice Brett Kavanaugh's Supreme Court nomination. These hearings included testimony from Christine Blasey Ford concerning an alleged assault perpetrated by the Justice when he was a teenager. During this process, commenters repeatedly emphasized the passage of time. The fact that the incident occurred thirty-six years ago was frequently mentioned in an effort to question the accuracy of her memory and to undermine her motives, but also to serve an exculpatory function. As Rep. Kevin Cramer commented:

What if something like what Dr. Ford describes happened — it’s tragic, it’s unfortunate, it’s terrible, it should never happen in our society — but what if [there’s] 36 years of a record where there’s nothing like that again, but instead there’s a record of a perfect gentleman, of an intellect, of a stellar judge?

Cramer did not question the reality of events. Instead, he suggested that the passage of time was somehow exculpatory, especially with the added qualification of supposed good behaviour during that time. In this paper I would like to explore this kind of defence and ask whether it is possible to remain responsible for crimes committed in the distant past. I ask, like Cramer, “Even if it’s all true, does it disqualify him?”

This paper will explore Angela Smith’s account of responsibility as answerability and show how her conceptual frame may be extended to answer questions of responsibility over time. Drawing on Delia Graff’s work on interest relativity, I outline the conditions for maintaining answerability. I will then use this framework to show how an offender could remain responsible despite what might be a thirty-six-year gap from the offence. What matters is whether the offender maintains the conditions of answerability, and these have no time limit.

Interestingly, the answers concerning responsibility will also prompt a second line of inquiry. I ask, when offenders do not satisfy conditions of answerability, might it still be appropriate to blame them? While this question may not at first seem all too different, I will review recent theories (particularly from theorists such as T.M. Scanlon, Michael McKenna, Angela Smith, and Miranda Fricker) that position blame as essentially communicative. Conditions for forswearing blame include the offender having received the message the blame aimed to communicate and having responded in a way that assures the victim that the damaged relationship can be adequately mended. Thus, to ask whether the offender remains responsible for a crime is different from asking whether he is to blame for it. We have reason to think that even if an offender’s claims of good behaviour since the offence was committed were sincere, he may very well still be to blame, depending on his response towards the victim. Blame can extend even when conditions of answerability no longer hold.

14:00-17:30 Session S72: Epistemology 4
Location: BUCH D323
Epistemic Processes and Socially Problematic Beliefs

ABSTRACT. Today’s socio-political milieu leaves no shortage of socially problematic beliefs. Our neighbours to the South seem to demonstrate this weekly: every public discussion highlights a deep ideological divide. Racism’s roots in Canada and the United States leave lingering effects. Perpetuation of bigotry, often disguised as religious commitment, lives on. Deep-seated beliefs like these are the kind we want to condemn; but we also want to convince those holding problematic beliefs to change their minds. There is good reason to think socially problematic beliefs can be acquired through rational means. Fundamentalist communities, like secular communities, have social structures in place informing the individual’s conception of which members can act as ideological authorities. Additionally, whether individuals are secluded in a fundamentalist community or are members of secular society, they rely on trustworthy individuals to convey information for knowledge acquisition. Functionally, the role of epistemic authority and epistemic trust is very similar, regardless of social conditions. Individuals rely on members of their own epistemic tribe – those on whom a person relies for survival, companionship, and delectation – to share reliable information. Since belief acquisition and persistence are social, belief change is difficult to bring about. There is empirical evidence in cognitive science suggesting we are not in a psychological position to assess our own beliefs, and moreover, mere contemplation rarely brings about a change in perspective. Since individuals cannot investigate the grounding of every belief they hold, we can see why the bulk of belief acquisition occurs through an individual’s social and cultural groups (Smith, 2016: p. 33). Significant belief change, then, must overcome the epistemic social structures in place – belief change requires groundwork and is typically not an instantaneous event.
A person who is socially comfortable is unlikely to undergo significant belief revision in the first place. Those who are content appear to be poor candidates for change, since stable environments are not conducive to conversion experiences (Smith, 2016: 42-43). Extending the above considerations, we highlight a serious problem: if individuals are rational and comfortable in their problematic beliefs, how could we change their minds? In my research, I search for an epistemic opportunity for normativity when it comes to deeply held, yet socially problematic, beliefs. Individuals clinging to bigotry are those we not only want to condemn, but also want to convince otherwise. If argumentation and discussion do not present opportunities for changing someone’s mind, then maybe we should explore other options. If a fundamentalist is justified in her beliefs, however, just as the secular liberal is, we may not have many tools at our disposal. In other words, I discuss our potential options: are there any available sources for normatively evaluating (epistemically) socially problematic beliefs that are, for the agent, rational in a particular social environment?

Understanding Others

ABSTRACT. Consider the following case.

Hannah believes in gender equality, but growing up in a patriarchal society she picked up the assumption that her duty as a wife was most important. Consequently, Hannah has devoted most of her time and energy to her marriage at the cost of excelling in her profession. Moreover, she feels an exhausting sense of guilt when she invests too much time or energy at work. At some deeper level, however, she resents this, which takes a toll on her marriage. Recognising all of this, Sarah, a close friend of Hannah’s, points out to Hannah that she didn’t have a duty to sacrifice her work for the sake of her marriage to the degree that she tended to. Hannah acts on this advice and nothing else of significance changes in her life. Eventually, her marriage stabilizes as a balance begins to emerge between her marital commitments and her work, though acting in accordance with the advice comes at some emotional cost, such as bouts of shame at neglecting her adaptive preference to be first and foremost a dutiful wife, and exhaustion caused by trying both to discharge the responsibilities of a traditional wife and to excel in her profession. But Sarah’s counsel keeps her from giving up.

Sarah understands Hannah; what does her understanding entail? According to a prominent non-naturalist, understanding people is different from understanding natural phenomena, for the former is “not amenable to a third-person, objectivizing stance that is characteristic of the natural sciences” (Grimm 2015). On non-naturalism, to understand others one needs to be able to see their goals as choice-worthy or desirable, and to recognize them as such oneself. However, what is distinctive about the case above is that Sarah understands that there is a gap (GAP) between the values that guide Hannah’s actions and those that, deep down, Hannah considers choice-worthy. Moreover, she understands that Hannah’s inner world must be affected by GAP. Someone who doesn’t get this about Hannah doesn’t really understand her. Sarah’s understanding would be inadequate if, in accordance with non-naturalism, it consisted merely in seeing and recognizing for herself the goals that guide Hannah’s actions. Because of GAP there is something specific it is like to be in a situation like Hannah’s, but if Sarah tries to get into Hannah’s shoes to understand that, she will lose the epistemic privilege she has from a third-person perspective to understand that there is GAP in Hannah’s life, and that it is morally important. Re-enacting Hannah’s perspective obstructs genuine understanding of Hannah, for things are not luminous from there. Giving up the doctrine of the guise of the good won’t help the non-naturalists, for assuming that Hannah doesn’t regard the ends she desires as good is orthogonal to how GAP influences her experiences. Experience provides us with feedback – e.g., the way Hannah’s life goes once she starts acting on Sarah’s advice – that can show how Sarah’s understanding of Hannah is better than that of someone who merely re-enacts her perspective.

14:00-17:00 Session S76: Indigenous Philosophy
Location: BUCH D307
Neither White nor Ward: Métis Racialization and The Formation of Whiteness in Treaty Era Canadian Prairies (1870-1920)
DISCUSSANT: Sandra Tomsons

ABSTRACT. On June 12, 1885, a Métis woman named Lucie Gladu proved she was ‘civilized’ enough to transfer from an ‘Indian’ to a ‘Half-Breed’ through Canada’s bureaucratic racial categories. Yet, in the census of 1901, Lucie is recorded as ‘R’, for ‘Red’ — or ‘Indian’; her children, due to their white father, were labelled as ‘White’ (Adese 2011). Lucie could either be ‘Indian’ or ‘White’, but never who she really was: Métis. Losing Métis identity in this system of racial dualisms is by no means uncommon in the history of the Métis. Neither white nor ward, the Métis disrupt colonially imposed racial binaries while also shaping settler ‘whiteness’. I situate my dissertation in the Canadian prairies during the ‘Treaty Era’ of Canadian-Indigenous relations (1870-1920) where Métis racialization was first systematized in conjunction with an emergent Canadian settler whiteness. In researching this emergence I address two problematics. First, I investigate which historic notions of race pigeonholed the Métis into colonially imposed binaristic racial categories, incongruous with their identities as Métis. Next I disclose how these racial categories shaped Canadian whiteness. While much important scholarship unearths the contemporary avenues by which the Canadian state misrecognizes the Métis as “culturally ambivalent” (Van Kirk 1985) or ‘racially mixed’ (as opposed to an ethnically distinct and nationally conscious People), the scope of this research remains limited to the post-1982 ‘Charter Era’ of Canadian politics (Andersen 2008, 2011, 2011a, 2013; Durand et al. 2016; Godmann and S. Delic 2014; Macdougall 2014). My study fills this gap in the extant scholarship by shifting historical focus and critically examining the overlooked origins of Canadian settler identities. How did the Métis problematize Canadian ‘whiteness’? How was this ‘whiteness’ shaped by presumed Métis racial ‘mixedness’? 
Embarking from these questions, I argue that Canadian ‘whiteness’ emerged from Métis racialization. I see value in Métis sociologist Chris Andersen’s notion of “racialization” as a process by “which certain physical and cultural differences are emphasized, elevated, and distinguished between such that races are produced and legitimized” (2011:57). By tracking how the discursive ‘racialization’ strategies codified through the legislation and census categories of 1870-1920 adapted to produce (white) citizens out of the Métis through hierarchically organized and scientifically spurious ‘racial orders of things’ (Foucault 1970; Stoler 1995), my research draws from the deep well of ‘governmentality’ literature concerned with the production of citizens (Barry et al. 1996; Burshell et al. 2001; Curtis 2001; Dean 2010; Rose 1999: ch.6). I also draw on David Scott’s (1995) “colonial governmentality,” and the related canon of Indigenous scholarship (Alfred 1995; Day 2000; Moreton-Robinson 2003; Razack 2002; Simpson 2000). For more than a century and a half the Métis have endured the dilemmatic imposition of a binary ‘racial order of things’ (Foucault 1970; Stoler 1995) or hybrid ‘mixedness’: denied the authenticity and stability of ‘an Indian or white identity’ (Macdougall 426). Perhaps we can reject this colonial binary by looking to the historic origins of how Canadian whiteness was formed concurrently with Métis racialization.

Whose land? The pedagogical power and philosophical limits of "connecting to nature"

ABSTRACT. I explore ethical problems with the common educational aim of helping students “connect to nature.” Outdoor initiatives guided by such an aim often generate great enthusiasm, but they have a dark underside. Is “connection to nature” a valid and effective educational ideal? Or is it an ideal born out of colonial and economic privilege, conceptually and politically flawed, ignoring Indigenous inhabitance, and attempting to impose a romantic vision of nature on students whose vision is elsewhere? Can we help students “connect to nature” yet also come to understand the history of colonization surrounding the land we are on?

Bruce Ferguson and Indigenous Philosophy
DISCUSSANT: Alison Wylie
14:00-17:00 Session S79: Philosophy of Social Science
Location: BUCH D205
Epistemic autoregulation of social systems: a conceptual framework applied to the case of the Bank of Canada

ABSTRACT. Socioepistemic systems are entities that “house social practices, procedures, institutions, and/or patterns of interpersonal influence that affect the epistemic outcomes of its members” (Goldman and Whitcomb 2011). As such, we can think of them as institutional and infrastructural arrangements wherein multiple individuals interact to attain an epistemic goal, the precise identity of these individuals not being a determining factor in the persistence of the system. This notion, originating from the field of system-oriented social epistemology, is of great relevance for many philosophical, epistemological, and political issues. This talk will focus on the capacity of socioepistemic systems to regulate themselves toward attaining better epistemic outcomes, a capacity I call “epistemic autoregulation”. In contexts like public policy, for our deference to the authority of expert organizations to be rational, we need to be able to reevaluate our degree of justification. I will therefore argue in favor of a way to undertake empirically informed conceptual analysis of socioepistemic systems. We will proceed by exemplification, using the case of the research infrastructure at the Bank of Canada.

Central banks are organizations responsible for monetary policy in a given currency zone. They were given a high degree of operational independence in the 1990s, and they provide most of the academic research on the matter (Dietsch, Claveau, and Fontan 2018). While some scholars have attempted to demystify the epistemological practices of central banks (Marcussen 2009; White 2005; Mudge and Vauchez 2016), the complete picture is too obscure to allow for a satisfying evaluation of their expertise. Indeed, it will not suffice to say that our deference to central banks is justified by an apparent “scientization”. This new ‘ethos’ of central banking could mask unintended (detrimental) consequences (Claveau and Dion 2018), so we ought to gather more information.

To this end, we deployed computer-assisted text analysis methods (Aggarwal and Zhai 2012; Poudat and Landragin 2017) to gather and analyze a large corpus of official documents (e.g. speeches, research papers, annual reports) originating from the Bank of Canada. Efforts were aimed at conceptualizing a “grid of analysis” of its research infrastructure. Such a grid is useful because scholars in social epistemology and STS (science, technology and society) have identified beneficial characteristics for socioepistemic systems (for example, in philosophy: Longino 1990, 2002; Simon 2010). Therefore, by accurately describing the system, we can evaluate the extent to which it emulates these beneficial characteristics. My contention is that the more the system autoregulates itself toward the ideal, the more rational our deference to its authority is. To sum up, the contribution made by this talk is twofold. First, we utilize the philosophical notion of socioepistemic systems to conceptualize and evaluate a complex entity found at the heart of contemporary public policy; comparable philosophical issues could profit from this conceptual framework. Second, computer-assisted text analysis is fairly new to this type of enterprise, so the methodological challenges touched on in this talk could be useful for philosophers who want to exploit similar methods.

Lessons From Economics: How Equilibrium Explanations Fail

ABSTRACT. My paper investigates whether equilibrium explanations are successful in economics. Philosophers of science tout the success of equilibrium explanations in numerous disciplines. However, I show how they fail in economics, and then extract lessons that apply to other disciplines. Equilibrium explanations are highly idealized explanations of dynamic systems through an equilibrium. Said explanations remove all causal information to reveal a system’s deeper, underlying structural relationships. Thus, philosophers generally treat equilibrium explanations as non-causal explanations (Sober 1983). Removing causal information allows equilibrium explanations to apply to numerous, heterogeneous systems. This explanatory scope provides us with a deep understanding of dynamic behavior. The depth and scope of equilibrium explanations are the source of their success (Potochnik 2007; Rice 2012, 2015; Strevens 2008). For example, Leon Walras (1954) provides an equilibrium explanation for why the supply of an asset matches the demand for that asset at a price. The explanation assumes that there is a set quantity of an asset in any market, that individuals are assigned a distribution of fixed preferences for that asset, and that individuals desire to trade according to their preferences. If individuals demand more of that asset than the market supplies, the asset’s price increases; if the market supplies more of the asset than individuals demand, the asset’s price decreases. Individuals will increase or decrease their supply of (demand for) that asset until trade converges towards a price at the fixed quantity. Hence, the asset’s price converges towards an equilibrium between the supply of and the demand for that asset. The result is a price agreement, which is an equilibrium. A further consequence is that markets are Pareto optimal and maximally efficient at allocating assets.
Yet there is mounting empirical evidence that asset prices are rarely in equilibrium in real-world markets (Arthur 2015). The empirical evidence also shows that markets are neither Pareto optimal nor efficient at allocating assets. For example, it is a typical market scenario that a change in asset prices can lead to rapid short-term capital gains and losses and destabilize the market. Individuals desire information relevant to fluctuations in asset prices. Those first individuals who observe an increase in an asset’s price might infer that this increase is the market’s reaction to changes in its conditions. Said individuals might profit by buying and hoarding the asset in anticipation of further price increases. However, this activity itself affects the asset’s price, which in turn results in new hoarding by more individuals, which further amplifies the destabilizing process. Non-equilibrium dynamics is prevalent, and the above scenario is one of many where equilibrium explanations have little application to real-world markets (Bowles et al. 2017; Epstein and Axtell 1996). Hence, the explanatory depth and scope of equilibrium explanations, which are a success in other disciplines, are a failure in economics. I conclude that including causal information in equilibrium explanations increases their explanatory success in economics, and better captures the reality that economic behavior is rarely in equilibrium.

Risk and Undesirable Events: An Ontological Proposal

ABSTRACT. What is a risk? In non-technical contexts, the word "risk" refers vaguely to situations in which it is possible but uncertain that an undesirable event will occur (Hansson, 2018). This undesirable aspect of risk is also included in technical definitions of risk. For example, risk is sometimes defined as "the cause of an undesirable, possible, and uncertain event," and sometimes as "the probability of an undesirable, possible, and uncertain event" (Ibid.). In line with the definitions proposed here, Rescher (1983) notes that a risk has two characteristic features: (1) the anticipation of a (possible or probable) event, and (2) the undesirability of that event for at least one agent.

In virtue of what is an event undesirable for an agent? This depends, at least in part, on the agent's preferences. Suppose, for example, that the weather forecast announces that it may be cold tomorrow. Is there a risk, for a given agent, of being cold tomorrow? If the agent prefers to be warm, there is a risk; if the agent prefers to be cold, there is not necessarily one. Moreover, the same event can be undesirable for one agent and desirable for another. In a game, for example, the defeat of the other agents is desirable for the agent who wins.

The idea that a risk stands in relation to an agent's attitudes is common in the literature on risk. Douglas and Wildavsky (1982), for example, conceive of a risk as the result of a community's selection of a danger according to that community's values and interests. I take these attitudes to determine an agent's preferences, and these preferences to determine which possible events are undesirable for that agent. Several conceptions of preferences have been proposed in the literature (Hausman, 2012). For example, some researchers conceive of preferences as subjective comparative evaluations (Hansson and Grüne-Yanoff, 2018).

It thus proves essential, in order to understand what a risk is, to analyze the relation between risk and an agent's preferences. To this end, I first propose an original conception of risk as a causal power, a conception that accounts for the two aspects of risk identified by Rescher. I then propose a conception of preferences that builds on those found in the literature. Finally, I propose a method for determining which possible events are undesirable for an agent as a function of that agent's preferences. These three elements constitute the foundation of an ontological formalization of risk that can be articulated with applied ontologies in the biomedical domain.

Inclusive Anchor Individualism: A Comparative Model of Individualist Social Ontologies

ABSTRACT. In dialogue with Epstein’s criticism of individualist metaphysics of frame principles (e.g. Hart’s rules of recognition, Searle’s constitutive rules) (Epstein 2014a, 2014b, 2015, 2016a, and 2016b), this paper provides a new comparative model of individualist theories of social ontology along three lines of conceptual inclusivity. This model develops Epstein’s suggestion of three forms of counter-exampling likely to debase individualist models (“Replies to Guala and Gallotti,” 2016), highlighting the theoretical commitments, common to some individualist theories, that make such defeat likely. Given the bases upon which such theories are open to Epstein’s critique, I show that some individualist anchor theories will survive such counter-exampling through the inclusivity of their conception of how frame principles are set up in the first place, and through the variety of types of facts which underwrite such processes. On the exclusive end of this model are anchor theories which (1) take a homogeneous view of possible types of individualist facts constitutive of social facts (admitting as few as one type, e.g. facts about certain mental states like beliefs); (2) adopt a specific thesis about anchoring processes (that there is just one way that frame principles are metaphysically anchored, which they specify); and (3) are demanding with respect to the capacities required to participate in the social construction of frame principles (e.g. cognitive or other capacities which restrict facts relevant to (1) and (2) to facts about a limited set of adult human beings). On the inclusive end of the model are anchor theories which (1) take a broad view of the possible types of individualist facts which anchor frame principles (e.g. 
admitting of multiple types of facts about mental states, social practices, tacit conventions, and/or others); (2) adopt only a generic thesis (Epstein 2015) of anchor individualism with respect to anchoring processes (that frame principles are anchored by facts about individuals, but without committing to an account of how this occurs for all frame principles); and (3) are inclusive with respect to the types and content of anchoring facts, so that facts about individuals of varied capacities are admissible as anchors. Given this spectrum, I identify the cascading negative impact of commitment to a homogeneous view of anchoring facts in (1) (e.g. we-attitudes about the relevant frame principle, as is the case for Searle), and additionally argue that more inclusive models are sufficiently flexible to accommodate some of Epstein’s interest in de-prioritizing the role of human agents in the study of sociality and social construction. As inclusive models avoid the pitfalls of individualist social ontology that Epstein’s published worries often address, the spectrum model also highlights a direction for further research: the anchor pluralist project (characterized by Epstein’s positive view) must attend to debasing inclusive forms of anchor individualism, and it must do so by demonstrating the presence of non-individualist facts in the metaphysical anchors of at least some social facts.

14:00-17:00 Session S80: Perspectives in Continental Philosophy of Religion
Location: BUCH D213
On the Relationship between Divinity and Materiality in the Continental Tradition
But where the danger is, also grows the saving power: Religion as Cause and Cure of Environmental Crisis
Paul Ricoeur's Journey to Ethics and Justice
Auto-Affection and the Question of God
14:00-17:00 Session S81: Contractualism and the Market

The relationship between Scanlonian contractualism and markets has been neglected, and further research promises to shed light both on contractualism and on the ethics of markets and their participants.

The contractualist framework introduced by Scanlon in his (1982) and worked out in his (1998) and (2008) understands moral principles as the outcome of an agreement between those who are motivated to seek governing principles that none who are similarly motivated could reasonably reject. The social contract described in this way is not the equilibrium point of bargaining between rational self-interested agents, as in the Hobbesian tradition. It instead pursues the Kantian thought that morality consists in the universal laws that would be legislated in a ‘kingdom of ends.’ But Scanlon’s contractualism is distinctive in emphasizing that the principles established by such an agreement characterize a special moral relationship of mutual recognition. Thus Scanlonian contractualism should be understood as an attempt to capture the relational dimension of morality, or ‘what we owe to each other.’ In this way it potentially complements, or stands as an alternative to, accounts of non-domination and relational equality proposed by philosophers such as Elizabeth Anderson, Niko Kolodny, Debra Satz, Samuel Scheffler, and Seana Shiffrin.

While some of these latter relational theorists—especially Anderson (2017) and Satz (2010)—have explicitly explored the moral dimensions of the marketplace, there has been little focus on these issues by contractualists. Consider Scanlon’s much dog-eared (1998, 229–42) discussion of aggregation, in which he attempts to explain why certain situations require prioritizing the complaint of an individual over a sum of complaints of the many. This foregrounding of personal complaints is ambiguous when assessing the moral status of market exchange: does it undermine the moral importance of the welfare effects of markets, or does it strengthen their claim to protect individual rights related to negative freedom and self-ownership? Is contractualism optimistic or skeptical about the moral status of markets? Or does the question highlight an emptiness at the heart of the theory?


Julian Jonker, Assistant Professor of Legal Studies and Business Ethics, The Wharton School of the University of Pennsylvania
Nina Windgätter, Assistant Professor of Philosophy, University of New Hampshire
David Silver, Associate Professor of Business and Director of the W Maurice Young Center for Applied Ethics, University of British Columbia

Location: BUCH D304
Who is Wronged When Markets Fail?

ABSTRACT. Market teleology derives moral principles for market participants from the conditions of perfect competition. There are prominent examples in the literature. Christopher McMahon characterizes the implicit morality of the market as those principles generated by economic theory when economic efficiency is taken as an end. Joseph Heath develops a market failures approach to business ethics which prohibits market participants from taking advantage of failures of the conditions of perfect competition. Alan Wertheimer describes an exchange as exploitative when a seller commands a price that could not be set in a hypothetical competitive market. Teleological theories of market ethics suggest a particular diagnosis of market wrongs like exploitation, collusion, rent-seeking, insider trading, and unfair competition: such practices undermine the welfare effects and virtues of well-functioning markets.

But market teleology fails to explain our judgments that unethical market practices are not only wrong, but wrong particular others. For example, collusion wrongs customers who must pay higher prices; unfair competitive practices harm competitors; and insider trading harms market participants who would have made different trades had the private information been made public. It is tempting to assimilate these market wrongs to traditional moral wrongs like deception, but any such attempt will be unsuccessful if it does not account for the distinctive character of marketplace relationships between competitors and between buyer and seller.

Market teleology can acknowledge the relational character of market wrongs once markets are valued for the right reasons. Under conditions of perfect competition, the relationship between buyer and seller is one of non-dominance, and the relationship between competitors is one of fair play. Well-functioning markets are valuable primarily because they are constitutive of these valuable relationships, and only secondarily because they produce economic efficiency and cultivate virtues such as creativity and thrift among market participants. Unethical market practices are wrong not because they undermine efficiency or marketplace virtues, but because they impair valuable ways of relating. Valuable ways of relating are constituted by relational interests, i.e., powers and expectations that the relatives have with respect to each other. So an unethical market practice that impairs a relationship infringes the powers or undermines the expectations that are constitutive of that relationship. In this way, an unethical market practice is not only wrong because it impairs a relationship; it wrongs the relative whose relational interests are set back.

This conception of market teleology also suggests a new way of thinking about the limits of the market. Markets are inappropriate not where they crowd out or debase the value of the thing subjected to market forces, but where they undermine those very forms of relating that are the valuable end of a well-functioning market.

Contractualist Insights on Employment Relations: The Unjustly Constraining World of Online Content Moderation

ABSTRACT. Workers at online content moderation firms are tasked with viewing content that is violent, sexualized, hateful, or otherwise flagged by users as objectionable. While it is true that workers voluntarily sign these contracts, they do so under unjust conditions that, among other injustices, create epistemic dysfunction undermining rational decision making. Once workers take these jobs, the break structure, the high-pressure decision-making context, nondisclosure agreements, and the trauma-inducing material they view exacerbate background epistemic dysfunction and harm workers in their capacities as knowers.

In this paper, I argue that contractualism helps us gain insight into what is going wrong in these workplaces, and in the job market that makes content moderation an attractive option for workers, as well as how it can give helpful guidance on the corrective measures we ought to take in the short and long term. I focus on the ways in which workers are asymmetrically constrained vis-à-vis employers, and on how the work at these firms further constrains workers as knowers in ways to which they ought reasonably to object. The injustices of the job market also intersect with injustices tied to social group categories such as race, gender, and class in ways that exacerbate background social injustice.

I argue that contractualism provides us with an account of the interests that workers have as knowers: interests in making accurate credibility assessments, in having the shared hermeneutical resources they need to make sense of their experience and articulate it to others, in having access to epistemic resources, and, more generally, in developing and maintaining epistemic agency and participating as equals in knowledge production and exchange. One of the problems with the low-wage job market is that it is predicated on, and exacerbates, injustices that undermine these workers' epistemic interests. Content moderation firms stand out for actively undermining their workers' interests in ways that cause material, symbolic, and emotional harm in addition to epistemic harm.

I argue that seeing this injustice shows the methodological importance of starting with dysfunction, as opposed to starting with the assumption that background conditions are just. Much of the injustice in these employment situations happens in the background. By focusing on the background features of employment markets that constrain workers in unreasonable ways, my analysis also tells against a common objection to contractualism, namely that it is excessively voluntaristic.

Contractualism, Meaningful Work and the Purpose of the Firm

ABSTRACT. There are many ways that work has the potential to be meaningful. In this paper I focus on work that is “intrinsically meaningful”, which I define as work that aims to benefit the people who use the resulting products or services. Work at a company that aims to maximize profits for shareholders cannot have this kind of meaning, even if its strategy for making profits revolves around creating products that benefit its customers.

In this paper I develop an argument stemming from the Kantian social contract tradition that individuals have a “moral interest” in work that is intrinsically meaningful. This argument is based on a conception of the human person in which we are most actualized when we fill our lives with meaningful activities and relationships in which we act for the sake of other persons (or other sources of objective value). With this conception of the human person in place, the contractualist framework entails that we should structure work in society so that each person has access to intrinsically meaningful work, unless there are other moral interests that are undermined by this way of arranging work. This means that, other things being equal, we should understand the normative purpose of the firm to be the creation of some valuable product or service rather than the maximization of profits.

The bulk of the paper wrestles with two theoretical difficulties for my view. First, the “meaning-seeking” conception of the person I rely upon conflicts with the more individualistic conception of the person that has historically characterized contractualist moral thought. While Marxists and feminists have criticized this “de-socialized” conception of the person, Hampton has defended its use as a device for freeing people from having their social natures used against them when we exercise our moral imaginations. I argue, however, that contractualism can adopt the fuller, more social conception of the person without losing the ability to object to people's social natures being used against them.

The second theoretical difficulty concerns the extent to which the social contract metaphor is compatible with this more socialized conception of the person. One virtue of social contract theories is that they offer insight into the relationship between rational and moral agency. The worry is twofold. The first worry is that the social conception of the person is already “moralized” and thus does not provide the desired insight into the relationship between rationality and morality. The second worry holds that it is a deep but contingent fact about human beings that we desire meaningful connections with others, and that such desire is not part of our rational nature as such. I respond (again following Hampton here) that rationality should be cashed out in terms of the pursuit of objective value, and that there is indeed a rational requirement to fill our lives with meaningful activities and relationships.

14:00-17:00 Session S83: Histoire de la Logique et Philosophie des Mathématiques/History of Logic & Math; Argumentation
Location: BUCH D221
How We've Misconstrued Medieval Modal Logic (and What to Do about It)
DISCUSSANT: Bryson Brown

ABSTRACT. There have been many attempts to equate the influential modal logic of John Buridan (†1358) with modern modal systems such as T or S5. More recently, commentators have undertaken to furnish a Kripke-style possible-worlds semantics for Buridan's system. Unfortunately, any such undertaking severely distorts the axioms of Buridan's system (or even excludes some of them altogether), as well, of course, as those of later systems based on it. In Buridan's system, modal propositions range not over possible worlds but over possible objects. This apparently minor difference has major ramifications, since it allows Buridan to give more fine-grained axioms for modal propositions of the traditional Aristotelian (A, E, I, and O) sort than can be furnished by any consistent set of conditions on frames. This paper demonstrates as much by showing how two of Buridan's axioms in particular presuppose both reflexivity and non-reflexivity in a single condition on frames.

Buridan’s modal logic, while consistent, is thus not equivalent to any modern system constructed with a Kripke-style semantics. Much recent work on this important aspect of medieval logic will therefore have to be reconsidered. This paper lays some groundwork for that project of reconsideration and suggests some alternative methodological approaches for new research in the field.
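For readers unfamiliar with frame correspondence, the textbook fact at issue (background, not a claim of the paper itself) is that each modal axiom scheme is validated by exactly the frames whose accessibility relation satisfies a matching condition, so no consistent class of frames can both require and exclude a condition such as reflexivity:

```latex
% Standard Kripke frame correspondence for the T axiom:
% a frame (W, R) validates the scheme iff R is reflexive.
\[
  (W, R) \models \Box p \rightarrow p
  \quad\Longleftrightarrow\quad
  \forall w \in W \; (w \mathrel{R} w).
\]
```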

Numbers and Figures, Where Are You? The Aristotelian Critique of the Platonic Separation of Mathematical Objects

ABSTRACT. In books M and N of the Metaphysics, Aristotle criticizes Plato's doctrine that mathematical objects are separate substances. After a reductio of this thesis and a presentation of the problem it purports to solve, namely the existence of mathematical objects, which do not seem to be sensible objects, Aristotle undertakes to refute it positively by setting out his own solution: mathematical objects are quantities abstracted from sensible matter by thought; if they exist separately from the sensible, it is as products of an abstraction, not as substances. My presentation will explain Aristotle's solution in detail and discuss how it solves the problem of the existence of mathematical things while differing essentially from Plato's thesis and avoiding the claims Aristotle faults it for. In the first part, I will explain what Aristotle's proposal that mathematical objects are quantities separated from sensible matter means. I will explain how, by considering only the quantity of sensible things, the mind can construct numbers and mathematical figures by way of abstraction. I will also explain what sensible matter is, from which Aristotle understands mathematical objects to be abstracted, and what, by comparison, he calls intelligible matter, which mathematical objects nevertheless retain as their subject. The second part will examine the nature of mathematical abstraction as such. In particular, it will distinguish this abstraction from other types of abstraction Aristotle considers elsewhere. It will also seek to understand why quantity can be abstracted from the sensible, in which faculty of the soul mathematical objects exist, and why Aristotle calls the matter of mathematical things “intelligible.”
The third part will turn to the problem of the being and truth of mathematical objects, a difficulty directly raised by the view Aristotle develops. If mathematical things are, as Aristotle's solution implies, constructions of the mind, and if, moreover, science cannot bear on non-being, in what sense can mathematical definitions and propositions be said to be true? Aristotle holds that mathematical objects exist potentially, in a particular mode. I will attempt to make sense of this explanation. The conclusion will underline the general criticism Aristotle makes of Plato, which flows from his own solution: Plato failed to distinguish between logical priority and substantial priority. I will argue that it is the absence of such a distinction which, by leading Plato to posit an exact proportionality between being, truth, and certainty, lies at the origin of his theory of Ideas.

The “Appeal to Popularity” Should Not Be Treated as a Fallacy

ABSTRACT. It is common to view appeals to popularity (P is believed by everyone, so P is likely to be true) as fallacious. We argue that this is a mistake and that Condorcet’s jury theorem can be used to justify at least some appeals to popularity as legitimate inferences. More importantly, the conditions for the successful application of Condorcet’s theorem (binary claim, competent judges, epistemic independence) can be used as critical tools when evaluating appeals to popularity.
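The jury theorem behind this argument can be made concrete. The sketch below (my own illustration, not from the paper) computes, via the binomial distribution the theorem relies on, the probability that a simple majority of n independent voters, each correct with probability p, answers a binary question correctly; when p exceeds 1/2, that probability grows with n:

```python
from math import comb

def majority_correct(n: int, p: float) -> float:
    """Probability that a simple majority of n independent voters,
    each correct with probability p, reaches the correct verdict
    on a binary question (take n odd so there are no ties)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# With even modest individual competence (p = 0.6),
# group reliability rises quickly with group size.
for n in (1, 11, 101):
    print(n, round(majority_correct(n, 0.6), 3))
```

The theorem's conditions do real work here: if the voters' judgments are correlated, or if individual competence falls below 1/2, the calculation no longer favours the majority, which is precisely why the abstract treats those conditions as critical tools.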

14:00-17:00 Session S85: History of Philosophy 4
Location: BUCH D216
The Cartesian Other: Intersubjectivity in Descartes

ABSTRACT. The misconception of the Cartesian subject as solely solipsistic has a strong hold on the philosophical tradition and is based predominantly on a reading of the "Meditations" that ignores Descartes' "The Passions of the Soul" and his correspondence with Princess Elisabeth of Bohemia. Alongside the work of the many scholars who argue for a more complex Cartesian subject, this essay will put forward a robust idea of Descartes’ subject as inherently social on both the normative and psychological levels of Cartesian thought. This claim will rest on three major considerations: the structure and style of Descartes’ own writing; the générosité of the ideal Cartesian person; and the use of language as a means of deciphering, and indeed establishing, rationality. Together, these three points set out a rich, but often overlooked, intersubjectivity in Descartes’ work.

Religion Founded in Skepticism: Understanding Hume’s Dialogues through the Lens of the Treatise

ABSTRACT. The pursuit of knowledge, on a Humean picture, must be built on a methodological foundation of “skeptical considerations on the uncertainty and narrow limits of reason” (D 1.8). Naturally, this applies to the pursuit of religious knowledge for Hume, if it turns out that such knowledge is possible. Where does the approach of moderate skepticism ultimately land Hume on questions about God? Scholarship on Hume’s religious philosophy has been remarkably diverse in interpretations and characterizations of his views on God, religion, and the possibility of any kind of justified, positive religious belief. In this paper we take as our concern his considered view on natural religion in particular, making Hume’s Dialogues Concerning Natural Religion the focus of our interpretive project.

We aim to offer a reading of the Dialogues that takes seriously the framing that Hume offers his work in terms of both the dialogue form that the work takes (particularly as characterized by Hume himself in the introductory letter that opens the dialogue) and the methodological approach of skepticism as applied to religion (particularly as suggested by Demea and Philo in Part One). To this end we argue that identifying Hume’s voice in the Dialogues with only one speaker is misguided, because Hume is using this form to explore religion from skeptical, empirical, and naturalistic angles, and all three voices are used to this end. Our main interpretive move will be to suggest a parallel between the final section of the Dialogues and the final section of Book I of the Treatise, which we argue sheds light on Philo’s notorious reversal in Part XII of the Dialogues. Whereas the Treatise offers us explicit epistemology, through which we are meant to understand some important application to traditional religious systems, the Dialogues is an explicit treatment of religious systems of thought, but an indirect thesis on belief formation more generally. Thus we should look for the conclusion of the Dialogues to echo the epistemic conclusion of the Treatise.

We first look at Hume’s own framing of the dialogue, with particular consideration of his justification for the use of the dialogue form and of the discussion of the role of skepticism in a religious education. We then propose a reading of the Dialogues in which Hume’s voice is not located in a single character; rather, the Humean take on natural religion is gathered from the interaction, and even the tensions, of the perspectives presented. We conclude by suggesting a reading of Philo’s reversal that aligns Hume’s resolution of the Dialogues with his resolution of Book I of the Treatise. This resolution, we argue, leaves us with a Hume whose considered view on the possibility of religious belief is not that of the straightforward atheist, but rather leaves room for a kind of naturalistic and justified belief in divinity. Such an approach to Hume, we hope, contributes to the project of working out a positive contribution to the philosophy of religion in Hume, as has recently been suggested by Andre Willis.

Peirce, Empiricism, and the Pragmatic Maxim

ABSTRACT. It is hardly novel to read Charles S. Peirce’s pragmatic maxim as constituting an empiricist constraint on objective meaning akin to the logical empiricists’ verificationist criterion of meaning. Recently, however, some prominent interpreters of Peirce have called the comparison into question. I reply to their misgivings, first showing how Peirce uses empiricist premises to ground his early empiricist formulation of the maxim, and then arguing that even in his later writings, where he often formulates and argues for the maxim differently, and where he even seems at times to minimize how central the maxim is to meaning, Peirce remains committed to a reading of the maxim on which the objective meanings of our statements are exhausted by their implications for experience, as well as to deploying the maxim to rule out supraempirical metaphysical statements as lacking sense.