In this book, I explore similarities and differences between morality and mathematics, realistically conceived. I argue that our mathematical beliefs have no better claim to being self-evident or provable than our moral beliefs. Nor do our mathematical beliefs have a better claim to being empirically justified than our moral beliefs. It is also incorrect that reflection on the “genealogy” of our moral beliefs establishes a lack of parity between the cases. In general, if one is a moral antirealist on the basis of epistemological considerations, then one ought to be a mathematical antirealist as well. And, yet, moral realism and mathematical realism do not stand or fall together. Moral questions -- or, at least, the practical ones at stake in moral debate -- are objective in a sense that mathematical questions are not, and the sense in which they are objective can only be explained by assuming antirealism. It follows that the concepts of realism and objectivity, which are widely identified, are actually in tension. I conclude that the objective questions in the neighborhood of questions of logic, modality, grounding, nature, and more are practical questions as well. Practical philosophy should, therefore, take center stage.
ABSTRACT. Malebranche holds that the visual experience of size is restricted to representing objects as bigger or smaller than the perceiver. This paper reconstructs two arguments for this thesis. The first argument turns on a thought experiment comparing our visual experience of size to the experiences had by the denizens of miniature and giant worlds, while the second depends on the biological function of the senses. In addition to reconstructing these arguments, this paper draws out an important implication of this thesis: namely, that visual experience implicitly represents the perceiver as having dimensions, and, hence, as an extended thing.
ABSTRACT. Readers of Mary Astell may easily overlook the role of the body in her account of virtue. Indeed, when asking about Astell’s views of the body and embodied agency, one’s thoughts are liable to be carried to her criticisms of those who inordinately value the beauty of their body to the detriment of their minds. Yet, despite these initial associations, Astell clearly maintains that human beings are a union of mind and body, and that the due regulation of both, if not necessary, at least assists one in achieving those goods that Astell regards as ultimate ends, namely, wisdom, virtue, happiness, and salvation. While the first part of A Serious Proposal to the Ladies opens with pointed criticisms of the body’s corruptibility, challenging the way women inappropriately value their bodies, the second part of the work adopts a more measured and nuanced evaluation of one’s relationship with one’s body. My goal in this paper is to articulate Astell’s account of what it is to exercise agency as an embodied creature, or, in Astell’s language, what government of the body consists in, with a particular focus on Astell’s account in A Serious Proposal. In so doing, I aim to elucidate Astell’s claim that “the true and proper Pleasure of Human Nature consists in the exercise of that Dominion which the Soul has over the Body” (Astell, 210). I will argue that Astell thinks that a certain amount of concern for the body is instrumentally valuable in cultivating virtue and promoting those ends that Astell regards as intrinsically valuable. By the end of the paper, it will be clear that Astell thinks properly valuing one’s body involves governing the body, or rather, properly exercising embodied agency.
Condillac on Being Human: Control and Reflection Reconsidered
ABSTRACT. Why do humans have reason, while animals do not? This question has long been answered by claiming that humans have rational souls and, because of this, an innate faculty of reason. Condillac breaks with this tradition by arguing that humans start to develop reason at precisely the moment at which they discover signs and learn to control their thoughts through these signs. Commentators like Hans Aarsleff and Charles Taylor believe that the discovery of signs is enabled through the presence of a special human capacity: the capacity to reflectively relate to what is given in experience. The problem with this interpretation is that it returns Condillac to a form of innatism from which he was keen to escape, for it assumes that human minds are reflective as a consequence of their original endowments. This paper sets out to offer an alternative interpretation that does not fall prey to the charge of innatism. It argues that for Condillac the capacity to reflect is not simply given, but arises as a result of contingent circumstances encountered in one’s experiences with others. This interpretation not only does justice to Condillac’s sustained effort to conceive of humans from the point of view of their embodied existence, which, as in the case of other animals, is shaped by their interactions with the world. It also explains why many French Enlightenment authors who were inspired by Condillac’s work defended the claim that the cultivation of reason requires a Lockean programme of experiential learning.
ABSTRACT. In Plato’s Symposium, it is not immediately clear what readers are to make of the apparent tension between the description of virtuous love in Socrates’ speech and the portrayal of corrupting and vicious love in Alcibiades’ speech. I will contend that interpretations of the Symposium should take seriously both the account of love that Alcibiades provides as well as the account of love in Socrates’ speech. I shall argue that Plato is trying to make a claim about the connection between the virtuous love described by Diotima and the vulgar love of Alcibiades and that for Plato love is dual-natured.
ABSTRACT. This paper explores the kinds of relationships with the truth that a speaker may have, making use of various Plato texts and Harry Frankfurt's 'On Bullshit'. I ask whether a sophist (as Plato conceives them) falls under Frankfurt's definitions of liars or bullshitters, and further explore Plato's various forms of lying to see if they correspond with Frankfurt's theory and the sophist as well. It turns out that the sophist's relationship with the truth is more complicated than Frankfurt's theory can account for.
Tomb and Prison: Plato on the Body as the Cause of Psychic Disorders
ABSTRACT. The Timaeus’ discussion of psychic illness begins with “whereas the diseases of the body happen to come about in the way just described, the diseases of the soul come about through the body in the following way” (86b). The Greek text is ambiguous, however, and no English-language translator agrees that this is what the text says. They all (e.g., Jowett 1875, Lamb 1925, Zeyl 2000, and Waterfield and Gregory 2009) render it as something like ‘those diseases of the soul that are due to the body arise in the following way’, which suggests that not all psychic disorders are caused by the body. (Cornford 1937 translates the text in the same way that I do, but argues anyway that "not all mental disorders are solely due to bodily states".) In this paper, I shall argue that Plato believes that the body is the sole cause of psychic disorders.
The motivation for the more popular translation is, overwhelmingly, that the claim that the body causes all psychic disorders does not seem to agree with what Plato says elsewhere; moreover, it might be seen as an implausible philosophical view in the first place. I begin by arguing that the view I see at 86b is consistent with the theories of other dialogues, such as the Phaedo: there, we learn that the soul, itself by itself, contemplates the Forms (79d), and that this activity is contrasted with what the soul does according to the body (94b). The Timaeus is intended to tell the reader how the body disrupts the soul’s activity. It is also important for the theory of reincarnation, endorsed in the Timaeus, Phaedo, and other dialogues, that the body impede the soul’s function.
The Timaeus explains all psychic disorders in terms of a disruption of the motions of the circles of the same and different that make up the rational kind of soul. The most pressing dangers are the vapors produced by bile and phlegm, and the violent streams of sensation (including sense-perceptual pleasure) and nutrition. What we begin to see as we enumerate the causes of psychic disorders is that there is no room for, or reason to believe that Plato posits, any extra-bodily causes. The contrast between the world-soul and the human soul relies on there being nothing outside the world that can impinge on the world-soul’s motions. This suggests that the reason our souls get disordered, whereas the world-soul is ceaselessly intelligent, is that there are external causes that impinge on our soul, and Plato’s list of those at the end of the Timaeus is exhaustive.
I shall conclude by considering an alleged counter-example: Plato also says that bad, or non-existent, education harms the soul. I shall argue that Plato envisions education as corrective of the damage done to the soul by bodily forces: bad education harms the soul by leaving it in its ruined state, not by itself causing particular disorders.
Pain as the Scaffold of the Lifeworld: Some Reflections on Pāli Buddhist Philosophy
ABSTRACT. In this paper, I explore how our experience of pain and suffering structures our salience map of the world. The Indian Buddhist philosophers have strong views on this issue because, according to them, it is our collective and suffering (dukkha)-afflicted intentions (cetanā) that constitute the boundaries of conditioned existence (saṃsāra). From a contemporary standpoint, I analyze how these kinds of transcendental views on pain and suffering, as constructing the boundaries of the human lifeworld, might interact with Klein's (2015) homeostatic imperativism about pain. I am deeply sympathetic to the view that pain is a homeostatic affect (rather than a sensory or emotional affect), but am skeptical of the view that the content of the pain imperative tells us nothing about what the world is like, this latter claim being one of the entailments of pure imperativism about pain. Instead, I argue that if a pain imperative tells us something about us as embodied subjects experiencing the pain, then we also learn something about the world, about its affordance structure, by being related to it painfully.
ABSTRACT. In this presentation I use a model of concept formation drawn from the Indian Buddhist philosopher Dharmakīrti to sketch a conception of nonconceptual experience that is novel to the contemporary conceptualist versus nonconceptualist debates in the philosophy of mind. I apply the conception to the experience of physical pain and explore its implications for cognitive science using recent behavioural and neuroscientific studies of the effects of Buddhist meditation practices on the perception and experience of pain.
ABSTRACT. This paper engages Klein and Scarry on the relationship between pain and selfhood from a cross- cultural perspective. I bring in the works of the 10th-11th century Śaiva polymath Abhinavagupta to interrogate the link between pain and the making and breaking of one’s sense of self. Although Abhinavagupta draws heavily on Buddhist theories of how determinate experiences are created, for him interrogating one’s pain reveals not the lack of an essential self, but rather the ways in which one’s conventional sense of self—of myself as a subject set apart from everything else—is but a sliver of an infinitely larger field of consciousness that contains the potential for any and all experiences. Delving into the experience of pain can lead to a dissolution of one’s sense of oneself as a limited subject by revealing the presence of joy even in the midst of pain. I argue that Klein’s homeostatic theory of pain, by which pain issues an imperative to preserve the integrity of one’s embodied self, provides a framework for understanding why pain and selfhood are so intertwined. Scarry’s exploration of the role of intentionally inflicted pain in destroying the self indicates how pain’s imperative to maintain one’s selfhood can be inverted by the structure of torture. I argue that both of these theorists powerfully articulate the role of pain in making and breaking the self, but that they mis-value pain by seeing maintaining the self as good, and breaking the self as bad. Just as preserving one’s sense of self can intensify suffering by locking one within an all-encompassing experience of pain, dissolving the self can be positively transformative depending on the context of the intervention.
ABSTRACT. Motivated by Noam Chomsky’s “rocks and kittens argument,” I argue that whatever some meanings are, they appear to have a massive prelinguistic dimension. I begin by addressing Michael Dummett’s questions regarding the possibility of theories of meaning by suggesting that we do all have, minimally speaking, a sense of what meanings are, which justifies the search for a theory. I also propose that a theory of meaning that relates lexical concepts with internal representations of the sort internalists like Chomsky, James McGilvray, and Paul Pietroski posit, is in line with recent studies on infants and nonhuman animals.
ABSTRACT. This paper argues that even artifactual terms are covered by an externalist semantics, and that content externalism does not require a well-worked scientific account of a certain class of objects. Instead of supposing that we require a science of artifacts to have an externalist account of artifactual terms, this paper argues that we only need to have a certain story about the sorts of counterfactual judgments into which such terms may enter.
The upshot is that there is no principled distinction between singular terms that refer to artifacts and singular terms that refer to natural kinds. By way of argument against Jerry Fodor's pessimism about such an extension of externalist semantics we can also show that there is little to support the view that so-called "narrow content" is immune to content externalist arguments.
ABSTRACT. Animalism, the view that human persons are identical to human animals, is typically taken to conflict with the intuition that a human person would follow her functioning cerebrum were it to be transplanted into another living human body. Some animalists, however, have recently called into question the incompatibility between animalism and the “Transplant Intuition”, arguing that a human animal would be relocated with her transplanted cerebrum. In this paper, we consider the prospects for this cerebrum transplant-compatible variant of animalism, which we call “Transplanimalism”. After presenting its account of three related thought experiments and outlining its advantages over Standard Animalism, we raise a final worry for Transplanimalism. Here we introduce a new thought experiment that pushes Transplanimalism into surprisingly counterintuitive results, results not shared by Standard Animalism. As a result of this thought experiment, Transplanimalism’s superiority over Standard Animalism becomes less clear.
Is totalitarianism theory a useful tool of historical scholarship? On the relation between historiography and social science theories
ABSTRACT. Harold Kincaid and Aviezer Tucker propose two divergent positions on the relation between historiography, history, and social science theories. Based on his distinction between historiographical interpretation and scientific historiography, and the paradigmatic role of interpreting surviving documents as evidence, Tucker argues that scientific historiography seldom needs social theory for explaining present traces by the past events that caused them (2004, 2007). I agree with Kincaid that Tucker’s position invites many interesting counter-examples (2009), and, motivated in part by Tucker’s recent theory of economic and social phenomena in post-communist Eastern Europe (2015), I put his position to the test by focusing on the relation between totalitarianism theory and the recent historiography of Eastern European communist societies. I contrast Tucker’s multi-layered position on historiography to those emerging in the debates concerning the explanatory role of totalitarianism as an ideal type, to which M. Fulbrook (1997, 2011a), J. Kocka (1999, 2013) and K. Jarausch (1999) have made important contributions. The contrast is illustrated vividly by the respective positions of Tucker and Fulbrook. While Tucker insists that “from an external, philosophic-epistemic, perspective not much has changed in historiography since the introduction of the Rankean paradigm” (2007), Fulbrook stresses the various perspectives on individual agency in contemporary historiography of world history events, such as the Holocaust, or the collapse of the Eastern European communist system (2005, 2011a). While Fulbrook presents the totalitarian model as a methodological dead-end for future students of communist societies (2011b), Tucker proposes a new version of totalitarianism as a theoretical framework intended to explain economic and social phenomena he identifies as shared by most post-communist European societies.
I argue that Tucker’s new model of totalitarianism applied to the evolution of post-communist societies, such as those of Poland or Romania, does not have enough conceptual resources to respond to the type of critique Fulbrook and her colleagues provide for the application of its previous versions to historiography. Despite its intuitive appeal when it comes to discriminating between the political regimes of Spain or Greece and those of the countries of Eastern Europe, his concept of late-totalitarianism falls short of the requirement for empirical adequacy, e.g., when it comes to historiographical analyses of the university system across Eastern Europe (Sadlack 1991, Connelly 1999). Tucker’s explanation of the changes in the system of property rights in terms of Pettit’s 1996 notion of freedom as antipower does not clarify on what empirical sources it is premised, and thus does not allow for contrast with Fulbrook’s or Kocka’s notions of individual agency during communism. Tucker’s model seems introduced to denounce and blame non-democratic regimes rather than to explain the conditions under which, arguably, the former communist elites transformed their alleged political power into property rights, or to help understand the limits of such transformation. Conversely, the conceptual resources social historians introduce in order to identify new historiographical sources and to accommodate the empirical evidence (modern dictatorship, mass dictatorship, history of everyday life, history of subjectivities) come into conflict with Tucker’s Rankean (static) model of history as a discipline.
Taking responsibility for injustices at the global level: from the right to justification to epistemic accountability
ABSTRACT. Following a relational egalitarian approach to the global context, wherever relations of political rule and social cooperation exist, all the agents subjected to these relations should relate to one another as equals, regardless of whether or not they belong to the same political community. As such, to promote just transnational relations we need more than a simple redistribution of goods to address global injustices. We also need to ensure that the common binding norms that regulate relations between agents (individuals or political communities) are justifiable to all those who are subjected to them. In this talk, I will discuss how this relational egalitarian approach to the global context informs us not only about the principles that should guide transnational relations but also about how a particular agent should take responsibility when confronted with an unjust situation.
The argument proceeds in two steps. First, this right to justification will be discussed. This right is notably defended by Rainer Forst. For him, human beings possess a particular moral status considering that they are rational beings who possess the faculty of answering for their beliefs and actions and who can, consequently, provide and demand justifications. Hence, the norms that govern social interactions should be those that are reciprocally and universally justified. A reciprocal justification is reached, ideally, when all individuals are able to recognize themselves as co-creators of the norms that bind them together.
Second, while Forst focuses on the structural conditions necessary to equalize social and political relations to actualize this right, it will be argued that the right to justification also grounds a form of epistemic accountability. When we are confronted with injustices at the global level we should not simply aim to provide justifications and develop minimally inclusive institutions, we should also recognize that we are accountable to the perspectives of the individuals and communities that are subjected to unjustifiable norms or who are marginalized in a given political or social context. The perspectives of those who suffer from unjustifiable norms and unjust political and social contexts have preeminent importance when we aim to correct these situations: it is their story, properly told, that will be decisive in finding out what justice demands.
Consequently, taking responsibility accountably in the global context has at least three dimensions: 1) agents should be accountable to all those with whom they share a common scheme of cooperation or who are affected by their actions and inactions; 2) agents should be accountable for building political relationships “that bring marginalized people into political equality and undermine the exploitation of political inequalities”; and 3) agents should be accountable for the way they take responsibility for a given injustice. More precisely, the right to justification entails that agents should address global inequalities from the perspectives and learnings of those who are subjected to unjustifiable global norms.
ABSTRACT. Rawls’s (1999) difference principle captures the intuition that, sometimes, paying the talented a wage premium can be in the interest of the least advantaged members of society. The premium incentivizes the talented to contribute more to society in ways that benefit others. Rawls emphasizes that justice thus conceived is compatible with economic efficiency, since it merely selects one of many Pareto optimal outcomes.
A powerful critique of the difference principle has been put forward by Cohen (2008). He argues that incentive payments are unjust, because the individual who receives them could unilaterally decide to do without. Under the Rawlsian assumption of a well-ordered society, where members have internalized the difference principle and its egalitarian ethos, it is inconsistent for individuals to hold out for incentive payments. While Cohen argues that incentive payments sanctioned by the difference principle are unjust, he concedes that it might be “expedient” or, in other words, efficient, to pay them. Thus, contrary to Rawls, he believes that justice and efficiency do conflict.
This paper agrees with Rawls that justice and efficiency are compatible, but also agrees with Cohen that incentive payments are unjust. How so? The standard conception of efficiency in economics is static, that is, it analyses economic affairs given a number of parameters. These parameters include the labour supply preferences of individuals. I have argued elsewhere that taking these preferences as a given is a mistake (Dietsch 2018). Here, I expand on this argument in the context of incentive payments by putting forward a dynamic conception of efficiency.
The effect of a reduction in wages on labour supply depends on the baseline we use for this assessment. If we ask whether the neurosurgeon who earns $500 000 under the status quo would work less if paid “only” $400 000, the answer may well be yes. However, if we ask whether the medical student and potential neurosurgeon would change her career choice and future work effort if her potential earnings down the line were $400 000, while the ratio to salaries in other available professions stayed the same compared to the status quo, it seems plausible that the answer is no. In other words, soliciting the same labour supply is possible with a more compressed wage schedule. Thus, with Cohen, incentive payments are unjust, but, pace Cohen, on a dynamic conception of efficiency abolishing them does not conflict with efficiency.
One important objection to this argument states that while such a more compressed wage schedule might be possible in principle, it is not feasible in our society here and now. The paper discusses two possible responses. First, given empirical studies showing the labour-supply of high earners to be relatively inelastic with regard to wages (Saez et al. 2012), one might conclude that, even under static conditions, the practical need for incentive payments is often overstated. Second, gradual changes in income distribution, or in the progressivity of the tax schedule, can increase the feasibility of a more equitable income distribution over time.
ABSTRACT. As recently raised by both Jennifer Frey and Matthias Haase, the work of Philippa Foot leaves unaddressed the question of how moral goodness—as a form of natural goodness—can be both objective and third-personal and normatively generate good action when rationality involves the capacity for choice. Put in other terms, there exists a lacuna in the tradition in the form of an explanation of how claims about human form are involved in deliberation (euboulia) as the characteristic activity of a human, and therefore rational, life. In this paper, drawing heavily from Rosalind Hursthouse and Gavin Lawrence, I develop and advance the view that perfected practical wisdom entails a mastery of concepts that are involved in human life. That is, the correct understanding of a concept entails recognizing its role within a flourishing human life and so is necessarily related to a grasp of human form. I then argue that such concept mastery involves the acquisition of empeiria or experience, which is necessarily understood and grasped in the first person. Consequently, claims about human form generate action because, for us, they are grasped from within a human life. That is, claims in the third person about human form are motivationally efficacious when they are apprehended in the first person—when the agent recognizes “humans do such and such” as “the life form that I bear does such and such”.
ABSTRACT. This paper argues against Ruth Chang’s voluntarist understanding of commitments. Instead, it argues for a non-voluntarist conception of commitments as attitudes that are temporally stable, reasons-sensitive, rationally optional, and function in deliberation in such a way that they restrict what actions and attitudes are rational for oneself.
According to Chang, our commitments towards things and people (e.g. committed relationships, commitments to certain vocations) are voluntary. They are voluntary, Chang argues, in the sense that committing is an act of will. Chang maintains that committing is required for cases of parity. Two options are on a par, Chang claims, when neither is better than the other, they are not identical, yet they are not incommensurable. In cases of parity, the reasons in support of different options have “run out”, and so committing is the only way of making a rational choice. Committing, Chang says, is an act of throwing our will behind an option, and thereby giving oneself a reason. One thus bootstraps oneself into being rational.
The paper presents two challenges to Chang’s view. First, voluntarism seems to rest on a false dichotomy between action and passivity. While sometimes we may be said to perform mental actions (e.g. when choosing), this does not mean that mental states that are not action-states are passive. Beliefs, for instance, can be understood as active states (e.g. believing, or holding a belief). If this is the case, then perhaps commitments can also be understood as active states that are not action-states. A second objection is that if voluntarism is true, then commitments end up unable to serve an important function in our lives – that of rational restriction (i.e. restricting what actions and attitudes are rational for oneself). If voluntary commitments can fail to serve this function, then this counts against their being commitments at all.
The paper then turns to outline a non-voluntary conception of commitments. According to this conception, commitments are temporally stable, reasons-sensitive, rationality-restrictive, strongly held attitudes. Commitments must be temporally stable if they are to function in the right way in one’s deliberation. In addition, they must be reasons-sensitive, or they cannot be rational. They are restrictive in the sense that they place limits on what actions and attitudes are rational for oneself, and they also place rational requirements on one’s deliberation (this is opposed to Chang’s conception of commitments as reason-giving). Finally, the paper maintains that it is best to construe commitments as strongly held in the sense that the only state-given reasons (i.e. reasons that pertain to the goodness of the very having of these attitudes, as such) that justify taking proper steps to revise them are reasons of coherence with other commitments. The paper outlines how such commitments satisfy all the requirements set out by Chang, and how this conception of commitments avoids the pitfalls of Chang’s view.
ABSTRACT. This work argues for a methodological principle, Explanatoriness, which holds that if a moral term admits of multiple meanings and/or admits of multiple definitions, a theory of the term that can account for insights in competing conceptions is, all-else-being-equal, preferable to alternatives. It argues that Explanatoriness is necessary to have a topic of moral inquiry (in many cases), assists theory choice, and helps ensure value theories are action-guiding. It further argues that Explanatoriness is supported by the reasons to conduct conceptual analysis in value theory, the norms of ordinary language philosophy, and the persistence and value of real-world contingencies.
Moral Pluralism, Disagreement, and Informed Consent
ABSTRACT. This paper explores conceptual issues in responses to moral pluralism and value-based disagreements in bioethics contexts. By "moral pluralism," I mean the idea that there can be multiple moral views or sets of moral beliefs that are, in some sense, equally valid, rational, or well-taken. Drawing on the work of Patricia Marino, Leigh Turner, and Samantha Hershey, I articulate two specific manifestations of pluralism; I then use these to critically analyze bioethics proposals of Donald Ainslie and Thomas Engelhardt.
In a 2015 book, Marino argues that in contexts of value pluralism -- in which we recognize multiple values such as benevolence, justice, honesty, liberty, and fidelity -- some disagreements arise because while we roughly share values, we direct and prioritize those values in different ways. In such contexts, this can lead to a kind of moral pluralism in which there can be multiple sets of moral beliefs that are internally coherent yet disagree with one another.
In a 2003 paper, Leigh Turner develops cross-cultural examples to argue that the social meaning of concepts like truth-telling and the prevention of suffering can differ widely from one community to another. These variations, reflecting different ways of valuing and making judgments, are also a manifestation of a kind of moral pluralism. Turner explains that in some world views, knowing the truth can itself be seen as harmful, and it is expected that others will bear the burden of decision-making on behalf of those who are ill or in pain.
In proposed responses to the fact of moral pluralism, Ainslie and Engelhardt each suggest expanding the range of cases in which we give people a right to choose for themselves. I argue that value-based and cross-cultural disagreements pose challenges for Ainslie's and Engelhardt's accounts. Disagreements arising from value pluralism can be disagreements about when, and how, the freedom to choose associated with informed consent is appropriate, so that informed consent can itself reflect a value-imposition. The possibility of roughly sharing values while disagreeing about conclusions complicates Engelhardt's distinction between moral friends and moral strangers. And as Samantha Hershey has argued, Turner's examples suggest specific ways that the truth-telling and choice associated with informed consent may involve the imposition of values.
I conclude that the challenges associated with moral pluralism transcend the solutions offered by an expansion of informed consent.
References:
Ainslie, Donald. "Bioethics and the Problem of Pluralism," Social Philosophy and Policy 19 (2002), 1-28.
Hershey, Samantha. "Ethical Pluralism and Informed Consent in Canadian Health Care: Exploring Accommodations and Limitations." Master's thesis, University of Waterloo, 2017.
Marino, Patricia. Moral Reasoning in a Pluralistic World. McGill-Queen's University Press, 2015.
Turner, Leigh. "Bioethics in a Multicultural World: Medicine and Morality in Pluralistic Settings," Health Care Analysis 11 (2003), 99-117.
ABSTRACT. It seems that we can get deeply indignant about some cases of deceiving another person into sex, but not others. For example, impersonating someone’s spouse for sex seems egregious, while lying about where one went to university for sex seems less so. This difference may indicate that cases like the former involve a serious wrong while cases like the latter involve a far milder wrong. This indication is supported by the Lenient Thesis, which holds: “it is only a minor wrong to deceive another person into sex by misleading her or him about certain personal features such as natural hair colour, occupation, or romantic intentions” (Dougherty, 718). Accordingly, deceiving another person into sex by misleading that person about these run-of-the-mill personal features is only a minor wrong.
Contra the Lenient Thesis, Dougherty argues that even with run-of-the-mill deception, deceiving another person into sex is seriously wrong, since deception about deal-breaking features (run-of-the-mill or not) vitiates the consent for sex. More precisely, when the consenter “forms a false belief about a deal breaker” and the deception conceals the presence of this deal breaker, her consent is vitiated (731). These deal breakers are entirely up to each consenter to determine for herself, and they make a decisive difference in her decision to have sex. When misled about any deal-breaking feature of the person to whom she is trying to give her consent, the consenter’s will is insufficiently involved in her agreement to have sex with that person for the sex to count as consensual (728). Dougherty says it follows from his view that “deceiving someone into sex would be in the same moral ballpark as having sex with an unconscious person” and then challenges: “if others wish to reject this rough moral equation, then I pass the challenge to them to find further moral differences” (743).
In this paper, I take up a version of this challenge by pointing out some interesting and genuine cases of deceiving someone into sex in which either the deceived consenter validly consents to sex despite being deceived about the presence of a deal breaker, or the given consent is vitiated by deception that does not conceal a deal breaker. In particular, I look at situations where someone plays no role, or only a nuanced one, in deceiving the consenter into sex. I claim that Dougherty’s account, as it stands, misjudges whether the relevant consent in these situations is vitiated by deception, and thus whether someone has culpably deceived another into sex. These cases motivate revising Dougherty's view into a novel account that, unlike Dougherty's, claims (a) that in many cases of deceiving someone into sex, the person acquiring consent simply does not vitiate the relevant consent with deception, while still, like Dougherty's, maintaining (b) that deceiving another person into sex, even where the deception is about run-of-the-mill personal features, is indeed seriously wrong. This new account consequently refines how deception vitiates consent for sex and what counts as culpably deceiving someone into sex.
A Higher Standard of Sexual Consent: Linking Consent and Blame
ABSTRACT. What duty do we have not to deceive our sexual partners, and how far does this duty extend? This question has important implications for personal relationships and the ethical obligations they entail. A strong obligation to avoid deception would modify current social practice by requiring potential sexual partners to tell the truth for consent to be valid. One such model of consent (Dougherty 2013; Bromwich and Millum 2018) holds that deception always invalidates consent when it misrepresents material conditions of the sexual encounter, that is, conditions such that, had the consenter known the truth, she would not have consented. One problem with this approach, however, is that it separates consent and blame (Dougherty 2013): it is possible to have failed to get proper sexual consent, gone ahead with the sexual encounter, and still be blameless. In this paper, I argue that separating consent and blame results in a concept of consent on which the wrong is located not in the violation of consent itself but in some further wrong. That is, separating consent and blame makes it difficult to say that non-consensual action is wrong on its own. I argue that we want our concept of consent to track a primary wrong, namely that violating consent is wrong in itself (Owens 2011; Ferzan 2016). To do so, sexual consent and blame must be conceptually connected. I propose modifying the account (Dougherty 2013; Bromwich and Millum 2018) on which deception always invalidates consent: on my view, deception invalidates consent when it is a failure of disclosure (Bromwich and Millum 2018). This modification shifts the obligation from the person giving consent to the person seeking consent and thus allows more room for blame. Finally, I suggest that the level of disclosure (and thus what one can be blamed for) tracks the seriousness of the type of consent.
Since sexual consent is one of the most serious kinds of consent, it requires the highest level of disclosure. So, this paper offers a model of sexual consent on which we are obligated to not deceive potential sexual partners and on which failures to do so are blameworthy.
As Much as Possible, as Soon As Possible: Getting Negative About Emissions
ABSTRACT. This paper will be a report to the philosophical community on a growing policy debate in climate science about the viability, both technical and ethical, of so-called negative emissions. My major aim is to get philosophers who are interested in climate, sustainability and energy to think about this very pressing problem.
ABSTRACT. Immodest agents take their own epistemic standards to be among the most truth-conducive ones. Many philosophers have argued that immodesty is epistemically required of agents, notably because being modest entails a problematic kind of incoherence or self-distrust. In this paper, I argue that modesty is epistemically permitted in some social contexts. I focus on contexts where agents with limited cognitive capacities cooperate with each other.
ABSTRACT. Moral encroachment is the view that moral considerations can bear upon the epistemic standing of our beliefs (Basu, forthcoming (a); Basu & Schroeder, 2019; Fritz, 2017; Gardiner, 2018; Moss, 2018a; Pace, 2011). In particular, considerations that matter morally in a believer’s environment can impact how much evidence is required for her belief to be epistemically justified.
I will argue in this paper that the current picture of moral encroachment provided by the literature has two gaps that should be filled. First, it only recognizes moral considerations of a particular kind: those that center on harms to the belief-target. In doing so, it neglects to address whether other categories of moral consideration can also bear on the epistemic standing of our beliefs, such as considerations focused on the moral interests of third-party bystanders or even the believer herself. Second, it only holds that moral considerations can serve to increase the amount of evidence required for a belief to be justified. Whether moral considerations can also decrease how much evidence is needed for justified belief, however, remains almost entirely unaddressed. We will soon see that there is an important relation between these two gaps: if our model of moral encroachment recognizes a wider range of moral considerations that can serve to counteract each other, we will need an understanding of how the evidential threshold for justification can move not only up, but also down.
My purpose here is not ultimately to take a stance on moral encroachment – I leave that for a later discussion. In revealing and filling these two important omissions, the contribution of my paper is to provide an account of moral encroachment that reflects a more complete and nuanced picture of our moral landscapes. In doing this, I take myself to be helping moral encroachment fulfill its professed aim of showing how what’s morally at stake in a believer’s environment matters to the epistemic standing of her beliefs. This gap-filling is valuable work for proponents and critics of moral encroachment alike: it enables a better assessment of the full consequences of adopting the view.
My paper unfolds as follows. In section II, I unpack the central thesis of moral encroachment and show that the current picture has this restricted focus. I argue in section III that moral encroachment should not only be sensitive to belief-target focused considerations, but also to complex stakes involving believer and third-party centered considerations. I argue in section IV that, furthermore, moral encroachment should recognize not only that moral considerations can increase the amount of evidence required for justified belief, but also that moral considerations can decrease the burden of evidence for justified belief. Finally, in section V, I raise the question of whether moral benefits, and not just moral harms, can have a role to play in moral encroachment.
ABSTRACT. While we regularly attribute to ourselves obligations to know certain things, those obligations often look more like moral or professional or legal obligations, rather than epistemic ones. In general, discussions of epistemic obligations tend to focus on obligations with respect to believing or withholding belief, rather than with respect to knowing. In this paper, I argue that we can make sense of a category of obligations to know which are distinctively epistemic, and which don't reduce to mere obligations to believe. These are obligations to avoid what I call self-inflicted knowledge loss. Insofar as there are ways we can lose a piece of knowledge without losing the corresponding belief, and insofar as there are epistemic norms prohibiting the self-infliction of such knowledge loss, we sometimes have epistemic obligations to know. These are, in effect, obligations to remain knowledgeable, or to retain one’s knowledge in the face of possible knowledge-defeat.
Searching for culture: social construction across species
ABSTRACT. Do any non-human animals have culture? To find out, some scientists have attempted to isolate behaviours or information that are caused and spread by means other than genetic inheritance or ecological factors. However, cultural, genetic and ecological factors are not always isolatable since there is an entangled interplay between them, as in gene-culture co-evolution. The problem is exacerbated by disagreement on what counts as cultural. For example, some define culture in terms of behaviour patterns or information shared within communities via social transmission (e.g. Whitehead and Rendell 2015). Others add cognitive requirements, such as theory-of-mind, which some argue is uniquely human (e.g. Tomasello et al. 2005; Galef 2001). Still others define culture in terms of its human expressions, such as religious rituals, ethnic markers and politics (Hill 2009), thereby making it uniquely human. In ordinary use, culture is a vague term. For example, what constitutes Canadian culture? Does it exist?
I argue that the definitional problem of culture stems from its socially constructed ‘nature’. Cultures are real social kinds, which are socially constructed ideas or objects that depend on social practices for their existence. Importantly, their etiology does not make them any less real, or exclude them from causal processes. Such phenomena can be grouped together as ‘kinds’ according to their causal or constitutive properties or processes, allowing reliable predictions and explanatory power. The facts of the matter for social kinds are determined (in part) by social factors, rather than (only) physical, biological, or psychological factors. I draw on feminist and critical theory on race and gender to make my case that culture is grounded in systems of social relationship. Some feminist scholars characterize gender as the social meaning of sex (Haslanger 2012). I argue that culture is the social meaning of normative practices. If this is the case, animal culture need not be precluded. Animals need not have the concept ‘culture’ to have culture, any more than humans need the concept ‘gender’ to have gender.
If researchers frame questions of animal culture with a focus on social relationality, then they will have a clearer path to recognizing it where it exists. As a case study, I will show how killer whales are cultural beings with socially constructed group-specific norms for communication, diet, foraging, social roles and interactions. Bodies of knowledge, experience and tradition are constructed, embedded and transmitted with meaning throughout these social normative cultural communities.
References
Galef, B.G. (2001). “Where’s the beef? Evidence of culture, imitation, and teaching in cetaceans.” Behavioral and Brain Sciences 24: 335.
Haslanger, Sally. (2012). Resisting Reality: Social Construction and Social Critique. New York: Oxford University Press.
Hill, Kim. (2009). “Animal ‘culture’?” In K.N. Laland and B.G. Galef (Eds.), The Question of Animal Culture (pp. 269-287). Cambridge: Harvard University Press.
Tomasello, Michael, Malinda Carpenter, Josep Call, Tanya Behne, and Henrike Moll. (2005). “Understanding and sharing intentions: The origins of cultural cognition.” Behavioral and Brain Sciences 28: 675-735.
Whitehead, H. and Rendell, L. (2015). The Cultural Lives of Whales and Dolphins. Chicago: The University of Chicago Press.
Novelty and life: towards a possible explanation in Philosophy of Biology
ABSTRACT. Standard evolutionary theory is successful in explaining how existing structures, such as the bone structure of mammals, change over time. It is significantly more challenging to explain how new structures, like eyes and turtle shells, emerge in the first place. Such changes are called morphological evolutionary novelties: they involve the emergence of a new morphological feature that cannot be traced back to its expected phylogenetic origins. In this paper, my aim is to understand whether a general conceptualization of novelty is possible. To do so, I will use the narrower case of how novelty can be explained in biology, more specifically with regard to evolution. This challenge has been taken up in a growing body of literature in biology that focuses on the concept of evolutionary novelty (Brigandt and Love 2012; Erwin 2015; Gayon 2004; Moczek 2011; Pigliucci 2008). The motivation of this paper is to point toward an initial definition of novelty, in the context of a more fundamental problem: the tension between continuity and discontinuity, and between qualitative and quantitative change, in evolution.
My view is that a historical philosophical approach, combined with an interdisciplinary analysis in light of recent work in biology and evolution, can synthesize a contemporary understanding of novelty. There is a fundamentally unique aspect of biology that distinguishes it from the other sciences: its object of study is living beings, as opposed to inanimate matter. This suggests that the causal explanations that traditionally apply in physics and chemistry might have to be modified or understood differently in biology.
I argue that the concept of novelty is a corner case where this is especially evident. It is the forward-looking nature of evolution that makes the question of novelty especially challenging and worth investigating, and in this paper I propose a conceptualization of this notion, through a historical perspective in the first part and a contemporary analysis in the second. I will also raise objections to my claims and respond to them in light of a specific example: the development of wing-like structures in treehoppers.
ABSTRACT. Some authors argue that death does not seriously harm infants, severely intellectually disabled humans, and many sentient non-human animals because they lack certain cognitive relations to future selves. These authors hold quite plausibly that we should care about the extent to which we bear these cognitive relations to future selves, and conclude that death seriously harms an individual only if it disrupts these relations. In this paper I contend that these arguments are unsound, because the sense of ‘harm’ and ‘benefit’ relevant to our reasons of non-maleficence and beneficence to omit ending and preserve someone’s life concern what we should prefer out of care or compassion for her, rather than what she should prefer for herself. I argue that, out of care or compassion for an individual, we should prefer future experiences of goods as long as they are experienced by centers of phenomenal consciousness that are continuous with hers.
ABSTRACT. We should give kids a break. We should not blame or punish them as harshly as we do adults. But why? Many accounts point to reduced culpability, and especially reduced blameworthiness. Other accounts point to the conflicts between blame, punishment, and other goals we have for kids, especially education. In this paper, I argue that we owe kids a break because we owe them a distinctive kind of attention. This paper thus simultaneously advances an argument for the role for attention in blame and an argument for the treatment of young wrongdoers.
The doctrine of double effect (DDE) claims that an action that causes harm as a side effect of promoting a greater good may be justifiable. “Side effect” here means that we do not need the harm to occur in order to promote the good. When, in order to save five people stuck on a track, I pull a lever that switches a runaway trolley to run over Smith, I did not need Smith’s death to save the five. When I push a fat man off a bridge to stop the trolley from hitting five people on the track, I need the fat man to be harmed (it is his girth that stops the trolley). The first case passes the DDE; the second does not. A contractarian, however, cannot avail herself of this analysis. Instead of need and goodness, contractarians rely on the status quo: to disrupt someone’s status quo absent proper consent is, for contractarians, the very meaning of immorality. The fat man’s status quo is obviously disrupted, so his case counts as a bad act. But what of our pulling the lever in the first case? Whether or not I need the lone man on the track to die, I have disrupted his status quo.
ABSTRACT. A full philosophical account of visual imagination must explain the factors which determine what a mental image represents. It must also specify the success conditions for imaginative representations. I argue that a “simulational” view of imagination, according to which imaginings simulate the states of affairs they represent, is well-suited to accomplishing these tasks. I first identify gaps in some existing views of imagination, which provides some takeaways to which a positive view of imagination must be sensitive. I then give my simulational account, after which I argue that it is sensitive to these takeaways.
ABSTRACT. How do we know about other minds on the basis of perception? Answers to this question are often phenomenologically inadequate, either positing implausibly rich perceptual access or circuitous inferences we have good reason to think subjects do not perform. Here I develop these critiques in detail and propose a novel account. Other minds are neither seen nor inferred. Instead, our perceptual access to other minds involves ‘ampliative perceptual judgments’. This view has been overlooked, in substantial part, because of a widely held, but implausible, view about the epistemology of perception—namely, that perception justifies belief by presenting content ‘as true’. Once we reject this assumption, a satisfying resolution to the problem of perceptual access to other minds is revealed. Attention to ampliative perceptual judgments, I suggest, holds substantial interest for the epistemology of perception more generally.
ABSTRACT. The causal exclusion problem suggests that mental properties of events are excluded as causes of bodily movements since bodily movements have distinct sufficient physical causes. The autonomist solution to the causal exclusion problem grants that mental properties of events are excluded as causes of bodily movements, but holds that they nevertheless cause mental effects. Several recent philosophers establish this autonomy result via similar models of causation (Pernu, 2016; Zhong, 2014). In this paper I argue that these models face the problem of Edwards's Dictum: subvening physical properties of effects determine mental properties of effects, thereby excluding mental causes from efficacy. I resolve this problem by appealing to the causal theory of action, according to which those mental properties of effects that are actions necessarily have mental causes.
Book Abstract: In the 21st century, the primary challenge for health care is chronic illness. To meet this challenge, we need to think anew about the role of the patient in health and health care. There have been widespread calls for patient-centered care, but this model of care does not question deeply enough the goals of health care, the nature of the clinical problem, and the definition of health itself. We must instead pursue patient-centered health, which is a health perceived and produced by patients. We should not only respect, but promote patient autonomy as an essential component of this health. Objective health measures cannot capture the burden of chronic illness, so we need to draw on the patient’s perspective to help define the clinical problem. We require a new definition of health as the capacity for meaningful action. It is recognized that patients play a central role in chronic illness care, but the concept of health behavior retards innovation. We seek not just an activated patient, but an autonomous patient who sets and pursues her own vital goals. To fully enlist patients, we must bridge the gap between impersonal disease processes and personal processes. This requires understanding how the roots of patient autonomy lie in the biological autonomy that allows organisms to carve their biological niche. It is time for us to recognize the patient as the primary customer for health care and the primary producer of health. Patient agency is both the primary means and primary end of health care.
Keywords: Agency, autonomy, engagement, empowerment, activation, chronic disease, illness, competence, patient-centered, capability
We are living in bad times. Wildfires rage out of control; the world’s first constitutional democracy is plagued by gerrymandering, voter suppression, and incompetent electoral officials; vaccine skepticism is producing a measles comeback in the Global North; worldwide, nationalism and right-wing populism are on the rise; and the Intergovernmental Panel on Climate Change warns of the possibility of climate catastrophe as early as 2040. To borrow a trope from nerd culture, it feels as if we are occupants of the “darkest timeline”. This panel considers what, if anything, American pragmatism – it bears remembering, a school of thought that came of age between two world wars – offers to help us navigate the current age.
ABSTRACT. Groups like the Heterodox Academy have recently adapted arguments about the epistemic benefits of demographic diversity to advance the notion that campuses should foster “viewpoint” diversity. I explore the respective epistemic roles of demographic and viewpoint diversity through the lens of Charles Sanders Peirce. For Peirce, we need a diverse community of inquirers precisely in order to converge on the final opinion. I argue on Peircean grounds that demographic and viewpoint diversity are not robustly comparable, and thus require distinct analyses. I conclude with some caveats.
Successful Inquiry in Contemporary Democracy: A Plea for (a Return to) a Multidisciplinary Pragmatism
ABSTRACT. In his canonical book Democracy and Education (1916), John Dewey presents democracy and education as two inseparable parts of a single societal process. Once the individual and society are conceived organically, democratic participation and the education of citizens are both necessary for societal symbiosis. Dewey was thus attentive to the psychological, educational, and practical conditions for the success of the democratic system. For him, these conditions of success were, more generally, those associated with the success of inquiry. From a historical point of view, one of the undeniable strengths of classical pragmatism, from Peirce to Dewey, lies in its fundamentally multidisciplinary approach. Yet contemporary pragmatist philosophy, no doubt influenced by the meteoric rise of analytic philosophy from the 1930s onward, seems to have somewhat abandoned this multidisciplinary foundation.
This talk aims to make an important critical observation: pragmatism will be able to offer an adequate analysis of the difficulties and challenges of contemporary democracy only at the price of a return to this original multidisciplinarity. In this presentation, we describe some of these challenges (the influence of social media, cognitive biases, etc.) and show that classical pragmatism was one of the theories best equipped to explain, and potentially resolve, them. We conclude by offering some guideposts for a return to this kind of pragmatism.
Dark Days & The Universal Continuum: Pragmatist Cosmopolitanism for the 21st Century
ABSTRACT. One of the enduring themes of pragmatism is that truth is group work. Knowing better requires knowing together. But in our contemporary world, we stand in a strange moment with respect to knowledge communities: we have both incredible access to information about distant places and peoples, and the formation of what Elizabeth Anderson has called ‘epistemic bubbles’, insular information silos within which agreement is prized and inquiry curtailed. For advocates of free and open inquiry, these may seem like dark days.
The affinity between pragmatism and cosmopolitanism has been richly developed throughout pragmatism's history. Peirce thought logical thinking required us to regard ourselves as “welded into a universal continuum”; James called on us to recognize ourselves as an ethical republic; Addams and Mead developed social ethics for the world of the early 20th century. Contemporary pragmatists – including Anderson, Aboulafia, and many others – have continued to develop the idea of a universal continuum and to argue for the epistemic, moral, and practical necessity of recognizing ourselves as placed within that continuum.
In this session, I have two aims: to develop the epistemic basis for broadening our knowledge communities, and to consider what habits of mind and action this deeply pragmatist form of cosmopolitanism can concretely recommend in our contemporary context.
The Flight From Truth and Lane’s Rehabilitation of Correspondence
ABSTRACT. Anyone paying attention to recent political discourse in the West knows that attitudes towards the concepts of truth and reality have taken a markedly skeptical turn. The same trend is visible, I believe, in the historical trajectory of philosophical pragmatism. I therefore propose to examine that trajectory in search of clues about what has gone wrong and whether anything can be done to rehabilitate these vital notions. In particular, I attempt to rehabilitate the notion that objective truths – truths that correspond to a shared reality – exist and are knowable. My attempt is inspired by Bob Lane’s new book Peirce on Realism and Idealism.
“The Arbitrament of the Big Battalions”: Russell’s Warning and Peirce’s Pragmatism
ABSTRACT. Russell included James’s pragmatism in “the ancestry of fascism,” arguing that it inevitably devolves into an “appeal to force.” I argue that, unlike James’s pragmatic theory of truth, Peirce’s pragmatic clarification of the idea of reality enables an appropriate moral response to fascism and its abuse of truth.
The Unfettering of Human Rights Justification and the Right to Subsistence: Some Comments on the Work of Onora O'Neill
ABSTRACT. The paper seeks to engage the important theoretical work of Onora O’Neill by expanding the scope of justificatory reasons available for grounding human rights in general, and in particular with regard to the human right to subsistence. I argue that O’Neill unduly fetters the justificatory strategies available to her for grounding human rights by endorsing the “mirroring view” of human rights, recently defined by Allen Buchanan as the view that “the standard adequate justification of an international legal right requires showing that there is a corresponding, antecedently existing moral right of the same scope or content.”
Along with Buchanan, I contend that the common tendency of philosophers and human rights theorists to endorse the “mirroring view” obfuscates an important distinction between moral human rights and legal human rights. However, I expand the scope of Buchanan’s justificatory arguments for human rights (in his case, with regard to the international legal human rights system) by arguing that there are many other justifications for human rights, such as economic arguments and arguments based on public health, education, and public safety.
Given the language used in human rights instruments both domestically and internationally, it is not surprising that one is led to believe that these rights are universal and pre-institutional. However, while it may have some bearing on their interpretation, the drafters' language is not determinative of the concepts therein. In fact, the mirroring view can lead to practical problems for the recognition and implementation of human rights both domestically and internationally, since focusing solely on abstract ethical arguments leaves the jurisdiction and content of human rights continually contested.
The right to subsistence is a helpful example to focus on in discussing the conceptual distinction between moral and legal human rights, as it is a positive right whose coherence as a universal, pre-institutional human right is often questioned. And yet it has many justificatory arguments to recommend it, ranging from its being a foundation for the enjoyment of other rights, to arguments from moral obligation, to consequentialist arguments such as those relating to economic benefits, public health, and education. While acknowledging that most justificatory arguments for human rights will be based on morality, the paper argues that there are often additional, broader justificatory strategies available for any human right, and also sketches examples of possible human rights that could conceivably be based solely on non-moral arguments (such as a modified right to subsistence based on economic arguments regarding the money multiplier).
With regard to O’Neill’s important arguments regarding failed states and the agents of justice, I contend that these arguments can also apply to obdurate states (such as Canada with regard to the right to subsistence) and that broader justificatory strategies can promote greater direction and consistency with regard to the recognition and implementation of human rights in obdurate states, and might also recommend agents of justice available to assist failed states.
ABSTRACT. According to a pair of recent papers, the fact that people live lives of unequal length poses serious problems for Rawlsian justice. The problems stem from the fact that anyone who reaches old age will necessarily have lived through younger ages, while not everyone who is young will also live to be old. This means that any resources set aside for the care of the aged create a lifetime inequality between the aged and those who die young: longer-lived people just as such will enjoy a larger lifetime share of primary goods compared to the shorter-lived. It is difficult to see how this inequality could be justified on Rawlsian grounds, because it is difficult to see how such inequality could be necessary to benefit the least advantaged.
My aim in this paper is to sketch a new, broadly Rawlsian response to these concerns. This response begins from the idea that, for Rawls, society is to be conceived as a cooperative venture for mutual advantage. With this in mind, it is important to see that social institutions which provide for the aged are not just a matter of zero-sum redistribution from young to old; in fact, these institutions facilitate positive-sum social cooperation. This is because what these institutions provide is really a form of insurance against the risk of living too long.
Uncertainty about how long one will live is a key challenge in planning for one’s own senescence. Fortunately, humans have developed cooperative strategies for coping with that uncertainty. These strategies involve the pooling of risk. Pooling risk makes possible a reduction in uncertainty because the average longevity of thousands or millions of individuals is highly predictable in a way that a single person's longevity is not. By pooling one’s retirement savings with others, a person can save as if she will live a life of average length, while being confident of a stream of income that will last for however long she lives. Some people will lead longer lives and some shorter ones, but the law of large numbers implies (very roughly) that the over-saving of those who die young will be sufficient to offset the under-saving of those who live longer. This is beneficial because people tend to be risk-averse.
This is essentially how a life annuity or a pension fund works. The fact that a pension adds to ex post inequality is, as they say, a feature, not a bug; the ex post inequality is but a means to the ex ante elimination of uncertainty. Indeed, to insist on ex post equality would amount to a form of levelling down. It would mean achieving equality by worsening the situation of those who prefer ex ante security over ex post equality – which is to say, in my experience, most people. The inequality between those who die young and those who live long is beneficial to the least advantaged insofar as it provides them with a valuable form of protection against risk.
What can space babies tell us about the value of genetic ties?
ABSTRACT. IVG, or in vitro gametogenesis, is the latest reproductive technology that can allow us to extract stem cells from human tissue (e.g., skin cells), convert them to gamete cells, and then use them to produce an embryo (Carter-Walshaw, 2019). This technology can be used to select out ‘undesirable’ genetic traits, treat certain fertility challenges, or help same-sex couples conceive a genetically related child. More controversially, it can also be used to extract stem cells from an already existing embryo and, using its tissue, create gamete cells that will result in another embryo. When applied dozens or hundreds of times, this technology would essentially make it possible to create a series of offspring frozen at the same developmental stage (Appleby, 2018).
My paper is concerned with a thought-experiment in which this technology could be used to create frozen embryos that would be shipped to planets hundreds of years away by spaceship, gestated in artificial wombs, and used to colonize new worlds. In so doing, we are confronted with pressing questions about what kinds of embryos to send and what the relations should be between the embryos. Can they be unrelated to one another, or is there some obligation to send genetically-related individuals? Should they be gestated in any particular order? Should they be expected to be raised together or by one another?
The answers to these questions can provide us with insights into current family-making values and decisions, as well as into the acceptability of anonymous gamete donation—a practice where individuals are knowingly created without a genetic link to at least one of their parents. A well-known rejection of gamete donation has been espoused by J. David Velleman, who has argued that gamete donation is wrong because the offspring of these practices lack the tools they need for adequate “self knowledge and identity formation” that is usually gained via an “acquaintance with people who are like them in virtue of being their biological relatives” (2005: 365). Therefore, he concludes, “children should be raised by their biological parents” (2005: fn362).
Using this thought experiment, I undermine the intuition that there is something uniquely special about a genetic parent-child relationship, and I raise serious doubts about the importance of genetic relationships at all. Following this, I investigate whether the failure of genetic relationships to be ‘objectively’ valuable entails that the valuing of genetic relationships in our family-making practices should be discouraged, and what such discouragement might look like. In other words, I ask whether we should ensure access to socially valuable goods even if they are of dubious independent merit. Finally, I apply my conclusions to the ongoing debate on the permissibility of anonymous gamete donation.
Justifying Aggression, Hostility and Emotional Outbreaks: The Defeasibility of the Duty to Argue Cooperatively
ABSTRACT. The argument from fairness and the argument from justice are traditionally used to support the duty to obey the law. According to the argument from justice, we have a duty to enter and support just institutions and practices capable of regulating our living together (see Waldron, 1993). According to the argument from fairness, we have a duty to do our fair share in a mutually beneficial, cooperative enterprise that we have entered willingly or benefit from (see Rawls, 2001). In argumentation theory, these arguments can be used to support a duty to react to disagreements that we cannot ignore by choosing a cooperative resolution process like argumentation. In addition, once we have entered the argument, we have a duty to adhere to rules capable of making the argument reasons-responsive. This is so because argumentation is a comparatively effective method for arriving at justified conclusions and fair decisions and therefore a just practice for the resolution of disagreements. Additionally, argumentation is a cooperative enterprise and - when it is habitually chosen - it is mutually advantageous (see, e.g. Aikin, 2011).
However, political philosophers preface the arguments from justice and fairness by stipulating that the political community in question is just, and that the institutions creating the laws are fair, unbiased and effective (see, e.g. Dworkin, 2013; Rawls, 2001; Waldron, 1993). Generally, modern philosophers grant that disobedience can be legitimate under conditions of injustice. The duty to obey the law can be justified in ideal theory, but because the world is not ideal, this duty is defeasible. Recently, Candice Delmas has shown that under conditions of injustice, both arguments can be turned around and used to justify disobedience (Delmas, 2018).
Argumentation theory has its own, self-consciously ideal theory. Perhaps the most obvious example is the pragma-dialectical model of the critical discussion (van Eemeren & Grootendorst, 2004). Its rules are meant to guide a disagreement, through argumentation, to a mutually acceptable resolution. Abiding by these rules requires cooperativeness in the form of abstaining from opportunities for winning that would disrupt a reasons-responsive argumentative process. If the pragma-dialectical model is a valid theory of argumentation, then the arguments from justice and fairness support the claim that each party in a disagreement has a duty to enter a critical discussion and to abide by its rules. However, this is only true under the presupposed ideal conditions, which include that the social status, education, individual character, etc. of the arguers do not unequally influence the argument. Yet recent work in feminist argumentation theory and in applications of Miranda Fricker’s theory of epistemic injustice to argumentation has documented that injustice arising from such factors is common in argument (Bondy, 2010; Burrow, 2010; Fricker, 2007; Hundleby, 2013). This suggests that the duty to remain in an argument and to stick to the rules of argumentation is defeasible. Under conditions of injustice, it becomes legitimate to disrupt or undermine the argument through behavior that would otherwise be unacceptable because uncooperative, such as emotional outbreaks, psychological violence, coercion, manipulation, and fallacious arguing.
ABSTRACT. This paper examines the recent discovery of the Higgs boson at the Large Hadron Collider (LHC) and the reasoning that has led to the high confidence in a particular hypothesis about the Higgs boson, the Standard Model (SM) Higgs hypothesis. While the confidence in the SM Higgs is very high, a simple Bayesian analysis falls short of establishing this. What has been confirmed by the LHC data is the Higgs mechanism of spontaneous electroweak symmetry breaking, but not any particular implementation of this mechanism. The paper aims to address the question of how it is that a particular formulation of the Higgs hypothesis is taken to be so highly confirmed.
ABSTRACT. Several formal models of signalling conventions have been proposed to explain how and under what circumstances compositional signalling might evolve, thus bridging the explanatory gap between the evolution of simple communication and natural language. I suggest that these models fail to give a plausible account of the evolution of compositionality because (1) they apparently take linguistic compositionality as their target phenomenon, and (2) they are insensitive to role asymmetries inherent to the signalling game. I further suggest that, rather than asking how signals might come to be compositional, we must clarify what it would mean for signals to be compositional to begin with.
Tools of Analysis in Cognitive Neuroscience: Developed as Techniques, Used as Templates
ABSTRACT. Human neuroimaging research consists in the search for neural explanations of human cognitive capacities. Neural and cognitive phenomena cannot be directly measured in humans. As a result, neuroscientists use behaviour to investigate cognitive processes, and changes in blood oxygenation levels as an indicator of neural processes. The reliance on indirect measures, combined with the causal complexity of cognitive and biological systems, results in a persistent epistemic challenge. In practice, these challenges are overcome through data analysis. The development and use of tools for transforming data are central to discovery and confirmation in cognitive neuroscience. For instance, the uptake of multi-voxel pattern analysis methods made it possible to decode information from the brain (e.g., Tong and Pratte 2012; Davis and Poldrack 2015), and the recent proliferation of network-based accounts of human cognitive function has been driven by the refinement of techniques for conducting network analyses of neuroimaging data (e.g., Pessoa 2014; Yeo et al. 2011). In these cases, analytic methods were developed to facilitate the advancement of a specific theory, and then took on a life of their own as they were encoded into software packages, becoming tools for data interpretation. Data analysis tools are used to isolate data patterns that are interpretable in terms of the cognitive and neural processes underlying imaging data (Wright 2018). The aim of this essay is to examine what makes a pattern in data interpretable, and to articulate what factors influence the meaning that is ascribed to such a pattern. To do so I contrast the epistemic relationships between data analysis tools, their developers, and their users.
I argue that tool developers treat analysis methods as techniques. While they are tuned to isolate patterns of a particular kind, they can be adapted to fulfill other purposes. Developers recognize that analysis tools provide limited epistemic access to phenomena. Their conceptualization of the target phenomena and their intended use of the tool direct decisions made during the encoding of an analytic method or procedure into a software package. Tool users, on the other hand, tend to treat analysis methods as templates: not as flexible strategies for data manipulation, but as a means to isolate patterns in data with prescribed meanings. The form of the template is established by high-profile publications that rely on the method, and by the interface of the analysis tool through which the user interacts with it. I propose that this template-like use of analysis tools transfers epistemic authority from the users to the analysis tools.
I conclude by considering the positive epistemic aspects of analysis tools, including the distribution and transfer of understanding and expertise amongst members of a community, and the negative epistemic aspects of these tools, including the impact of tool uptake on methodological diversity within the community (e.g., Lewontin 1991). In doing so I propose that epistemically productive tools have two central properties: (1) they are flexible, inspiring and allowing for multiple applications, and (2) they are not easy to use.
ABSTRACT. Dennett (2017) offers perhaps his clearest reasoning for a thesis he has long defended, namely that consciousness is a user-illusion. "Our access to our own thinking, and especially to the causation and dynamics of its subpersonal parts, is really no better than our access to our digestive processes" (Dennett 2017, 346). Dennett uses the thesis to undermine a privileged first-person point of view, paving the way for a science of consciousness and ultimately defanging the hard problem. While I do not address Dennett's loftier goals in arguing for a user-illusion, I argue that emerging results about cognitive architecture from studying the connectome show that consciousness must be a user-illusion.
Graph-theoretic analysis of neural connectivity in different species and at multiple scales reveals that the brain has a small-world architecture; "…we can provisionally conclude that enough evidence has amassed to judge that small-worldness is a near universal property of nervous systems" (Bassett and Bullmore 2016, p. 2). Small-world graphs have the clustering properties of regular graphs or lattices, in that neighbouring nodes are mostly connected to each other, and the short path lengths of random graphs, in that they have some long-range connections between nodes. The result for neural architecture is specialized local processing units with connections between them to coordinate and integrate their activity (Bullmore and Sporns 2009, Sporns 2011, Sporns 2012, Bullmore and Sporns 2012, Bassett and Bullmore 2016). Note that the specialized units need not have the characteristic features of Fodorian modules (Fodor 1983), and they may perform multiple functions depending on how they are recruited, consistent with the neural reuse theory (Anderson 2014).
Converging results in studying the neural correlates of consciousness suggest that conscious activity is associated with activation of long-range connections (Dehaene and Naccache 2001, Tononi 2008). The emerging picture is that we are conscious of content sent between specialized processing units. But this content cannot be the information being processed in those units, precisely because those units are specialized to process that information. Other systems are not designed to process that information, as is most clear in the cases of sensory systems and language processing. Specialized information processing systems reporting their current state to other systems, possibly via a global workspace (Baars 1988), to facilitate integrated activity must output their activity in a format interpretable by the other systems, which necessarily leaves out detailed information. Thus, our conscious awareness of our mental processes must be much like our experience of hunger relative to the details of the state of our digestive system; so our sense that we know our mental states and processes is illusory. I conclude by addressing a possible objection: that other networks can process detailed information. In ferrets, for example, auditory cortex can process visual information when fed surgically rewired inputs (Roe et al. 1992), contrary to my key premise. I respond that the new processing requires significant retraining of the network, hence the example does not show the original network could process information for which it was not specialized.
ABSTRACT. Standard accounts of intentional action describe its control as consisting of both an
intentional component, characterized by guidance via an agent’s personal-level psychological states and processes (e.g., intentions and perceptual states), and an automatic component, implemented by the sub-personal states and processes of the motor system (e.g., motor commands and sensory feedback). Call the former agentive control, and the latter motoric control. On such dichotomous accounts, the distinguishing mark of agentive control is often thought to be its flexibility (e.g., Shepherd 2014). Recently, however, complications have arisen for this intuitive picture. For example, Fridland (2017) has argued that in the case of skilled action, motoric control is “intelligent all the way down”, and should not be construed as ballistic, rigid, or invariant, but rather as exhibiting its own form of flexibility. If this is right, then the standard way of drawing the distinction between agentive and motoric control is rendered unacceptably murky, and perhaps even at risk of collapsing altogether. As such, a proper account of the notion of flexibility which lies at the heart of the standard account is required.
To date, though, no systematic account of flexibility and the precise role it plays in distinguishing between intentionally guided action and automatic behaviour has been provided. In this talk, we will aim to provide such an account. Our overarching claim is that the notion of flexibility as it pertains to action control is a multifaceted one. Thus, in order to understand the difference between agentive and motor control, one must understand the different forms of flexibility that underlie them.
Our talk will be structured as follows: First, we provide a brief overview of recent debates concerning the flexible nature of automatic motor control. Then, we summarize Shepherd’s (2014) account of intentional control in which the concept of flexibility plays a central role. On this view, flexibility consists in an agent’s capacity to reliably bring their behaviour in line with their intention across a “sufficiently large and well-selected” range of counterfactual scenarios. We argue that, while this serves as a useful general framework for analyzing the notion of flexibility, it fails to provide us with a deeper understanding of the specific capacities that are revealed by consideration of the relevant counterfactual scenarios. Next, we build on this picture by offering an account of flexibility that allows that it can come in degrees, and breaks it down into three broad dimensions: (i) input flexibility, which concerns the range of sensory inputs that a system has the capacity to be sensitive to, (ii) output flexibility, which concerns the range of outputs that a system has the capacity to produce, and (iii) control flexibility, which concerns the range of internal states and processes that can mediate between inputs and outputs. We will analyze and illustrate each of these three dimensions. Finally, we show how our account allows for a more nuanced understanding of the distinction between agentive and motoric control, and briefly explore implications for neighboring debates pertaining to consciousness, skill, and modularity.
The Evolution of the Human Mind: Two Recent Methods of Examination
ABSTRACT. One goal of cognitive science is to understand the workings of human cognition, and one strategy for doing so is to examine the evolution of the human mind. While evolutionary psychology continues to dominate the field (Barrett, 2015), more recent evidence from cognitive science provides a fresh approach for understanding the evolution of the mind. In this paper, I outline two methods for understanding the evolution of the human mind. Both methods adopt a theory called neural reuse and align with evolution and natural selection in different ways.
According to the theory of neural reuse, the structure of the mind is instantiated by brain regions that can be reused for multiple cognitive purposes (Dehaene, 2007; Anderson, 2011, 2014). Two points are key to understanding reuse. First, as the brain develops more complex neural networks, the brain realizes more complex functions. For instance, language requires an extensive neural network, while more primitive abilities such as perception require a less extensive neural network (Anderson, 2014). The more extensive the neural network, the more potential for accomplishing complex tasks. Second, individual brain regions may contribute to various operations because of the features of those brain regions and the circumstances affecting those brain regions (Anderson, 2014). A single brain region may contribute to a variety of cognitive abilities by reusing its resources with different regional partners. Thus, neural reuse is the view that (i) the brain is a dynamic network that enables the brain to accomplish complex tasks; and (ii) brain regions within the network contribute to many diverse functions in a variety of circumstances.
Neural reuse may align with evolution in at least two ways. The first way describes a brain selected for reuse: evolution selected for a reusable brain, and the entire reusable brain is an adaptation forged by natural selection. The second way describes a brain with the capacity for reuse: natural selection may have contributed to the development of a brain exhibiting reuse, but other mechanisms of evolution, such as by-products and cultural influence, may then have been operative in the process.
The first way describes natural selection as a sufficient condition for explaining the evolution of the human mind; the second way describes natural selection as a necessary condition for the evolution of the human mind. My aim is to introduce these two methods and then offer some brief objections and responses.
Aristotle's definition of time: an old instance of the timeless 'continuous' vs. 'discrete' paradox
ABSTRACT. Aristotle’s definition of time – the number of movement with respect to the before and after (Phys. IV.11, 219b1-2) – has been a subject of debate among commentators since antiquity. One of the main problems raised by this definition is why time is said to be a number, a discrete quantity, while Aristotle thinks that it is a continuous quantity. Attempting to resolve this paradox, which is just an old instance of the timeless ‘continuous’ vs. ‘discrete’ problem, scholars have generally developed interpretations which amount to saying that by defining time as a number, Aristotle simply means that it is the measure of movement. They then provide different reasons to explain why Aristotle uses ‘number’ rather than ‘measure’. Annas (1975), whose view is still influential, proposes that Aristotle’s reason to define time as a number is to oppose Plato’s view – that time and numbers are ideal entities – by making time dependent upon the measurement of things in motion. Roark (2011), who thinks that time is for Aristotle a hylomorphic compound of a form (perception) determining a matter (motion) with respect to quantity, maintains that defining time as a number is a way to implicate the formal determinacy that perception imposes on motion. Sentesy (2017) thinks that time is defined as a number because it is essentially a defined length of movement perceived by the mind, abstracted as a unit and used to count other movements which it measures. Coope (2005), the only recent commentator to hold that ‘number’ does not have the meaning of ‘measure’ in Aristotle’s definition, argues that it should not even be construed as a quantity: time would be the order of change with respect to the before and after, and would be defined as a number because it is obtained by counting and introducing divisions in it.
Two texts generally ignored by this literature – Metaphysics V.13 and Categories 6 – in which Aristotle expounds his conception of quantity, number, and measure, suggest a different solution to the problem. By commenting, in light of this background, on the reasoning that leads Aristotle to his definition of time, I argue that the fundamental reason why time is defined as a number is that movement, of which time is the quantity, has parts that exist in succession. Movement can exist as a quantity, a whole divisible into parts, only in the mind. This happens when the mind distinguishes movement, introducing actual divisions in it. Movement then forms a plurality of parts which, insofar as these can be counted, constitute a number for Aristotle.
Time is thus defined as a number because it is perceived as a plurality of successive parts in motion. However, this number defines a continuous quantity, for the parts of motion are not divided in reality. I will also argue that, as the number of motion, time measures motion; being a measure, however, is a property of time distinct from its nature and cannot be used to define it.
Thomas Bradwardine and the Philosophy of Time: 14th-Century Augustinian or Anti-Pelagian Reactionary?
ABSTRACT. When considering the contribution of Thomas Bradwardine (early-14th-century Archbishop of Canterbury and one of the famed “Merton Calculators”) to the philosophy of time, there is a strong temptation to view him, in keeping with much of the extant literature, primarily as a philosopher and theologian in the line of Augustine. From this perspective, Bradwardine is seen primarily as a conduit of Augustinian philosophy in the next millennium, and as a forerunner to the Augustinian Reformers of the centuries immediately following. This view of Bradwardine was promoted most prominently and obviously by Heiko Oberman in his 1957 monograph, Archbishop Thomas Bradwardine: A Fourteenth Century Augustinian. More specifically relevant to the topic of this paper, Edith Wilks Dolnikowski, in her 1995 monograph, Thomas Bradwardine: A View of Time and a Vision of Eternity in Fourteenth-Century Thought, seeks to place Bradwardine’s philosophy of time in the Classical lineage of Euclid, Aristotle, Augustine, and Boethius. Both of these studies primarily reference Bradwardine’s most famous and developed theological text, written relatively late in his life, De causa Dei contra Pelagium.
I wish, rather, to highlight the primarily reactionary nature of Bradwardine’s philosophical output: for while it is certainly true that the theology and philosophy of Augustine played a significant role in the development and shape of Bradwardine’s own thought, I contend that it is more appropriate to view Bradwardine’s philosophy of time as a direct reaction to Ockham and his perceived Pelagianism than as, properly speaking, a positive development of Augustinian thought. Secondly, I wish to show that Bradwardine’s apparent Augustinianism comes to him primarily indirectly, through another Archbishop of Canterbury, Anselm. The influence of Anselm is evident in De causa Dei, but is even more obvious in earlier works, such as the reportatio called De futuris contingentibus, where favourable references to the works of Anselm are so ubiquitous as to be positively striking. In light of this and other evidence, I contend that Bradwardine may be more accurately characterized as “Anselmian” than as “Augustinian.” Finally, I argue that a complete understanding of Bradwardine as a philosopher must come through more careful attention to his entire corpus, including his earlier works, which have only recently begun gaining scholarly attention.
Select bibliography:
Dolnikowski, Edith Wilks. Thomas Bradwardine: A View of Time and a Vision of Eternity in Fourteenth-Century Thought. Leiden: Brill, 1995.
Oberman, Heiko A. Archbishop Thomas Bradwardine, Fourteenth Century Augustinian: A study of his theology in its historical context. Utrecht: Kemink & Zoon, 1957.
Thomas Bradwardine. De causa Dei contra Pelagium et de virtute causarum. Ed. Henry Savile. London: 1618.
________. De incipit et desinit. Ed. L.O. Nielsen. Cahiers de l’Institut du Moyen-Âge Grec et Latin 42 (1982): 47–83.
William Ockham. Predestination, God's Foreknowledge, and Future Contingents, 2nd ed. Trans. and ed. M. McCord Adams and N. Kretzmann. Indianapolis: Hackett, 1983.
ABSTRACT. The debate concerning whether Aristotle’s position on the intellect/thought (νοῦς) is best described as dualist, functionalist, or physicalist has largely focused on his comments in the latter half of the Physics as well as in De Anima. In this paper, I argue that Aristotle’s comments in a previously unrecognized place commit him to supervenience physicalism with respect to thought (νοῦς): his treatise on time, in Physics IV.10-14. This is because, I argue, without a commitment to supervenience physicalism with respect to thought, Aristotle’s treatise on time faces several problems.