VMST 2018: 8TH ANNUAL VALUES IN MEDICINE, SCIENCE, AND TECHNOLOGY CONFERENCE
PROGRAM FOR SUNDAY, MAY 20TH

09:30-11:00 Session 19: Values in Science: Roles, Constraints, Criteria
Chair:
Matthew J. Brown (The University of Texas at Dallas, United States)
09:30
Andrew Schroeder (Princeton University/Claremont McKenna College, United States)
Values in Science, Public Trust, and Transparency

ABSTRACT. There is a growing (and I think correct) consensus among philosophers of science that core parts of the scientific process can’t be value-free or “objective” in the way most people think. We have empirical evidence, however, showing that when members of the public believe that scientific results depend on or were influenced by a scientist’s values, that undermines the trust they have in those results. I argue that this response is rationally justified: using case studies from epidemiology and climate science, I show how key conclusions of important scientific studies can crucially depend on underlying value judgments. Unfortunately, there is often no way for non-specialists to determine whether or not this has been the case. And, importantly, in such cases there are few generally-accepted principles of scientific ethics which significantly constrain scientists’ value judgments. Thus, rejecting the value-free ideal for science both *does* and in many important cases *should* undermine public trust in science.

What is the solution? Several authors have proposed transparency: scientists should clearly identify the value judgments they make in the course of their work. I argue that this is insufficient. On the descriptive side, we have empirical evidence showing that reporting values does not increase public trust in science (in fact, it decreases it). And, given the highly complex and technical impact of many value judgments on scientific results, I argue that transparency also probably should not increase public trust in science, at least in many important cases.

I conclude by suggesting that the problem was never that scientists’ values were hidden. Instead, it was that they are (relatively) unconstrained. I therefore propose adopting (a more nuanced version of) the following as a principle of scientific ethics: when scientists must make value judgments in the course of their research, they should (in many cases) not rely on their own values; they should instead appeal to democratic values — roughly, the values of the public or its representatives. This, I argue, rationally should increase public trust in science. We don’t yet have any evidence about whether it will increase public trust in science, but I think there are reasons for optimism.

09:50
Samuel Hall (University of Notre Dame, United States)
The Function of Cognitive and Conative Values in Science
SPEAKER: Samuel Hall

ABSTRACT. The considerable literature on values in science has come to rely upon an array of disjointed terminological distinctions (e.g. epistemic/non-epistemic; cognitive/social; direct/indirect), the inconsistent application of which has hindered consensus on the proper place for values in scientific inquiry. Here, I attempt to organize pre-existing values talk under a more consistent framework, reassessing both the nature and the role, legitimate or illegitimate, of such values. I first problematize the standard epistemic/non-epistemic distinction, arguing that it risks conflating two senses of 'epistemic'. My alternative view hangs the discussion on a simplified yet robust distinction between cognitive and conative values, highlighting an axiological thread running through the literature. I distinguish between (i) epistemic (or truth-indicative) and pragmatic (or truth-disclosive) cognitive values, which have a mind-to-world direction of fit and are instantiated as characteristics of theories, and (ii) conative values, which concern intentions or goals regarding how we want the world to be. The operative set of cognitive values is a function of the particular (epistemic) conative value guiding scientific inquiry. Given this functional approach, I explore the underlying epistemic issues motivating an appeal to values. This yields support for the view that values concerning non-epistemic matters, be they cognitive or conative, are rationally permissible only if they are compatible with the epistemic goal of scientific inquiry: either expanding fruitful research or limiting methodologies in ways that do not hinder the pursuit of knowledge. Specifically, cognitive values serve as reasons to commit to a given theory, with only epistemic-cognitive values capable of closing the logical gap in evidential relations; whereas conative values dictate the norms of inquiry, conditioning methodological standards of research and evaluative standards of evidential support.
This value-directed view recognizes the ineliminable role of values throughout science while respecting the motivating concern behind the value-free ideal, namely, avoidance of self-confirming belief in any particular scientific theory.

11:30-12:30 Session 20: Health and Medicine II
Chair:
Frederick Grinnell (UT Southwestern Medical Center, United States)
11:30
Jake Earl (National Institutes of Health, United States)
Innovative Practice, Clinical Research, and the Ethical Advancement of Medicine
SPEAKER: Jake Earl

ABSTRACT. U.S. ethics guidelines and regulations for clinical research on human subjects do not apply to novel or experimental medical interventions that are given primarily to aid a patient, rather than to create generalizable knowledge. Despite this formal exclusion of what we might call “innovative practice” from the ambit of the research ethics framework, such interventions raise significant ethical concerns. Commentators on this subject have focused primarily on the interpersonal ethics of a practitioner offering such a nonstandard clinical intervention to a patient. Less attention has been given to the fact that innovative practice frequently serves as a pathway for the development and proliferation of new therapeutic, preventive, and diagnostic interventions, one that lacks the scientific and ethical controls of clinical research. In this paper, I explore the ethical tensions between the innovative practice and clinical research pathways to improving the standard of care in medicine, including concerns about current patients’ autonomy rights, future patients’ well-being, practitioners’ professional integrity, and costs to society at large. Policy approaches to address these tensions can be helpfully categorized based on the kind and degree of restrictions they place on innovative practice (as the clinical research pathway is an ethically necessary component of the medical enterprise), and I consider the relative ethical merits of both more and less restrictive approaches. Ultimately, I argue that in the absence of additional empirical evidence about the societal risks posed by the innovative practice pathway, considerations of current patients’ autonomy rights and practitioners’ professional integrity support a less restrictive regulatory approach, specifically one that would permit the continuation of innovative practice where requiring the practitioner to conduct formal clinical research would be unreasonably burdensome.
Provided that they are properly coordinated and regulated, innovative practice and clinical research are both necessary, viable pathways to the ethical advancement of medicine.

11:50
Laura Cupples (University of South Carolina, United States)
Epistemic Justice, Health State Valuations, and the Quality Adjusted Life Year
SPEAKER: Laura Cupples

ABSTRACT. The quality adjusted life year (QALY) is a generic measure of disease burden used in health economic analysis. By estimating the value associated with a given health state, and also taking into account how many years a patient is projected to experience that state, policy makers attempt to gauge the relative worth of the associated intervention. It is argued that because resources are limited, society should try to maximize the good it can do for patients with available health care dollars. Yet who, from an epistemic standpoint, should provide values for the health states in question? At present, values are solicited from a representative sample of the general public rather than directly from disabled and chronically ill persons. Policy makers argue that disabled and chronically ill people are not credible reporters because they exhibit an adaptive preference for their own health states. I contend that this argument does not constitute an adequate reason for soliciting values primarily from healthy and able-bodied individuals. Failing to solicit values from disabled and chronically ill individuals is both epistemically and ethically unsound practice. I argue that the disabled and chronically ill are in a privileged epistemic position, relative to the general public, when it comes to evaluating disabled and ill health states. I further argue that not soliciting their valuations of these health states amounts to epistemic injustice, and can lead to distributive injustice. Not only do I call on policy makers to give the disabled and chronically ill a central place in valuing the health states used in QALY calculations, I also encourage them to reframe the problem of priority setting for resource allocation in a way that does not systematically devalue the health states of the disabled and chronically ill.

13:30-15:00 Session 21: Empirical Approaches to Values in Science
Chair:
Dan Hicks (University of California, Davis, United States)
13:30
Jessey Wright (Stanford University, United States)
Making Images and Interpreting Data in Neuroimaging Research
SPEAKER: Jessey Wright

ABSTRACT. Techniques for analyzing, handling, and sharing data have been central both to progress in neuroimaging and to critiques of the technology. Low statistical power and the vast array of analysis methods researchers can choose from have been characterized as producing a state of crisis — “… a ‘perfect storm’ of irreproducible results” (Poldrack et al 2017). Furthermore, common interpretations of neuroimaging data may be undermined by the indirect nature of the data — which reflects blood flow, not neural activity — and by the assumptions implicit in the complex data manipulations used to overcome this indirectness (van Orden and Paap 1997; Mole and Klein 2010; Aktunc 2014). On the other hand, analysis techniques are necessary for neuroimaging research to function. Recent philosophical work suggests that the role of data analysis may be more nuanced than critics allow. Data analysis techniques play a variety of roles in the experimental process, whereas critics often treat them only as tools for error correction or statistical inference (Roskies 2010; Wright 2017).

In this paper, I draw upon my experience collaborating with neuroscientists and my current position as a resident in a neuroscience lab to argue for a more nuanced account of the role data manipulations play in neuroscientific experiments. I propose that they are used to isolate interpretable patterns in the data. By looking at how multi-voxel pattern analysis techniques have enabled research into the information carried by neural activity, I show that analysis techniques are chosen because they are seen as able to provide positive evidence for hypotheses of interest. As a consequence, how data analysis techniques are understood, discussed, and promoted by the community influences the interpretation of their results — an influence that can precede the manipulation of the data itself. I illustrate this by examining a recent dispute over meta-analysis results (Lieberman and Eisenberger 2015; Lieberman 2015; Yarkoni 2015). The dispute, I show, rests in part on how the way the original paper promoting the technique frames its value interacts with the hypotheses users adopt it to investigate.

References

Aktunc, E. M. [2014]: ‘Severe Tests in Neuroimaging: What We Can Learn and How We Can Learn It’, Philosophy of Science, 81, pp. 961-73.

Anderson, M. [2015]: ‘Mining the brain for a new taxonomy of the mind’, Philosophy Compass, 10, pp. 68-77.

Lieberman, M. D. [2015]: ‘Comparing Pain, Cognitive, and Salience accounts of dACC’. Available at: https://www.psychologytoday.com/blog/social-brain-social-mind/201512/comparing-pain-cognitive-and-salience-accounts-dacc

Lieberman, M. D., and Eisenberger, N. I. [2015]: ‘The dorsal anterior cingulate cortex is selective for pain: Results from large-scale reverse inference’, PNAS, 112(49), pp. 15250-15255.

Mole, C. and Klein, C. [2010]: ‘Confirmation, Refutation, and the Evidence of fMRI’, In Stephen Hanson & Martin Bunzl (eds.), Foundational Issues in Human Brain Mapping. Cambridge: MIT Press. pp. 99-112.

Poldrack, R. A., Baker, C. I., Durnez, J., Gorgolewski, K. J., Matthews, P. M., Munafò M. R., Nichols, T. E., Poline, J., Vul, E., and Yarkoni, T. [2017]: ‘Scanning the horizon: towards transparent and reproducible neuroimaging research’, Nature Reviews Neuroscience, 18, pp. 115-126.

Roskies, A. [2010]: ‘Saving Subtraction: A reply to Van Orden and Paap’, British Journal for the Philosophy of Science, 61, pp. 635-65.

van Orden, G. C., and Paap, K. R. [1997]: ‘Functional Neuroimages Fail to Discover Pieces of Mind in Parts of the Brain’, Philosophy of Science, 64, pp. S85-94.

Wright, J. W. [2017]: ‘The Analysis of Data and the Evidential Scope of Neuroimaging Results’, British Journal for the Philosophy of Science.

Yarkoni, T. [2015]: ‘Still not selective: comment on comment on comment on Lieberman and Eisenberger (2015)’. Available at: http://www.talyarkoni.org/blog/2015/12/14/still-not-selective-comment-on-comment-on-comment-on-lieberman-eisenberger-2015/

13:50
Kathleen Connelly (University of California San Diego, United States)
How Philosophers of Moral Responsibility Can Draw on Psychology Morally and Responsibly

ABSTRACT. Philosophers of moral responsibility have, particularly recently, turned to empirical psychology to inform arguments about who is and who is not rational enough to be blameworthy. One instance of this trend is the recent attention paid to questions of when people with mental disabilities are morally responsible for their actions. The more we’ve learned about mental disability, mental illness, and neuroatypicality, and the less we’ve stigmatized these conditions, the more it becomes clear that people we might once have blamed (or who might have blamed themselves) for their harmful behavior are in fact not at fault at all. This is a step forward, as blame often harms its objects, and is even – on some views – aimed at harming them. But it’s clear there’s risk of injustice here as well. Holding people morally responsible for their actions is an important element of treating them as having agency, which is an important element of respecting them. To this end, this presentation will explore how philosophers can approach questions of mental disability and moral responsibility in a thoughtful, critical way. This approach must be informed by empirical science but also – perhaps more importantly – by a historical and sociological understanding of how this science has been abused to limit people’s autonomy in the past and, indeed, is still so abused in the present. I’ll take as a case study what some philosophers have said about adults with high-functioning autism. In particular, I’ll discuss David Shoemaker’s argument that these adults are not fully morally responsible. In response, I’ll draw on sources by people with autism who maintain that they can recognize the relevant reasons. I’ll argue that they are therefore morally responsible. Furthermore, I’ll demonstrate that to persist in failing to appropriately blame people with autism is to patronize them – to do them a particular sort of disrespect.
I’ll conclude by sketching ways that moral philosophers can usefully draw on empirical psychology while maintaining a view of all agents involved as fellow subjects, not objects of study, even when the science itself fails to do so.

14:10
Dan Steel (The University of British Columbia, Canada)
Itai Bavli (The University of British Columbia, Canada)
Aaron McCright (Michigan State University, United States)
Chad Gonnerman (University of Southern Indiana, United States)
Gender and Scientists’ Views about the Value-Free Ideal

ABSTRACT. In light of the role assigned to the value-free ideal by many philosophers of science, we might expect it to be the predominant view among professional scientists. Previous empirical work probing the attitudes of professional scientists, however, suggests that the value-free ideal isn’t the majority view, and this work provides some reason to think that female scientists in particular are less likely than male scientists to embrace the ideal. In this paper, we add to the growing empirical record. More fully, after reviewing previous work on scientists’ attitudes toward the value-free ideal, we report the results of a survey distributed to the science faculty at a major research university. We designed the survey to overcome the limitations of previous empirical work on these issues as well as to provide a more refined picture of the ways in which scientists might agree or disagree with the value-free ideal. Our results strengthen previous findings. They also help to point the way to various hypotheses about how gender might be related to views regarding the value-free ideal. The one that we focus on in this presentation is what we call the gender socialization hypothesis: the possibility that gender socialization tends to make women more inclined to prosocial behavior than men, which results in female scientists’ being more likely to favor linking science to prosocial aims, such as promoting human welfare or social justice. We end by discussing some limitations and further implications of these findings.