VMST 2019: 9TH ANNUAL VALUES IN MEDICINE, SCIENCE, AND TECHNOLOGY CONFERENCE 2019
PROGRAM FOR FRIDAY, MAY 24TH

09:00-09:30 Coffee & Registration
09:30-11:00 Session 6: Artificial Intelligence and Emergent Technologies
09:30
Ethics, Action, and the Space of Technological Possibility

ABSTRACT. This paper is an early foray into a new area of research for me, and has two main goals. The first is to explore what might be particularly philosophically interesting about the ethics of emerging technologies. In brief, I am curious about whether the ethics of emerging technologies is a species of applied ethics or whether it raises novel philosophical concerns. I take guidance here from Shannon Vallor, who has recently argued that contemporary emerging technologies produce a condition that she calls “technosocial opacity,” which poses insuperable objections to approaching the ethics of emerging technology from the perspective of applied ethics, whether deontological or utilitarian. Vallor contends that this leaves virtue ethics as a uniquely qualified moral theory for addressing the moral issues raised by 21st-century technologies.

I will neither challenge Vallor’s positive account of virtue ethics nor defend utilitarianism, but rather argue that her critique of deontology on the basis of technosocial opacity is either too strong or underdeveloped. More specifically, her argument implies that deontological ethics cannot handle any technosocial novelty. This seems implausible, and so it seems that something needs to be said about how the novelty of emerging technologies is especially opaque. The rest of the paper is a friendly amendment to Vallor’s work in this direction.

I contend that emerging technologies create genuinely new possibilities for action. The novelty of this position lies in the fact that, as opposed to theories that take new technologies to expand human capacities (even radical theories that claim technologies are literal extensions of human capacities, abilities, organs, etc.), I will be arguing that emerging technologies produce genuinely new action-types, such that they make possible intentions that weren’t possible before. This is one way in which the deontologist will find their views inadequate in the face of technosocial opacity. 

In making this argument, I develop some insights from Heidegger, Latour, and postphenomenology against the instrumental paradigm of thinking about technology, and David Velleman’s recent work on relativism concerning the construction of action-types, along with other work in the philosophy of action. I conclude by making two suggestions: first, that the ethics of emerging technology is first and foremost an ethics of design, and second, that Vallor may well be right that the best way to frame the ethics of design is in terms of what it is to flourish.

09:50
Cyborg Maintenance: The Invisible Work of the Technological Bodymind

ABSTRACT. The lived experiences of those who already live as cyborgs are ignored by those who most wish to become cyborgs. In this presentation, I will push back against the utopic, teleological imaginaries of the Transhumanist movement using a daily concern for actual cyborgs: maintenance. Scholarly work on maintenance is sparse, relatively recent, and generally focused on infrastructure and large technological systems. In this talk I will merge the work done on these large technological systems with the biopolitical work of disability studies scholars and activists. I will show that the common narratives of innovation, which insist that technology becomes faster, more efficient, and more durable over time, are false, and that upkeep rather than upgrading will be the norm for cyborg bodies. The interfaces between technology and the body are sites of significant breakdown as each degrades the other. Moisture, acidity, and healing processes make it difficult for technological systems to last long within the body, and the intrusion of foreign material into the human body triggers scarification, fibrosis, infection, and systemic reactions. Disabled people regularly navigate these concerns, and their descriptions of the work they do to keep up their techno-bodyminds do not match the transhumanist narrative. The concerns of cyborg maintenance need to be considered as technologists continue to produce technological interventions into our bodies, and disabled folks for whom such intervention is already a reality can tell us a lot about how that needs to happen.

10:10
Integrating the Second Person Standpoint into AI-supported Environments

ABSTRACT. Progress in AI leads one to wonder whether AI will ultimately diminish human competencies. Current AI resembles an “electronic idiot savant, which excels at one particular mental task but is baffled by others” (The Economist 2018). This may be exemplified by computational pathology, which offers Big Data and machine learning based approaches that are as good as humans at analyzing biopsies but lack any interpersonal skills.

AI systems are not (yet) embodied agents experiencing the world via the senses and the actions they perform. They are not truly embedded in social environments. They are used to optimize processes while adapting to new circumstances at the same time.

This leads to the question of what role humans may and should play in AI-supported environments. First, normative control should be exercised, supported by responsible innovation approaches. Second, both the developers and the participants of such systems should take care that these environments are optimized not for machines but for humans as individuals and as groups, taking into account the unique interpersonal relationships humans can form because they can relate to each other “second-personally” (Darwall 2006). According to Darwall (2006), respect and accountability are based on “second personal competence”. This capability involves “instrumental rationality and a certain degree of self-consciousness as well as being able to step back from one’s own perspective, and to project into others’ perspective and to relate to one another second-personally” (Darwall 2009, p. 10). In Darwall’s view, any second-personally competent agent has authority as a member of the moral community. Since this authority is mutual between peers, one can direct another only through that person’s own free will. Darwall was inspired by Fichte’s summons (Aufforderung) (Darwall 2006, p. 252), yet his concept is clearly modern. It results in reciprocal demands, claims, and entitlements between humans.

References

Darwall, Stephen (2006). The Second Person Standpoint: Morality, Respect and Accountability. Harvard University Press.

Darwall, Stephen (2009). "The Second Person Standpoint: An Interview with Stephen Darwall." The Harvard Review of Philosophy, vol. XVI.

The Economist (2018). "AI, Radiology and the Future of Work." The Economist, June 9, 2018, pp. 15-16.

11:00-11:30 Break
11:30-12:30 Session 7: Lightning Talks 2
11:30
The Demarcation Problem in philosophy of science: the scientific predicate, a socio-political status or an epistemic assessment?

ABSTRACT. What do we say when we affirm that a theory, a discipline, or a decision is scientific? When we use the scientific predicate, do we learn something about the epistemic nature of the object we are describing, or do we just say something about its social and political status? If using the scientific predicate leads to a properly epistemic observation, the demarcation question is the responsibility of the philosopher. If the scientific predicate is only the mark of a social decision, then it is no longer within the domain of philosophy (Laudan, 1983). Nevertheless, if we adopt a value-laden ideal (Douglas, 2009), Laudan's argument is no longer relevant, and the demarcation problem reappears. The chief aims of this study are: (i) to discern which values are involved in the demarcation question, (ii) to determine who is best suited to deal with the demarcation problem, and (iii) to refine our understanding of the distinction between real science and pseudoscience. To do so, we will (1) use Heather Douglas's contributions to philosophy of science to reply to Laudan, and (2) improve in several ways the Topology of Values in Science proposed by Douglas (2009) in order to offer a better understanding of unacceptable practices (politicized science, fraud-science, junk-science, pseudoscience, bad-science) and to identify more precisely the steps of the research process during which it is appropriate to pay attention to non-academic discourses (with two NIH consensus conferences to illustrate our point).

Main References Only:

DOUGLAS, H.E. (2009). Science, policy, and the value-free ideal. Pittsburgh: University of Pittsburgh Press.

LAUDAN, L. (1983). The Demise of the Demarcation Problem. In Physics, Philosophy and Psychoanalysis, 111-127.

SOLOMON, M. (2007). The social epistemology of NIH consensus conferences. In: H. Kincaid and J. McKitrick (eds), Establishing Medical Reality: Methodological and Metaphysical Issues in Philosophy of Medicine. Dordrecht: Springer.

11:30
Why value-ladenness of the behavioral sciences should lead to rethinking the widespread views on their policy relevance

ABSTRACT. Insights from the behavioral sciences are reshaping public policy around the world (Jones, Pykett & Whitehead 2013). Behavioral approaches are drawn upon in a variety of policy fields such as health and environmental policy, labor regulations, and consumer protection. Behavioral policy units have been established worldwide, in the UK, US, Germany, France, Australia, Japan, and Singapore, as well as at the World Bank, the OECD, and the European Commission. The application of behavioral research to policy is promoted as a way of making policies more effective, that is, of formulating policies which achieve policymakers’ aims (Thaler & Sunstein 2008; Shafir 2012). Proponents of the application of the behavioral sciences to policy believe that behavioral research provides the scientific evidence needed to design effective policies. In particular, they claim that the subset of the behavioral sciences they rely on (cognitive psychology and behavioral economics) offers an ‘adequate’, ‘accurate’, or ‘realistic’ account of behavior and therefore should be the basis of policy design. However, this claim is mistaken. My analysis is inspired by Longino’s most recent book (2013). The type of incommensurability that Longino demonstrates (and that, as I argue, characterizes most behavioral research) calls into question the idea that there are epistemic reasons for treating some of the approaches within the behavioral sciences as the ‘adequate’ or ‘accurate’ ones. Hence, the justification for using some insights from the behavioral sciences in policymaking is questioned as well. We have to rethink the widespread view that findings from the behavioral sciences could, and should, inform policy, and why they are treated as policy-relevant. In this context it is crucial to understand the value-ladenness of behavioral science because, as I intend to show, the values embedded in cognitive psychology and behavioral economics help explain why these approaches have recently entered policy contexts.

11:30
"Value-inertia" in science-based assessments of extremes

ABSTRACT. Worst-case scenarios, under various names such as “plausible upper-bound,” “credible worst-case,” “practical worst-case,” or “extreme scenario,” are essential in many policy contexts. However, while worst-case scenarios are often critical in practical decision-making, how values and value judgments influence them is often unappreciated. The philosophy of science literature analyzes the role of values in science but has so far not given much attention to expert assessments, and even less to assessments of extremes.

In this paper, I examine the role of values in the flow of information from science to decision-making involving worst-case scenarios. I argue that there are two minimal requirements for the legitimate use of non-epistemic values in expert assessments. The first is a requirement of transparency: any value judgments ought to be made transparent so that stakeholders can take them into account when making decisions based on the information from the experts. The second is a requirement of context-dependency: value judgments ought to be adapted (or be “apt”) to the context of the assessment.

However, these requirements seem difficult to satisfy in practice for at least some expert assessments of extremes, as value judgments are neither transparent nor context-dependent. Instead, previous and external value judgments tend to become invisible and to influence the results in ways which are difficult to change. I suggest that this tendency, which I call “value-inertia,” needs to be highlighted. Through case studies of worst-case scenarios, I show how value-inertia can influence expert assessments from two directions. The first is upstream, from previous scientific studies and syntheses. The second is downstream, from previous assessments made in other contexts. Finally, I discuss the implications of upstream and downstream value-inertia and propose how the influence of value-inertia can be mitigated.

11:30
Utilizing Science in Society: Toward a Stakeholder Model of Science in an Aggregative Democracy

ABSTRACT. In this paper, I argue for the use of the stakeholder model of science in aggregative democracies. I first discuss Roger A. Pielke, Jr.'s four idealized roles for scientists in regard to policy and politics: the pure scientist, the science arbiter, the issue advocate, and the honest broker of policy alternatives. I show that the linear model of science facilitates the first two roles, which are often applied erroneously, causing deleterious consequences for both science and politics. On the other hand, the stakeholder model of science facilitates the latter two roles, which are applicable in virtually all real-world cases that involve value disputes and scientific uncertainty. Contra Pielke, however, I argue that, although political discourse is indeed necessary in cases of messy politics, it is not sufficient, as scientific information is also an important constituent of this kind of politics.

I argue that the linear model of science fosters three problems, all of which are intimately related and can be ameliorated by the stakeholder model of science. I then reject the common conceptual tendency to move from the stakeholder model of science to arguments in favor of deliberative democracies; I support a more "garden-variety" conception of aggregative democracy. I examine Henry S. Richardson's defense of deliberative democracies against the "de facto" and "de jure" objections, arguing that he fails to adequately respond to these serious objections. Regarding the former, I show that his principle of recoverability is either fantastical or superfluous and that his idea of meshing consequences via incompatible means is problematic. Regarding the latter, I show that the defeasibility of expert opinion fails to resolve the problem of deliberation ad infinitum and that it leads to a variety of truth relativism that is putatively antithetical to science.

11:30
The Value Freedom of Value-Laden Science: Perspectives on Inductive Risk and the Social Value Management Ideal

ABSTRACT. In this paper, I argue that inductive risk and Helen Longino’s social value management ideal of science, widely used to defend value-laden science, are in fact consistent with the value-free ideal of science (VFI). By VFI, I mean the claim that scientific inquiry ought to minimize the role that scientists’ social and political (SP) values play in the so-called ‘internal’ practices of investigation, such as justification, probability assignment, and theory choice. On the one hand, it is consistent with inductive risk to argue that SP values necessarily influencing internal decisions ought to be those of citizen stakeholders or elected policymakers, not those of scientists. It is also consistent with inductive risk to argue that value-free scientists would be superior to value-laden scientists: science would be better off were it practiced by agents for whom there is no epistemic uncertainty. Both arguments in different ways call for minimizing the role of scientists’ SP values in scientific reasoning. On the other hand, while Longino endorses the use and proliferation of scientists’ SP values in internal practices, she advocates a model of the ideal scientific community that refuses to privilege any particular SP values and therefore, I argue, privileges value freedom. Consider: any feminism of Longino’s ideal scientific community is toothless, for ‘built into’ its very social conditions and norms of discourse is a refusal to take a stand on feminism, exemplified by its tolerance of misogynistic viewpoints; I do my best to spell out what it means to be ‘built in’. Drawing an analogy between the community and individual, I show how this example evinces two general lessons. First, to have a value is to privilege the corresponding evaluative point of view. Second, a community refusing to privilege any SP points of view is a community privileging VFI, itself an SP value.

11:30
Recruiting Underrepresented Groups in Philosophy: A Longitudinal Study
PRESENTER: Daniel Hicks

ABSTRACT. In contrast with most other humanities fields, academic philosophy remains heavily dominated by white men. Schwitzgebel (2017) shows that women have received roughly 30% of bachelor's degrees in philosophy "since approximately forever." In the last several years, a number of survey studies have empirically examined the pipeline between undergraduate students taking their first philosophy class and going on to major in philosophy, identifying factors that might make majoring in philosophy disproportionately unattractive to women and people of color (e.g., Thompson et al. 2016). However, these studies typically use a one-off survey design and do not follow students over time, and they typically focus on students in one or a small number of philosophy courses, limiting their ability to analyze the effect of the instructor's gender and race.

This talk presents the results of a longitudinal study of philosophy enrollment. We use anonymized records covering every undergraduate student who took a course from the philosophy department at a large public research university between Fall 2005 and Spring 2015. We simultaneously analyze the effect of peer and instructor demographics, grade discrepancies (Thompson 2017, §2.2.1), and economic class on whether students go on to major in philosophy. We take an intersectional approach, examining potential differences in the effects of these factors across combined gender-race-ethnicity categories.

References

Schwitzgebel, Eric. 2017. “Women Have Been Earning 30-34% of Philosophy BAs in the U.S. Since Approximately Forever*.” December 8, 2017. http://schwitzsplinters.blogspot.com/2017/12/women-have-been-earning-30-34-of.html.

Thompson, Morgan, Toni Adleberg, Sam Sims, and Eddy Nahmias. 2016. “Why Do Women Leave Philosophy? Surveying Students at the Introductory Level.” Philosophers’ Imprint 16 (6). http://quod.lib.umich.edu/p/phimp/3521354.0016.006/1.

Thompson, Morgan. 2017. “Explanations of the Gender Gap in Philosophy.” Philosophy Compass 12 (3): e12406. https://doi.org/10.1111/phc3.12406.

11:30
The Ethical Dr. Frankenstein

ABSTRACT. Remember those old-time movies of Dr. Frankenstein, where he raises a man from the dead who later becomes a terrifying monster? Although this is an ultra-dramatized representation of a scientific experiment gone wrong, the fears and genius behind clinical research trials are very real. Today, the way human research studies are understood and governed is very different. The most imperative objective is: “How will this make a difference to the lives of people living with severe diseases?” We have a passionate, long-term commitment to discovering and developing innovative medicines that transform the lives of people living with severe diseases. By connecting with patients and their families around the world who live with the physical and social burdens of severe disease, we aim to offer new perspectives, drive innovation, and offer the hope of a new generation of therapies that are helping to prolong and transform lives. In this presentation, I am going to demonstrate how technology has helped bring clinical research to the forefront of research and development by outlining the history of clinical research, the technologies that we currently use to get viable data, and how smartphones, apps, and satellites are propelling clinical research and development into the future.

11:30
A role for the history and philosophy of science in the promotion of scientific literacy

ABSTRACT. In a democratic system, non-experts should have a voice in research and innovation policy, as well as in those policy issues to which scientific and technological expertise is relevant – like climate change, GMOs, and emergent technologies. The inclusion of non-expert voices in the debate is both a requirement for a truly democratic process and an important counter to the kinds of jargon and group-think that can limit the perspective of more exclusively expert discussions.

For non-experts to participate in a productive way does require a certain degree of scientific literacy. Yet in our present age of intensive specialization, access to understanding any one subfield or subdiscipline in the sciences requires years of study. Moreover, the relevant sort of literacy involves not simply familiarity with factual information, but some perspective on the goals, methods, and practices that constitute knowledge formation in the scientific disciplines.

We have spent the last decade developing a syllabus, readings, and tools for teaching science literacy through the history and philosophy of science. These include the assembly of appropriate primary and secondary course materials, the creation of cumulative assignments, the development of technology resources to connect students to key events and figures in the history of science, and the implementation of assessment methods that focus on skill and concept development rather than fact memorization or problem sets. Our poster will showcase these tools and provide attendees with specific suggestions for similar course practices they can implement at their own institutions.

In particular, we have found that coursework that familiarizes students with how practices of knowledge formation in the sciences have developed over time has helped our students to:

1. Recognize that the methods of science are themselves developed through trial and error, and change over time.

2. Understand that different disciplines of science require different approaches and techniques, and will result in different levels of predictive uncertainty and different standards for what is considered a successful hypothesis.

3. Consider examples of scientific debate and processes through which those debates are resolved with the advantage of historical perspective.

4. Trace some of the unintended effects of the sciences on society and to identify where the social and cultural values of the scientists themselves played a role in their deliberations – and whether or not that had a negative epistemic effect.

12:30-13:30 Lunch
13:30-15:00 Session 8: Values in Science 1
13:30
Machine learning, theory choice, and non-epistemic values

ABSTRACT. The aim of this paper is to support the claim that non-epistemic values are essential to assessment of hypotheses. Much of the current discussion of the influence of political, religious, and other non-epistemic values on empirical reasoning relies either on (i) illustrating how it happens in concrete cases, or (ii) discussing practical or politically loaded subject matters such as social science or biology. Such arguments are vulnerable to two objections. First, if non-epistemic values happen to influence reasoning only in specific cases, perhaps this only shows that people are sometimes imperfect; it doesn’t seem to show that non-epistemic values are essential to reasoning itself. Second, if the specific cases involve subject matters with obvious practical or political implications, then one might object that non-epistemic values are irrelevant for subject matters that are theoretical and not politically loaded. This paper supports the view that non-epistemic values are essential to assessment of hypotheses, using a theorem from machine learning theory called the No Free Lunch theorem (NFL). If my argument holds, it supports the view that the influence of non-epistemic values on assessment of hypotheses is: (i) not (solely) due to psychological inclinations of human reasoners; and (ii) not special to practical or politically loaded areas of research, but rather is a general and essential characteristic for all empirical disciplines and all areas of inductive reasoning.

13:50
Values in Science: Ethical vs. Political Approaches

ABSTRACT. Philosophical work in value theory offers two different ways of determining values: distinctly ethical and distinctly political approaches. These approaches often yield conflicting conclusions, since any meaningful commitment to democracy requires deferring to the wishes of the public in at least some cases in which the public makes a normative error. It is therefore important for those thinking about values in science to consider whether the value judgments required by science should be grounded in ethical or political reasoning.

In the first part of the paper, I show that although this issue is rarely discussed explicitly, philosophers more commonly take an ethics-oriented approach when thinking about values in science in general. The same can be seen even more clearly in discussions of specific value judgments — e.g., the choice of an economic discount rate.

STS scholars more commonly take a political approach. They, however, typically restrict themselves to thinking of scientists as political agents in a descriptive sense. They therefore rarely tie their discussions to work in normative political philosophy/theory, which could be used to offer concrete recommendations to scientists about how they should navigate scientific value judgments.

In the second part of the paper, I try to clarify more carefully the difference between distinctly ethical and distinctly political approaches. Approaches rooted in ethics typically focus on substantive questions about which value judgments are correct or incorrect, or about which are well- versus poorly-supported by reasons. Approaches rooted in political philosophy typically work from a different set of concepts: procedure, legitimacy, representation, accommodation for opposing viewpoints, accountability, etc.

I (in contrast to most philosophers) favor a political approach to most values-in-science questions, and (moving beyond most STS scholars) believe that normative political philosophy can help scientists to navigate such issues. And so I conclude by considering what role the distinctly ethical arguments typical of philosophers can have, given a political approach to values in science. I conclude that they can serve two valuable roles. First, they can be seen as (merely) persuasive exercises. Second, and more interestingly, liberal political philosophies often crucially rely on a distinction between reasonable and unreasonable pluralism. The substantive ethical arguments of philosophers can help us see what lies within versus outside the range of reasonable disagreement.

14:10
The Heart of Scientific Inquiry: Affect in Pierre Duhem's 'Good Sense'

ABSTRACT. Following a long tradition that views emotion as antithetical to inquiry, contemporary philosophers of science have generally shown little interest in the role of affect (or ‘feeling’) in scientific reasoning (McAllister 2005; Kochan 2015). Within the discourse on values in science, where one most expects this topic to appear, the ideal of impartiality—that subjective preferences about what a scientist desires to be the case should not ground her evidence-based inferences—can seem, at first glance, to reinforce an emotion/cognition dichotomy. If one assumes that emotion underpins, or can even partly constitute, the vagaries of wishful thinking, prohibiting its impact on scientific reasoning naturally follows (e.g., McMullin 1982). Contrary to this diagnosis, I argue that the well-mined insights of late 19th/early 20th century physicist and philosopher Pierre Duhem include a rich and heretofore untapped vein uniquely positioned to illustrate the compatibility of impartiality with the legitimate influence of emotion on scientific inference. Unlike extant accounts (e.g., Stump 2007; Ivanova 2010), I highlight Duhem’s (1954/1914, 1991/1915) reliance on affect-laden language to describe his infamous notion of ‘good sense’ (bon sens), the intuitive faculty by which scientists recognize first principles, exercise valid rules of inference, and, most importantly, revise their theoretical commitments in light of inconclusive evidence. Accordingly, I propose a novel interpretation of good sense as an emotional faculty attuned to ‘epistemic feelings’ of the rightness, truth, or pursuit-worthiness of theories. Like emotion, good sense is partly characterized by its two-fold evaluative nature, assessing the salience of facts (i.e., fixing evidence) and assigning value to objects (i.e., appraising theories), and is reducible neither to cognitive beliefs nor conative desires (de Sousa 1987, 2011). For Duhem, a scientist’s ‘passions’, or private desires and biases, pose the real threat; when unhindered, good sense is an impartial and emotional judge.

15:00-15:30 Break
15:30-16:30 Session 9: Medical Research and Practice
15:30
“Neutrality” as the Value-Free Ideal in Clinical Ethics Consults

ABSTRACT. In 1992 the Joint Commission on Accreditation of Healthcare Organizations recommended that hospitals in the United States have some “mechanism” for responding to ethical issues in clinical care. Since that time, most hospitals have included an ethics committee or ethics consulting department to serve this role. However, the day-to-day activities and outlook of these bodies vary widely. One issue on which there is significant disagreement in the clinical ethics community is the role of neutrality. While some see exhortations to neutrality in clinical ethics simply as a reminder to consider and give weight to perspectives other than those preferred by the hospital and clinicians, others claim a stronger role for neutrality. This stronger role for neutrality has much in common with the value-free ideal in science, and the similarity can provide new routes for self-critique and improvement in clinical ethics consulting. In what follows I will describe and analyze the current literature on neutrality in clinical ethics consulting, critique this literature on meta-ethical and practical grounds, analyze similarities and differences between this literature and that on the value-free ideal, and consider alternatives to the current conceptual schema, including treating impartiality as one value among others. In this way I hope to show how clinical ethics consultants can avoid some of the problems that the value-free ideal has posed in science.

15:50
Pragmatic progress and the improvement of medical knowledge for global health

ABSTRACT. The globalized privatization of scientific research has been rampant, with vicious consequences for evidence-based medicine and the production of medical knowledge. Pharmaceutical companies now control the performance and funding of the majority of clinical trials and drug development strategies worldwide, and have strong financial incentives to keep unfavorable results confidential, squeeze patent revenues, and prompt doctor prescriptions through massive marketing campaigns. The aim of this paper is to offer a philosophical analysis of the crisis in medical knowledge, in terms of the pragmatic progress needed for pharmaceutical research to solve pressing epistemic and social health problems. If this reading is accurate, the analysis can guide the way forward by identifying a crucial mechanism needed to close the epistemic gap in current medical knowledge, which in turn serves as a criterion for filtering current and future proposals. The paper is divided as follows. First, I show that the drug market has led to a significant epistemic gap between the knowledge needed to address pressing social health issues and the knowledge produced following the demands of the global market. Second, I examine two competing notions of scientific progress. Following Kitcher (2012), I distinguish between teleological progress, i.e., progress understood as achieving or moving towards a general epistemic goal, such as truth or knowledge, and pragmatic progress, i.e., progress understood as solving a particular problem at hand. Using the notion of pragmatic progress, I suggest a new reading of the crisis in medical knowledge, one which emphasizes the problems that clinical research is set to solve. I then present two alternative ways to restructure medical research to fulfill this aim, illustrating how each can be implemented through real-world examples. The last section addresses a possible objection to the argument and exemplifies how the criterion can be used to filter undesirable proposals.