VMST 2022: VALUES IN MEDICINE, SCIENCE, AND TECHNOLOGY CONFERENCE 2022
PROGRAM FOR SATURDAY, MAY 21ST

09:45-10:45 Session 9: Artificial Intelligence and Machine Learning
Location: JO 4.614
09:45
Superstitious AF: Our Faith Beyond Understanding in Artificial Intelligences

ABSTRACT. In Kazuo Ishiguro’s 2021 novel *Klara and the Sun*, the robot narrator Klara is an Artificial Friend (AF) bought to “save from loneliness” Josie, a sick girl who lives an isolated existence with her mother. As Josie’s mysterious illness worsens (even her doctors are stumped), the solar-powered Klara devises a plan to ask the Sun (provider of all vitality and personified by Klara throughout the novel) to intervene and restore Josie’s health. Klara’s limited physical abilities and knowledge mean she must often enlist humans (Josie’s neighbor, Ricky; Josie’s father, Paul) to aid in her plan. Yet, fearing the Sun could withhold his aid if her plan were made public, Klara refuses to explain the specific details of her plan to her human helpers. What is remarkable is the degree to which the humans trust Klara and help her in increasingly complicated and dangerous undertakings with no knowledge of what she intends, other than that she believes it will make Josie well. Part of this is surely their own desperation to save Josie. But we can see how humans in an increasingly technological society could easily be drawn into carrying out an artificial intelligence’s plans, in part because AIs are thought to possess knowledge (such as pattern recognition) beyond our abilities to comprehend.

The novel thus adds a new wrinkle to the debate around the value of *explicability* in AIs, that is, the importance of receiving a “factual, direct, and clear explanation of the [AI] decision-making process” (Floridi et al 2018). Explicability is considered important to maintaining meaningful human control over AI algorithms, especially so-called “bottom-up” machine learning AI, which can produce potentially biased and harmful results based on factors that could be both “emergent and opaque” (Umbrello and van de Poel 2021). While there is legitimate debate over the value and necessity of explicability in machine learning scenarios (see especially Robbins 2019), the novel shows the degree to which we could become captive to inexplicable AI decisions that seem based more on correlation than causation.

An important part of Klara’s value as an AF is her observational and learning abilities which enable her to “see things the rest of us [humans] can’t.” The novel shows how, as general AIs become more available for private use, our ability to trust their decisions becomes increasingly important. At the same time, we see how the burden could fall on us to place our “faith” in opaque AI decisions based on bottom-up machine learning from which a “human-friendly semanticisation” (Floridi 2018) is not necessarily forthcoming or even possible. The novel itself is an exercise in AI explicability, as the narrator Klara provides some explanations to the reader for her decisions. But the novel shows how even Klara’s more explicit reasoning could be unconvincing to her human interlocutors, one of whom wonders aloud (as the reader might also wonder) if Klara’s secret plan is merely some form of “AF superstition,” at heart a misunderstanding of correlation and causation.

10:05
Machine Learning and Reflective Agency

ABSTRACT. One of the most powerful and controversial applications of machine learning (ML) has been to capture user behavioral data from retail websites, social media platforms, and other digital apps and to translate this data into behavioral predictions. This technology enables highly targeted consumer marketing and drives recommender systems, which take a user’s past digital behavior to make predictions about what else that same user is likely to desire, and thus consume, subsequently. I will use the blanket expression, ‘behavioral data ML systems,’ to describe such systems. The effect of such systems is, at times, truly remarkable, leaving the impression that somehow “the internet is reading my mind.”

In a sense, of course, it is. Insofar as our digital behavior (e.g., browsing websites, making online purchases, streaming music, providing GPS location) is a set of actions, and insofar as our actions express a set of desires, these ML systems are indeed discovering our desires — and in turn giving us more of what they predict we want. While many have argued that ML applications of this kind threaten user autonomy, my focus in this paper is elsewhere, not on autonomy exactly but rather on the relationship between behavioral data ML systems and some central features of reflective agency itself.

I argue that behavioral data ML systems have the power both to undermine and to support a user’s reflective agency. Though such systems can impede the expression of meaningful agency, they can also be tools that foster self-knowledge, even if this latter effect is a by-product of their design and intended purpose. Exploring this dynamic sheds light on an important and evolving feature of AI-human interaction, one whose study holds promise for improving our understanding of AI and ourselves alike.

The first claim I advance is that ML captures only a user’s effective first-order desires but that reflective agency is a function of the dynamic between one’s first-order desires and one’s higher-order desires. As such, these ML systems can confine users to a feedback loop informed by desires that these users may or may not endorse from a reflective perspective; to put the point dramatically, such systems may trap us in a prison of our own (first-order) desires, failing to acknowledge, let alone promote, the “aspirational” selves we may seek to realize. This feature of behavioral data ML systems thus undermines reflective agency.

The second claim I advance, however, takes this same feature of behavioral data ML systems and shows how it can foster a kind of self-knowledge that enhances reflective agency: by revealing our effective first-order desires, ML systems can bring to light mental states and motivations that may be opaque to mere introspection. By uncovering one’s “actual” self — and how it does or does not align with one’s “aspirational” self — these systems can foster both self-knowledge and the discovery of one’s core values, those beliefs or desires one can truly avow. This feature of behavioral data ML systems thus supports, or at least has the capacity to support, reflective agency.

11:00-12:30 Session 10: Medical Practice and Practitioners
Location: JO 4.614
11:00
Nurses’ Workplace Social Capital: The Ethical Bridge Between Technology and Delivery of Healthcare Services

ABSTRACT. The application of technology to the practice and delivery of medicine (e-medicine) has been widely accepted and is projected to become a primary pathway for the delivery of healthcare services. Healthcare policy analysts are proponents of e-medicine because of its potential to reduce costs and improve access to healthcare services. Despite the necessity of e-medicine, its potential negative side effects on workplace social capital should be addressed, with the objective of assessing means to develop effective strategies to reduce, if not eliminate, these side effects. The COVID-19 pandemic has been a natural experiment demonstrating the importance of daily social and psychological human interactions, particularly in service industries, including healthcare. The delivery of effective and efficient healthcare services and the best patient outcomes were facilitated by technology but relied on cooperative and constructive interactions among healthcare professionals from a spectrum of expertise. Among all healthcare professionals, nurses, because of the extensive time they spend interacting with patients, carried the heaviest load of responsibility for securing the best outcomes for patients. The value of nurses’ social capital to the efficiency and effectiveness of the nursing workforce was well demonstrated during the peak of the COVID-19 pandemic; workplace social capital mitigated psychological stress and fortified the professional identity of nursing professionals, particularly those on the front line.

Nurses have been ranked as the most trusted professionals. Trust is a core tenet of ethics in the delivery of healthcare services and a main attribute of nurses’ workplace social capital. Nurses’ workplace social capital has been defined as the relational network configured by respectful interactions among nurses and between nurses and other healthcare professionals. These interactions are characterized by the norms of trust, reciprocity, shared understanding, and social cohesion. This relational network contributes to creating a healthy workplace, which is fostered and fortified by effective communication, active group engagement, and supportive leadership. This definition underlines the five major characteristics of nurses’ workplace social capital: 1) Relational Network; 2) Trust; 3) Reciprocity; 4) Shared Understanding; and 5) Social Cohesion. The current emphasis on and overreliance on e-medicine can diminish the importance of nurses’ workplace social capital, which in turn can exert negative side effects on the value and importance of human interaction in the delivery of healthcare services. To illustrate our point, we use the notion of effective human communication, which relies 10% on verbal content and 90% on body language. Effective communication is an important antecedent for nurses’ workplace social capital to take root and flourish within a healthcare setting. Overreliance on and excessive emphasis on e-medicine create a barrier to nursing professionals’ ability to communicate effectively, not only with their patients but also among themselves. Trust is the antecedent of nurses’ social capital, and nurses’ social capital is the ethical bridge between technology and the delivery of healthcare services.

11:20
Against medical capitalism: The pitfalls of patient-centred care in palliative medicine

ABSTRACT. With the systematization of healthcare in the nineteenth and twentieth centuries into Foucault’s clinique, the privileged position of the medical professional—with all its intersecting implications relating not just to knowledge-as-capital but also to gender, race, class, and ability—cemented the western tradition of medical paternalism. In medical humanities, the consensus today is that this model, which ranks the doctor’s expertise above the wishes and understandings of the patient, is flawed and unjust. As a result, a new paradigm has emerged, attempting to place the autonomy and input of the patient on seemingly equal footing with the expertise of the medical professional. “Patient-centred care” is the new slogan of the helping professions, and, due to its seeming correction of the prior model, remains largely unexamined as the ideal, progressive, and obvious choice.

My paper examines palliative medicine and the western travails of medicalized dying as the functional limit of patient-centred care. While patient-centrism appears to reform medical paternalism—in, for instance, seminal work on end-of-life care, like Elisabeth Kübler-Ross’s On Death and Dying (1969)—I argue that, upon deeper consideration, it proves to be another finger on the same hand, both sculpting and sculpted by our current industrial-capitalist moment. The hierarchy of doctor over patient is not dismantled by patient-centred care, but rather, is given a new subject. Nowhere more than in palliative medicine does the individualist imperative of patient-centred care contribute to isolation, the corruption of familial and cultural continuity, and a kind of deforestation of the patient’s emotional landscape.

Calling on the work of Judy Z. Segal, Alan Bleakley, and Stephen Jenkinson, my paper explores the mythopoetic poverty of dying in a capitalist, patient-centred system and calls for the development of a decentralized, communal model. By shifting from individualism towards a more social medicine, particularly with respect to western medicine’s supposed goal—avoiding or postponing death—we might come to see medical care as a whole more critically and ethically.

14:00-15:30 Session 11: New Approaches to Values in Science
Location: JO 4.614
14:00
Developing Measures of Public Perceptions of Values in Science
PRESENTER: Dan Hicks

ABSTRACT. Philosophers working in science, values, and policy have sometimes made arguments that either depend on claims about people's views on values in science, or have implications for such views. For example, there's an ongoing debate about transparency and science communication. Many philosophers of science have argued that transparency about the role of values in science is key for warranting public trust in policy-relevant science (Jordan et al. 2011, Elliott and Resnik 2014, de Melo-Martín and Intemann 2018). But John (2017) has argued that, insofar as there is a mismatch between the value-free "folk philosophy of science" and the value-laden reality, transparency about values in science is likely to undermine, not bolster, trust in science. Kovaka (2019) hypothesizes that this kind of mismatch might contribute to climate skepticism, or even be intentionally exploited to manufacture doubt about climate change. These philosophical debates have important implications for risk assessment and other policy processes, but they depend on empirical questions about the effects of transparency on public trust in science. Elliott et al. (2017) used experimental methods to examine the effects of transparency disclosures by scientists, but their results were inconclusive.

For several decades, psychologists and education researchers have investigated public understanding of the nature of science. For example, Weisberg et al. (2020) — a collaboration involving both psychologists and philosophers of science — examined how understanding of the nature of science might play a role in the public controversies surrounding climate change and evolution. But, as far as we have been able to tell, empirical research on the nature of science has almost never investigated views on values in science.

We present a collaboration between a philosopher of science and a psychologist to develop a values in science scale [VISS], a psychological instrument to probe research participants' views on values in science. Items on the VISS were designed to correspond to the major theoretical views developed in the science, values, and policy literature, such as the argument from inductive risk, the aims approach, and standpoint theory. We believe the VISS will be useful for both philosophers and empirical scholars interested in the role of values in science. Our initial study to develop the VISS also incorporates a high-powered replication of Elliott et al. (2017).

As of the deadline for this proposal, we are in the middle of data analysis. In our presentation, we plan to focus on an exploratory factor analysis of the VISS items; discuss the results of the replication attempt for Elliott et al. (2017); and solicit suggestions for continued development of the scale.

References

Elliott, K. C. & Resnik, D. B. Science, Policy, and the Transparency of Values. Environmental Health Perspectives (2014) doi:10.1289/ehp.1408107.

Elliott, K. C., McCright, A. M., Allen, S. & Dietz, T. Values in environmental research: Citizens’ views of scientists who acknowledge values. PLOS ONE 12, e0186049 (2017).

John, S. Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty. Social Epistemology 32, 75–87 (2017).

Kovaka, K. Climate change denial and beliefs about science. Synthese (2019) doi:10.1007/s11229-019-02210-z.

de Melo-Martín, I. & Intemann, K. The Fight Against Doubt: How to Bridge the Gap Between Scientists and the Public. (Oxford University Press, 2018).

Jordan, C., Gust, S. & Scheman, N. The Trustworthiness of Research: The Paradigm of Community-Based Research. in Shifting Ground: Knowledge and Reality, Transgression and Trustworthiness 170–190 (Oxford University Press, 2011).

Weisberg, D. S., Landrum, A. R., Hamilton, J. & Weisberg, M. Knowledge about the nature of science increases public acceptance of science regardless of identity factors. Public Underst Sci 0963662520977700 (2020) doi:10.1177/0963662520977700.

14:20
The Nature of Values in Science: What They Are and How They Guide

ABSTRACT. Much philosophical talk of values in science has been underwritten by a picture of values as intrapsychic causative agents. On this view, values are psychological attributes of inquirers: they are what psychologists call ‘personal values’. While empirical research shows that self-reported personal values are moderately predictive of attitudes and behaviors, their pathways of influence remain poorly understood. What is known suggests that the degree and character of a value’s influence on action is finely sensitive to situational factors and other personal attributes such as social norms, personality traits, and an agent’s value-laden characterization of their behavior (Rohan 2000, Jiga-Boy et al 2016, Sagiv and Roccas 2017, Sagiv et al 2017). Personal values guide inquiry only in complex interaction with other “decision vectors” (Solomon 2001).

This empirical work motivates the following conclusions about values in science, conceived as personal values. Philosophers may have overestimated the degree to which scientists can choose to act on their values, including to form beliefs that reflect them. It has been argued that where value-laden beliefs are formed unreflectively, they do not reflect scientists’ personal values (Magnus 2022). Here I argue that, even where a belief reflects a scientist’s personal values, we should not infer that she steered her belief to reflect her values. Personal values causally influence belief formation only in complex interaction with factors often not up to us. Philosophers may have also overestimated the degree to which a scientist can retrospectively know that a personal value influenced her action. She can’t infer that her value was causally responsible for her choice just because that value and choice aligned, for experiments show that conflicting personal values are sometimes associated with the same behavior; the causative agent is underdetermined by the behavioral outcome.

In light of these conclusions, I argue for a transformation in our values talk. Philosophers of science have been confused, not helped, by imagining values as intrapsychic causative agents. Rather than conceive of values as personal values, we ought to conceive of them as pursuitworthy goals that range in contexts of pursuit (Kitcher 2011). Values ‘guide’ scientific practice, not in the sense that psychological features like traits and cognitive biases guide action: by causally influencing scientists to behave one way or another. They guide science in the sense that scientists’ reflective or unreflective pursuit of certain goals best explains their pattern of attitudes and behaviors over time.

If we conceive of values as goals, certain research questions are precisified. ‘Do feminist values belong in science?’ concerns which goals scientists should pursue, as opposed to what psychological attributes scientists should have. Following Kitcher’s (2011) taxonomy of value schemes, we might distinguish between probative, cognitive, and broad feminist values. Probative feminist values are what have been called feminist theoretical virtues, such as ontological heterogeneity and complexity of interaction (Longino 1995, 1996). Cognitive feminist values are kinds of knowledge feminists have enjoined science to produce, such as knowledge that reveals gender (Longino 1995), illuminates women’s health (Rosser 1994), or is socially responsible (Kourany 2010). Broad feminist values are political goals of feminist movement, such as reducing global inequality and abolishing cisheterosexist oppression. ‘Should feminist values influence science?’ is reconfigured as the political question, ‘should scientific inquiry have feminist goals?’

14:40
Vital dialogues: Georges Canguilhem on the science-politics relation

ABSTRACT. One of the indirect effects of the Coronavirus pandemic is the central role that the relationship between biomedical sciences and politics has assumed in academic and public debates. In the last two years, we have seen people accusing politicians of deliberately ignoring epidemiological evidence and guidelines, whereas others have been questioning how far politics should follow science’s directives to the detriment of other kinds of considerations. I claim that a fruitful, but overlooked, method to ethically address how this relationship affects our lives can be found in the epistemological history of science of Georges Canguilhem (1904-1995), the French historian and philosopher of science.

In my talk, I will first present Canguilhem’s definition of life as a normative activity – namely, as a constant creation of norms through which the organism evaluates and acts upon the surrounding environment – and the resulting conception of science as one of the many forms this normative activity assumes in the human species (alongside politics, industry, technique, morals, etc.). One of the ways humans can relate to their milieu consists in methodically reducing reality to quantifiable features and linking them through necessary laws. However, in contrast to the other activities, science is normed by the value of truth, which means that its distinctive feature is constantly to put its results into question and to rectify its achievements.

Having set this ground, I will show how this philosophy can help address the conference’s main topic: the reciprocal influences between values and science. First of all, in Canguilhem’s view, science cannot but be in dialogue with the other ways in which humans relate to their milieu because of their common root in the normative activity that life is. This means that it is legitimate for science to adopt problems from other domains and attempt to solve them by formulating new scientific concepts. However, Canguilhem also recognises that this dialogue can take ‘illegitimate’ forms: sometimes, the formation of scientific concepts can be influenced by political ideologies (e.g. 19th century cellular theory); other times, scientific concepts or theories can be adopted to legitimise socio-political or economic projects (e.g. social Darwinism).

Canguilhem explicitly formulates and employs an epistemological-historical method to identify the overdetermination of extra-scientific values and ideologies over scientific concepts in his history of biomedical sciences. However, I claim that his philosophy of life implicitly provides this method with a criterion to ethically judge the relations between science and politics. The crucial condition for this judgment, I claim, is the way in which they affect individual and collective normative potential – namely, the capacity to create new and better norms to relate to the milieu. I will then conclude by suggesting how this criterion can offer a more solid standpoint than the concepts of freedom or safety (both overly susceptible to different interpretations) from which to evaluate some of the most common responses to the pandemic.

16:15-17:15 Session 12: Director's Address
Location: JO 4.614
16:15
Values in Science: Past, Present, and Future

ABSTRACT. In this talk, I seek to give a high-level view of where we've been, where we're at, and where we might be going in the field of values in science. First, I will tell the story of four traditions within philosophy of science for thinking about values in science: the pragmatist and Marxist traditions of the early/mid twentieth century, the feminist tradition arising in the 1970s, and a tradition focused on policy and risk assessment beginning from the 1990s. Each of these traditions begins from a different intellectual background or philosophical orientation, and each responds to different though overlapping concerns. These different traditions lead to different approaches to values in science that sometimes complement and sometimes conflict with one another. 

Next, I discuss and critically assess what I take to be some of the most important recent developments in the field. I will wrap up with my take on some important directions the field could go in the future. 

17:30-19:00 Conference Dinner

This is a ticketed event