
08:30-10:00 Session 2A: Symposium
Location: Macmillan 117
Sampling and Decision Making: How People Interact to Learn and Profit in Stochastic Multi-Stage Environments

ABSTRACT. Most decisions are made without complete information, where the information we have is often noisy and acquired at a cost—either through direct experience or information search—and our decisions today influence our predicament tomorrow. Whereas much decision-making research eschews these complexities, the projects in this session embrace them. Each is about how people interact with their environments by sampling outcomes and experiencing results, and how people make explore-exploit tradeoffs. These projects address when and why people act in accordance with normative models by characterizing the psychology and contextual properties of the environment that give rise to this sampling and choice behavior. Short abstracts appear below.

Nicholas Reinholtz (Colorado), Daniel M. Bartels (Chicago), Oded Netzer (Columbia), and Jonathan Levav (Stanford)
Title: Variance Neglect in Sequential Price Search Stopping Decisions
Abstract: Optimal stopping in price search requires sampling from a distribution of prices until the cost of taking another sample exceeds its expected return. Expected return depends on the variance of the price distribution, but people are insensitive to observed variance and instead rely on price magnitude when deciding to stop.

Bradley C. Love and Peter S. Riefer (UCL)
Title: Coherency Maximizing Exploration in the Supermarket
Abstract: Effective decision makers balance exploiting options that are currently preferred against exploring alternative options that may prove superior. Laboratory studies with objective rewards find that people are optimal and minimize uncertainty when exploring. In contrast, with subjective rewards, we find that supermarket consumers maximize coherence, such that preferences follow choices.

Gaël Le Mens (Universitat Pompeu Fabra), Jerker Denrell (Warwick), and Balázs Kovács (Yale)
Title: Information Sampling and the Environment: Application to the Effect of Popularity on Evaluations
Abstract: After a poor experience, people are more likely to sample again a popular alternative than an unpopular one. Such sampling behavior is enough to produce an evaluative advantage for popular alternatives. In an analysis of more than 100,000 restaurant ratings by more than 26,000 people, we find support for our sampling explanation. The influence of popularity on sampling behavior explains a sizable portion of the association between popularity and quality estimates.

Daniel J. Navarro and Ben R. Newell (University of New South Wales)
Title: Skillful Bandits: Learning to Choose in a Reactive World
Abstract: Some skills are quick to learn but offer only small rewards; others take longer to master but the benefits can be substantial. We examine a class of bandit problems with these characteristics and ask whether people and simple models can learn to hone their skills optimally.

Eric Schulz and Maarten Speekenbrink (UCL)
Title: Learning and Decision Making in Contextual Multi-Armed Bandits
Abstract: Contextual multi-armed bandit tasks confront participants with options described by a number of features that modulate an option's expected reward. We find that participants in such tasks are best described by sampling-based Gaussian Process models. This suggests that their behaviour combines powerful inference with well-adapted sampling strategies.

Todd M. Gureckis and Alexander S. Rich (NYU)
Title: Exploratory choice reflects the future value of information
Abstract: Consideration of the future value of information is an important guide to effective exploratory choice. We show that people use information about the future number or frequency of encounters with a prospect to parametrically modulate their exploration, and discuss implications for individual and societal patterns of choice.

08:30-10:00 Session 2B: Symposium
Location: Friedman Auditorium
New paradigm, probabilities, pragmatics, and dual processing: Festschrift in honour of David Over’s 70th birthday
SPEAKER: Shira Elqayam

ABSTRACT. David Over’s contribution to the study of the psychology of reasoning spans more than a quarter of a century and a significant number of game-changing works, many of which concern the new paradigm in the psychology of human reasoning. David’s signature mix of philosophical logic and experimental psychology has inspired generations of researchers, psychologists and philosophers alike. The new paradigm psychology of reasoning (as David dubbed it; Over, 2009) reflects a Kuhnian paradigm shift in our understanding of reasoning. Where the traditional paradigm focused on binary truth values as conceptualised in classical logic, the new paradigm sees reasoning as a type of decision making, involving probabilistic treatment of beliefs, the effects of utility and social pragmatics, as well as dual process theories of higher mental processing. From early on, David’s landmark contributions have helped inspire and shape the development of the new paradigm. David was one of the first (Manktelow & Over, 1987, 1991) to point out the decision-theoretic nature of reasoning, especially in deontic contexts. In his work with Jonathan Evans, David developed an influential dual conception of rationality, linked to a dual processing account of thinking and decision making (Evans & Over, 1996). Their theory of the suppositional conditional (Evans & Over, 2004; Over et al., 2007) provided another landmark in the history of the new paradigm.
In recent years David has continued to lead and support work at the frontline of the new paradigm, focusing on such diverse topics as many-valued probabilistic truth tables (Baratgin, Over, & Politzer, 2013); iterated conditionals (van Wijnbergen-Huitink, Elqayam, & Over, 2014); scope ambiguities (Over, Douven, & Verbrugge, 2013); belief updating in induction (Hadjichristidis, Sloman, & Over, 2014); probabilistic conditional inference (Singmann, Klauer, & Over, 2014); and the difference between probabilistic deduction and induction (Evans & Over, 2013). In this symposium we explore the mainstream as well as the boundaries of the new paradigm psychology of reasoning, in celebration of David’s 70th birthday, which will occur in August 2016. Topics to be explored include probabilistic reasoning, belief updating, dual processing, and utilitarian and pragmatic effects in reasoning – the very topics which are at the heart of the new paradigm and which David helped establish.

Speakers and titles:

Session 1: David Over: The psychology of reasoning from ICT1988 to ICT2016: The development of the new paradigm

Mike Oaksford: Causal Bayes nets and the effects of response format on causal conditional reasoning

Nick Chater: How can rationality possibly increase?

Nicole Cruz: Deduction from uncertain premises

Henrik Singmann, Igor Douven, Shira Elqayam, David Over, & Janneke van Wijnbergen-Huitink: Conditionals and Inferential Connections: A Hypothetical Inferential Theory

Session 2:

Valerie Thompson, Dries Trippas & Simon Handley: When fast logic meets slow beliefs

Jean-Francois Bonnefon: Reasoning about preferences

Frank Jamet, Jean Baratgin, & Guy Politzer: Children’s interpretation of conditional requests

Masasi Hattori & Ikuko Hattori: Dual frames in causal reasoning and other types of thinking: Speed, accuracy, and frames in reasoning

Steven Sloman: Discussion. 

08:30-10:00 Session 2C: Symposium
Location: Watson CIT 165
Balancing the scales: Mechanisms producing and sustaining fairness

ABSTRACT. The ability to understand fairness and act fairly towards others is critical to our existence as a social species. To this end, nearly all human cultures develop some mode of fairness; even human infants are sensitive to “fair” vs. “unfair” distributions; and concepts of merit, equity, and fairness are ubiquitous in human societies. Yet, recent work has uncovered a rather slow emergence of fairness during the lifespan – while children recognize fairness early in life, they do not always behave or act fairly until middle childhood. What explains the emergence of fairness, and on which cognitive, social, cultural, evolutionary, and situational factors does it depend? How do children learn and acquire the distinct modes of fairness (e.g., equality, merit-based fairness) relevant to their cultural groups? In our symposium, we attempt to answer these questions by presenting a series of talks exploring the diverse underlying mechanisms of fairness.

In particular, we demonstrate five distinct and novel mechanisms that contribute to the emergence of fairness. Paper 1 begins by providing evidence for deep evolutionary roots of fairness, demonstrating punitive inequity aversion in a distant primate relative. Paper 2 shows that fairness relies on a critical cognitive achievement: numerical cognition. In particular, this paper shows that the acquisition of cardinality explains age-related changes in children’s ability to share equally as well as the resource distribution strategies that children use. Paper 3 finds that in a culture quite distinct from ours, with relatively little formal schooling, numerical cognition predicts merit-based equality. Together, Papers 2 and 3 suggest that children’s cognitive abilities (numerical cognition) predict their ability to acquire the mode of fairness most relevant to their cultural group. Paper 4 then focuses on how direct testimony about norms and consequences shapes children’s equality-based and merit-based distributions, thus indicating a crucial role of social input. Finally, Paper 5 focuses on the role of contextual and situational factors by showing how the prior generosity of others precipitates fair treatment of new individuals. This symposium spans multiple species (monkeys and humans), covers a large range of ages (2.5-7), addresses a wide array of mechanisms both endogenous and exogenous to the child (evolutionary, cognitive, cultural, social, and experiential), and encompasses diverse forms of fairness (equality, merit, and reciprocation). Together, these talks paint a broad picture of how different mechanisms may interact to produce and sustain fairness concepts and behaviors during early development.

Paper 1:

Title: The Origins of Spite? Capuchin monkeys (Cebus apella) punish conspecifics who have access to a monopolizable resource

Authors: Kristin L. Leimgruber, Alexandra G. Rosati, & Laurie R. Santos

Paper 2:

Title: Numerical cognition explains age-related changes in fair sharing

Authors: Nadia Chernyak, Beth Sandham, Paul L. Harris, Sara Cordes

Paper 3:

Title: Native Amazonian children forego egalitarianism in merit-based tasks when they learn to count

Authors: Julian Jara-Ettinger, Ted Gibson, Celeste Kidd, Steve Piantadosi

Paper 4:

Title: The influence of storybooks and direct testimony on children’s beliefs about distributive justice

Authors: Joshua Rottman, Liane Young, & Deborah Kelemen

Paper 5:

Title: Children’s fairness in social exchange and the value of information

Authors: Samuel Ronfard, Laura J. Nelson, Yarrow Dunham, & Peter Blake

08:30-10:00 Session 2D: Symposium
Location: Barus and Holley 159
Diagnostic reasoning with causal models

ABSTRACT. Organizers: Agnes Scholz (University of Zurich) & Felix G. Rebitschek (Max Planck Institute for Human Development Berlin)

Diagnostic reasoning from effect(s) to cause(s) is a central topic in cognitive science and an important issue in applied contexts, such as medical diagnosis. The goal of this symposium is to highlight current key issues in research on diagnostic causal reasoning. One challenge in diagnostic reasoning with multiple effects (e.g., symptoms) is that the presentation order may affect diagnostic judgments. The underlying cognitive processes can be traced non-intrusively by tracking a diagnostic reasoner’s eye movements. In the looking-at-nothing paradigm, eye-tracking data reveal the retrieval of observations and candidate hypotheses from memory. By studying sequences in which more than one hypothesis was equally supported by the observed symptoms, gaze data reflected the subjective status of hypotheses held in memory and revealed instances of hypothesis change and biases in symptom processing (Scholz: Eye movements reveal symptom processing during diagnostic reasoning). Extending this method to a second experiment, the activating retrieval of diagnostic hypotheses was also demonstrated in a visual reasoning task (Klichowicz: Current explanations in memory: An eye-tracking study). An implication of this research for clinical reasoning is that the sequential integration of evidence can bias diagnostic judgments towards hypotheses supported by initially observed evidence. An intervention for preventing potential diagnostic errors due to such order effects was tested in a clinical setting. Providing hypotheses externally during reasoning reduced diagnostic errors and might improve diagnostic practice in the future (Kostopoulou: Decision support to influence physicians’ first diagnostic impressions). Another key issue concerns the interplay between causal structure learning and diagnostic reasoning.
The structure induction model of diagnostic inference takes into account alternative causal models that may underlie the data (Meder: Structure induction in diagnostic causal reasoning). Normatively, consideration of alternative generating structures can lead to very different inference patterns than a purely statistical approach. Empirically, subjects’ diagnostic inferences are sensitive to alternative structures when reasoning from effect to cause. Consideration of causal information was examined further in the context of diversity effects, according to which more distantly located pieces of evidence in given causal structures enable stronger diagnostic inferences (Rebitschek: Rational diversity effects). A formal analysis using causal Bayes nets reveals rational diversity effects across different structures, base rates, and causal strengths. These findings show that empirically-observed diversity effects have a normative basis and provide pathways for investigating alternative inference strategies that can give rise to diversity effects in human diagnostic reasoning.

10:00-10:20 Coffee Break
10:20-11:50 Session 3A: Symposium
Location: Macmillan 117
The practical, theoretical, and ethical issues behind Nudge
SPEAKER: Magda Osman

ABSTRACT. Behaviorally informed policy making has received substantial interest in recent years, sparked by Thaler and Sunstein’s (2008) Nudge book and subsequent research. The idea of the book is to explicate the ways in which policy makers, economists, and scientists (choice architects) characterize decision-making problems (choice architecture), in order to find simple solutions to steer behavior in a particular direction. The goal is to help people make better decisions at an individual level and/or at a population level, without imposing bans or changing the incentive structure of the choice architecture.

There is now a rather intense research initiative examining how successful nudges are as a means of aligning motivations that ought to be shared by a population (e.g., live healthily, act in financially prudent ways, perform more civic duties) with actual behaviors (e.g., smoke less, save more, donate one’s organs); in other words, establishing the extent to which nudges lead to effective behavioral change. There is no denying that providing a solid evidence base to establish the reliability, sustainability, and generalizability of nudges is essential, because they reach into virtually all aspects of people’s lives. However, there still seem to be significant gaps in knowledge about nudges, and researchers have yet to ground the amassing evidence base in theory, which is still lacking.

For instance, imagine a policy maker is faced with a practical question such as: “What environment should the choice architect take into account when designing defaults?” This cannot be answered without an understanding of the underlying behavior on which defaults operate, as well as an understanding of the relationship between the choice environment and behavior, and an understanding of the mechanism by which defaults (or any other type of nudge) change mental and physical behaviors (in the short and long term). Each of these in turn raises ethical considerations. A coherent theory of nudge still requires a well-developed set of responses to ethical issues regarding what behaviors ought to be changed, and the ethical issues regarding the psychological means of changing behaviors.

The aim of this symposium, which is comprised of expert panelists from Behavioral Economics, Philosophy, and Psychology, is to discuss work that directly speaks to these issues, and to present ways of advancing our understanding of Nudge.

10:20-11:50 Session 3B: Symposium
Location: Friedman Auditorium
New paradigm, probabilities, pragmatics, and dual processing: Festschrift in honour of David Over’s 70th birthday (continued from Session 2B)
SPEAKER: Shira Elqayam


10:20-11:50 Session 3C: Symposium
Location: Watson CIT 165
Models of Causal Reasoning

ABSTRACT. Causal reasoning is a central cognitive competency, enabling us to adapt to our world. Yet causal reasoning was curiously absent from mainstream cognitive psychology until recently. The situation has slowly changed in the past two decades, leading to a number of competing theories of causal cognition. The goal of the symposium is to present recent research demonstrating the progress in this research area:

1. Michael Waldmann: Hybrid causal representations

The goal of this talk is to defend a hybrid representation account of causal reasoning. Despite the beauty of a parsimonious unitary account, there is little reason to assume that people are restricted to one type of representation of causal scenarios. In several empirical case studies, it will be demonstrated how dependency, dispositional, and process representations mutually interact in generating complex representations driving causal inferences.

2. Joshua Tenenbaum: Causal reasoning in intuitive theories: Simulating actual and counterfactual worlds using probabilistic programs 

Probabilistic programs provide a substrate for explaining how reasoners simulate possible outcomes, including actual and counterfactual possibilities, and estimate probability distributions to assess counterfactual responsibility.  The talk will present experiments on how people make judgments about causal responsibility in core domains, such as intuitive physics and intuitive psychology.

3. Bob Rehder: Beyond Markov: The Beta-Q model of causal reasoning

A defining property of causal graphical models—the independence relations entailed by the Markov condition—is routinely violated by reasoners. Three accounts of Markov violations are tested. Subjects’ inferences were consistent with a model that stipulates that humans interpret causal networks as implying non-normative patterns of co-occurrence among variables.
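The Markov condition referred to here can be made concrete with a minimal sketch (the chain structure and all parameter values below are invented for illustration, not taken from the talk): in a causal chain A → B → C, the Markov condition entails that once B is known, C is independent of A.

```python
# Hypothetical chain network A -> B -> C with made-up parameters.
p_a = 0.3                  # P(A=1)
p_b = {0: 0.2, 1: 0.9}     # P(B=1 | A=a)
p_c = {0: 0.1, 1: 0.8}     # P(C=1 | B=b)

def joint(a, b, c):
    """Joint probability factorized along the chain A -> B -> C."""
    pa = p_a if a else 1 - p_a
    pb = p_b[a] if b else 1 - p_b[a]
    pc = p_c[b] if c else 1 - p_c[b]
    return pa * pb * pc

def p_c1_given(b, a):
    """P(C=1 | B=b, A=a), computed from the joint distribution."""
    return joint(a, b, 1) / sum(joint(a, b, c) for c in (0, 1))

# Markov condition: conditioning on B screens C off from A,
# so the value of A makes no difference.
for b in (0, 1):
    assert abs(p_c1_given(b, a=0) - p_c1_given(b, a=1)) < 1e-12
```

Reasoners who violate the Markov condition treat A as still informative about C even after B is known, which is the non-normative pattern the Beta-Q model is designed to capture.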

4. Patricia Cheng: Why and how causal invariance as an aspiration shapes our causal representation of the world  

The talk will explain and illustrate the three roles of causal invariance — aspiration, criterion for hypothesis revision, and default assumption — in intuitive and scientific reasoning, with aspiration driving the other two. These roles are essential to constructing a stable causal representation that is generalizable from the learning context to an application context.

5. Phillip Wolff: When causal perception defies reason

People can hold conflicting judgments about the existence of a causal relationship. Such situations suggest that the identification of causation must involve more than one process: an automatic, intuitive one and a more logical, deliberate one. The talk will present experiments that point to the emergence of these two kinds of processes over the time-course of identifying causation.

10:20-11:50 Session 3D: Talks
Location: Barus and Holley 159
Real-World Correlates of Performance on Heuristics and Biases Tasks in a Community Sample
SPEAKER: Maggie Toplak

ABSTRACT. The systematic errors that people make in choosing actions and in estimating probabilities have been amply demonstrated in the heuristics and biases literature. Nevertheless, this literature is often criticized for its emphasis on laboratory tasks, and some have questioned whether the tasks relate to real-world behavior. In the current study, we examined whether performance on several heuristics and biases tasks (as well as some important thinking dispositions) was associated with real-life outcomes in a community sample of adults. We examined performance on five heuristics and biases tasks (ratio bias, belief bias in syllogistic reasoning, cognitive reflection, probabilistic and statistical reasoning, and rational temporal discounting), three thinking dispositions (actively open-minded thinking, future orientation, and superstitious thinking), and a questionnaire assessing real-world correlates in several domains (substance use, driving behavior, financial behavior, gambling behavior, electronic media use, and secure computing). Our tasks were modestly associated with real-world outcomes. That is, better performance on the heuristics and biases measures was associated with fewer negative outcomes. We found that the associations were generally higher in males (who displayed more risk behavior) than in females. Heuristics and biases performance and thinking dispositions were unique predictors of real-world outcomes after statistically controlling for educational attainment and sex differences.

Cross-scale numerical anchoring
SPEAKER: Adam Harris

ABSTRACT. Anchoring effects are robust, varied and can be consequential. Researchers have provided a variety of alternative explanations for these effects. More recently, it has become apparent that anchoring effects might be produced by a variety of different processes, either acting simultaneously, or else individually in distinct situations. An unresolved issue is the degree to which anchoring can transcend scales (i.e., is it necessary that the anchor value and the target judgment are expressed in the same scale units?) and the necessary pre-conditions for this to occur. Despite some theoretical predictions to the contrary, cross-scale anchoring is observed in three experiments. Such effects are important for the direction of future theorising on the causes of anchoring effects.

Exposure to Random Anchors Improves Judgments
SPEAKER: Jason Dana

ABSTRACT. Since random anchors are, by definition, unrelated to the judgments they are shown to affect, it is widely assumed that such anchors could only reduce judgmental accuracy. However, we find that respondents exposed to random anchors are actually more accurate than an unanchored control group. Moreover, theoretical analyses demonstrate that accuracy within the anchored group could be improved still further if respondents gave the random anchors even more weight. Accordingly, so-called “debiasing” procedures (e.g., highlighting the randomness of the anchor) can often backfire.

Most people are normative some of the time: Mixtures of combination rules are used in estimates of conjunctions and disjunctions
SPEAKER: James Tripp

ABSTRACT. Human estimates of the probabilities of combinations of events show well-established violations of probability theory, most notably the conjunction and disjunction fallacies. These violations have led researchers to conclude that the rules of probability are too complex for most people to use, and that cognitively easier approximations such as averaging are used instead. However, previous work has either looked at data averaged over participants or has assumed that individuals use only a single combination rule. We collected repeated estimates of conjunctions and disjunctions and investigated, using a trial-by-trial Bayesian analysis, whether individuals consistently used a single rule or drew on a repertoire of rules. We found that most participants were best described as randomly selecting a combination rule on each trial, and that a large majority of participants used the correct rule at least some of the time.
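The averaging approximation mentioned here can be made concrete with a small sketch (the two constituent probabilities are hypothetical values chosen for illustration): averaging the constituents yields a conjunction estimate that exceeds one of the constituents, which is exactly the conjunction fallacy, whereas the normative product rule can never do so.

```python
# Hypothetical constituent probability judgments for events A and B.
p_a, p_b = 0.8, 0.4

normative_conj = p_a * p_b        # product rule: P(A and B)
averaging_conj = (p_a + p_b) / 2  # averaging heuristic

# The averaging estimate commits the conjunction fallacy:
# it rates the conjunction as more probable than constituent B.
assert averaging_conj > min(p_a, p_b)

# The product rule cannot commit the fallacy, since
# p_a * p_b <= min(p_a, p_b) whenever both are in [0, 1].
assert normative_conj <= min(p_a, p_b)
```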

Judging forecasting accuracy: How human intuitions meet theoretical models
SPEAKER: Katya Tentori

ABSTRACT. Most of the scoring rules that have been discussed and defended in the literature are not ordinally equivalent, with the consequence that, after the very same outcome has materialized, a forecast X can be evaluated as more accurate than Y according to one model but less accurate according to another. A question that naturally arises is therefore which of these models better captures people’s intuitive assessment of forecasting accuracy. To answer this question, we developed a new experimental paradigm for eliciting ordinal judgments of accuracy concerning pairs of forecasts for which various combinations of associations/dissociations between the Quadratic, Logarithmic, and Spherical scoring rules are obtained. We found that, overall, the Logarithmic model is the best predictor of people’s accuracy judgments, but also that there are cases in which these judgments – although they are normatively sound – systematically depart from what is expected by all the models. These results represent an empirical evaluation of the descriptive adequacy of the three most popular scoring rules and offer insights for the development of new formal models that might favour a more natural elicitation of truthful and informative beliefs from human forecasters.
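The ordinal non-equivalence of scoring rules can be made concrete with a small sketch (the score definitions are the standard textbook forms of the three rules; the two forecasts are hypothetical numbers, not taken from the talk): after outcome 0 materializes, forecast X is scored as more accurate than Y by the Quadratic and Spherical rules but less accurate by the Logarithmic rule.

```python
import math

# Hypothetical three-outcome forecasts; outcome index 0 is realized.
X = [0.40, 0.30, 0.30]
Y = [0.45, 0.55, 0.00]

def quadratic(p, k):
    """Quadratic (Brier-type) score; higher is better."""
    return 2 * p[k] - sum(q * q for q in p)

def logarithmic(p, k):
    """Logarithmic score: log probability of the realized outcome."""
    return math.log(p[k])

def spherical(p, k):
    """Spherical score: realized probability over the vector norm."""
    return p[k] / math.sqrt(sum(q * q for q in p))

# Dissociation: the rules disagree about which forecast was better.
assert quadratic(X, 0) > quadratic(Y, 0)
assert spherical(X, 0) > spherical(Y, 0)
assert logarithmic(X, 0) < logarithmic(Y, 0)
```

Cases like this are what make it an empirical question which rule best matches people's intuitive accuracy judgments.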

11:50-13:00 Session 4: Keynote - Location: Macmillan 117
People Prefer Educative Nudges (Kind of)
SPEAKER: Cass Sunstein

ABSTRACT. In the United States, the United Kingdom, Australia, and many other nations, those involved in law and policy have been exploring choice-preserving approaches, or “nudges,” informed by behavioral science and with the purpose of promoting important public policy goals, such as improved health and safety. But there is a large and insufficiently explored difference between noneducative nudges, which target or benefit from automatic processing, and educative nudges, which target or benefit from deliberative processing. Graphic warnings and default rules are noneducative nudges; statistical information and factual disclosures are educative nudges. On philosophical grounds, it might seem tempting to prefer educative nudges, on the assumption that they show greater respect for individual dignity and promote individual agency. A nationally representative survey in the United States finds evidence that in important contexts, majorities do indeed prefer educative nudges. At the same time, that preference is not fixed and firm. If people are asked to assume that the noneducative nudge is significantly more effective, then large numbers of them will move in its direction. In a range of contexts, Republicans, Democrats, and independents show surprisingly similar responses. The survey findings, and an accompanying normative analysis, offer lessons for those involved in law and policy who are choosing between noneducative nudges and educative nudges.

13:00-14:00 Lunch Break - Kasper Multipurpose Room
13:00-14:00 Session 5: Poster Session
Does ambiguity drive the coevolution of language and thinking?

ABSTRACT. The evolution of thought and language are closely intertwined. Yet, the exact relation between them remains unclear. Does thought determine language, or does language limit what can be thought? Philosophers like René Descartes suggested that language is the outward expression of (rational) thought. Linguist Noam Chomsky incorporated this Cartesian idea into his linguistic theories. According to him, language is an innate capacity that allows us to engage in inner dialogue, to express or to clarify our thoughts. Spoken language (externalization) and communication with others are secondary. Chomsky’s proposals have been challenged for many reasons. I will focus here on the role of ambiguity in language. If the main function of language were the expression of one’s own thought, one should not expect to find any ambiguity in language. The speaker already knows what she wants to express before putting her thoughts into language. Yet, language is full of ambiguity. This fact will be explained in the context of the communicative function of language, which arose embedded in cultural contexts. Both the use of ambiguity and the ability to disambiguate convey selective advantages that will be explored.

Semantic conflict helps to reject distracting information in the web feed search task
SPEAKER: Tomasz Smolen

ABSTRACT. Our study tested a hypothesis, based on cognitive dissonance theory, that not only stimulus and response conflicts, as studied to date within the influential conflict monitoring theory of human self-control, but also semantic conflicts between cognitive representations help us to cope with interference and distraction during problem solving. We applied a realistic task (a simulated web feed) that required searching for information needed to solve a given problem among task-irrelevant distractors. We manipulated the amount of semantic conflict within the task-relevant information (via contradictory text messages) and varied the amount of to-be-ignored but tempting distraction (jokes and erotic pictures). The experiment demonstrated that semantic conflict helped participants to ignore distraction and to focus on the task. Specifically, the substantial effect of increased distraction (i.e., worse detection of task-relevant messages) virtually disappeared when the conflict was present. The results validate and extend the conflict monitoring theory by implying that our minds can detect semantic conflicts and use them to focus on the task. Semantic conflicts seem to act as a red alert, signaling to our minds that increased control over interference and distraction is necessary.

One factor explains paranormal, pseudoscientific, conspiracist, dualistic, and religious beliefs

ABSTRACT. Different kinds of beliefs, such as paranormal, pseudoscientific, and conspiracist beliefs, are epistemically questionable: they all assume the existence of supernatural phenomena at odds with scientific knowledge. Religions, on the other hand, also pertain to supernatural powers, but they reject paranormal beliefs and accept most claims of modern science. For example, for Catholics, whom we studied, magical rituals are heresy, and the pro-science encyclicals are dogmatic. Surprisingly, some studies have reported moderate positive correlations between selected paranormal/pseudoscientific beliefs and religious beliefs, but no study has examined them comprehensively. Using confirmatory factor analysis, we surveyed 285 Polish participants on a wide range of pseudoscientific, paranormal, and conspiracist beliefs versus dualistic and religious beliefs. A model assuming two separate factors showed an extremely strong factor correlation of .93; consequently, a one-factor model fitted the data perfectly. This suggests one general human mindset for epistemically questionable beliefs, which, among other things, determines the strength of religious beliefs. Moreover, we examined the relationship of this mindset factor with a reasoning ability factor, measured by three fluid intelligence tests (matrix reasoning, figural analogies, and paper folding), and found that more intelligent people were less prone to hold epistemically questionable beliefs (r = -.45).

Reasoning ability strongly depends on the cross-frequency coupling of neuronal oscillations

ABSTRACT. Several existing computational models predict that individual reasoning ability strongly depends on the cross-frequency coupling of neuronal oscillations. However, empirical evidence supporting these predictions is still scarce. Using EEG data from 21 participants solving a computerized variant of the Raven test, which requires inductive reasoning on matrix problems, we found that the coupling between the theta (4-8 Hz) and gamma (30-80 Hz) frequency bands predicted the test’s total score. Specifically, using a novel clustering method, for each individual we identified the characteristic theta and gamma frequencies that yielded the strongest coupling (i.e., the strongest modulation of the gamma wave amplitude by the theta wave amplitude). This coupling indicates a long-range coordination of neuronal processing, probably consisting of the binding of several representations (e.g., items on a list, or attributes of a relation) into one coherent mental structure. We found that the gamma-to-theta frequency ratio correlated substantially and positively (r = .60) with reasoning performance. This finding suggests that better reasoners can bind more items/attributes together, and thus can process more complex problems/relations. Surprisingly, the coupling strength itself was negatively correlated with reasoning, meaning that the binding of more complex relations was less stable than the binding of simpler relations.

Thinking and Decision Making Regulation

ABSTRACT. In the 1980s, the theory of thinking regulation was developed at Moscow State University. This theory assumed that every thought is deeply rooted in the thinker’s personality attributes. Empirical studies conducted by Oleg Tikhomirov and his students showed that emotional activation and type of motivation affected the process of thinking in terms of internal goal formation during problem solving. Later, in the 1990s, the same theory was applied to the process of decision making. Studies conducted by Tatiana Kornilova revealed the complexity and multifactorial regulation of a person’s choice. The main idea of these studies was that thinking is the leading process that defines other levels of personality regulation in decision making. Nowadays we argue that cognitive processes should be considered the central core of this regulation. We face some difficulties in discussing the problem of personal choice with colleagues who represent the personality approach in studies of decision making, and we are concerned that thinking tends to be excluded from the internal mechanism of choice. Thus our talk will be devoted to a description of Tikhomirov’s theory of thinking regulation, Kornilova’s approach to decision making, and recent empirical evidence confirming these ideas.

Are there limits to motivated reasoning about death? Effects of moral valence on judgments of death and causation in an organ procurement scenario

ABSTRACT. We investigated effects of moral valence (good vs. bad) and consciousness (conscious vs. not conscious) on judgments of death and causation in a hypothetical organ procurement scenario. Participants (n = 434) were randomized to read one of four vignettes in which the moral context was framed as good or bad, and the prospective donor was conscious or unconscious. Participants also evaluated the removal of organs on a scale from morally wrong to morally right. As predicted, experimental manipulation of moral valence influenced judgments of death and causation, whereby participants in the good condition were more likely to judge the donor as dead, and organ removal as not causing death. Furthermore, participants’ judgments of morality significantly predicted judgments of death and causation, in the same direction as the experimental manipulation. Contrary to prediction, moral evaluations influenced judgments of death and causation even when the patient was conscious, and indeed, effects were stronger when the patient was conscious. Assuming the framework of motivated reasoning in which conclusions about death and causation are drawn that validate moral evaluations of organ transplantation, it appears that motivated reasoning gets stronger when facts (a conscious donor) challenge favored conclusions (organ removal did not kill the donor).

Cultural difference in a relationship between cognitive style and weird beliefs

ABSTRACT. Recent studies on superstition, such as beliefs in paranormal phenomena or conspiracy theories, have suggested that analytical and reflective people are less susceptible to such weird beliefs. The majority of participants in these studies were residents of the USA, Canada, and European (i.e., Western) countries, though a recent survey found a different pattern of relationship between cognitive style and such beliefs among Easterners. The present study investigated whether the association of analytic and intuitive cognitive styles with paranormal belief differs between Western and Eastern cultures. Furthermore, this study also tried to extend these findings to beliefs in pseudoscientific, but not paranormal, claims. Participants were presented with a series of paranormal and pseudoscientific statements and asked to indicate their agreement with the statements. Various measures of cognitive abilities and dispositions were also administered. Results showed that weird beliefs could be predicted by analytic cognitive style, particularly for Westerners; for Eastern participants, however, analytic style had a limited impact on beliefs. In addition, intuitive cognitive style was a common predictor of beliefs across the two cultures. The present findings raise an important question about the relationship between cognitive style and the maintenance of weird beliefs.

When maximizing good makes people look bad: Reputational concerns in effective giving

ABSTRACT. People do not usually compare the effectiveness of different charities when deciding which charitable cause to donate to. This raises an intriguing dilemma: do people care about the effectiveness of charities but lack the ability to evaluate which charities are more effective, or is effectiveness simply not an important factor when people decide to give? Allocating money to effective charities is crucial given the huge difference in effectiveness between typical and top charities. However, studies have shown that subjects fail to allocate money to the most cost-effective organizations, despite having clear information about their performance. In the present project we examined whether reputational concerns related to effective giving can explain this finding. In Experiment 1a, we explored people's judgments regarding effective giving and the character evaluation of decision makers. In Experiment 1b, we examined people’s judgments about particular rationales for donating to different charities. In Experiment 2, we studied whether reputational concerns deter people from making more effective decisions by manipulating the observability of the decisions. The present studies have important implications both for their theoretical contribution (what factors lead people to donate?) and their practical implications (how can we nudge people's charitable decisions in the real world?).

A cultural difference on causal reasoning by a causal induction paradigm: Japanese data
SPEAKER: Yoshiko Arai

ABSTRACT. Morris & Peng (1994) reported that Westerners focus on dispositional factors while Easterners focus on situational factors when making causal attributions to explain a social event. Is their finding on cultural differences replicated using the paradigm of causal induction? We first examined whether Easterners focus on situational factors. Japanese participants were asked to make causal judgments about imaginary cases of prisoners’ crimes (murder or not murder) and their motives (situational or dispositional), presented one by one. The contingency between crimes and motives was manipulated. As a result, Japanese participants attributed a greater degree of causal relationship to dispositional factors than to situational factors; they focused not on situational but on dispositional factors. This finding contrasts with those of Morris & Peng (1994). Possible explanations include not only methodological differences but also participants’ nationality and globalization. We plan to examine whether Westerners focus on dispositional factors using the paradigm of causal induction.

Will social problem solving skills affect performance in a consensus game?
SPEAKER: Naohiro Obata

ABSTRACT. The consensus game is a type of communication game whose purpose is to solve problems through group discussion. In this study, we focused on the social problem solving skills of group members and investigated how these skills affected performance in the consensus game. In this experiment, we used the consensus game called “If you have distress in the desert?”, in which the task is to rank 12 items in order of importance for survival. 198 students participated in this experiment. They were divided into groups of 4-6 people, asked first to rank the items alone, and then to rank them through group discussion. They were then asked to complete the Social Problem Solving Inventory–Revised (SPSI-R). Participants were divided into 2 types (positive & negative) by their SPSI-R scores, and groups were divided into 3 types (positive, average, and negative) according to the ratio of member types. In the analysis of game scores, there was no difference among the 3 group types, but in terms of the rate of increase in game score, positive groups were higher than negative groups. These results indicate that a positive attitude toward problem solving had a beneficial influence on performance in a consensus game.

Which is preferred, syllogism or enthymeme? A cross-cultural study.
SPEAKER: Hiroshi Yama

ABSTRACT. In order to test the hypothesis that Easterners have a high-context culture whereas Westerners have a low-context culture, we conducted a cross-cultural study in which participants from Japan, Korea, Taiwan, France, and Great Britain were asked to decide which argument form, syllogism or enthymeme (an argument without the major premise, such as “Socrates is human, therefore Socrates is mortal”), better fit each of six dimensions on a 7-point scale. The six dimensions were persuading, logical, redundant, natural, poetic, and wise. One further independent variable was whether the major premise was known or unknown. Our prediction was that people in a high-context culture would be more likely to accept the enthymeme as persuading, logical, natural, and wise, because the omitted major premise is recoverable from context in a high-context culture. We also predicted that this tendency would be stronger when the major premise was known, because knowledge is embedded in context. However, the results did not support our hypothesis on the measures of persuading, logical, and natural; it was supported only on the measure of wise. Easterners were more inclined to judge the enthymeme as wise when its major premise was known. The cultural difference between Easterners and Westerners lies in the value placed on argument style.

Violating Asimov’s First Law of Robotics: Moral reasoning and human-computer interaction

ABSTRACT. Recently, there has been a debate about how an artificial agent should be programmed to act in situations involving a moral dilemma. For example, in case of brake failure, should an autonomous car be allowed to change direction in order to minimize human deaths? This kind of dilemma has been extensively explored by psychologists employing scenarios that call for a direct or an indirect action. Typically, people adopt a utilitarian criterion in the indirect case (indirectly killing one person to save five people) and a deontic criterion in the direct case (avoiding directly killing one person to save five). In two experiments, this study explored what people expect in these tasks from a humanoid robot, a computerized system, or a human agent. Expectations about human agents’ choices were in line with the previous literature, whereas participants predicted that an artificial agent would adopt a utilitarian criterion in both scenarios. This expectation was modulated by the type of artificial agent, such that the utilitarian criterion was stronger for humanoid robots than for computerized systems. Since users’ expectations are a key element in optimizing human-computer interaction, these results provide clues for the debate about programming the ethical behavior of an autonomous agent.

Reasoning in Cognitive Science: A Bibliometric Analysis

ABSTRACT. A sort of “transition in thought” (Adler, 2008, p. 1), reasoning can be defined as the mental process by which information is used and processed in order to reach novel conclusions (Johnson-Laird, 2008). As a most typical information-processing phenomenon, reasoning has been both a model and a key theme of investigation for the cognitive sciences since their emergence in the 1950s. Given the profoundly interdisciplinary nature of the study of cognition (see Figure 1), the aim of the present project is to assess the interdisciplinary overlap between cognition and reasoning by carrying out a citation analysis of the reasoning-related articles contained in the Web of Science database, thematically categorized on the basis of the National Science Foundation Journal Classification (http://goo.gl/vjNhPF). Preliminary results show that the sciences of reasoning are not only strongly intradisciplinary, but also less cognitive than might be expected, as reasoning research often cites articles published in journals associated with specialties lying outside the cognitive domain. This research, while contributing to a better understanding of the evolution of cognitive research on reasoning, will also pave the way for a larger, full-scale citation analysis of the whole field of cognitive science.

14:00-15:30 Session 6A: Symposium
Location: Macmillan 117
Holding others responsible

ABSTRACT. How do we hold others responsible for the consequences of their actions? Attributing responsibility is a complex process that involves analyzing the causal role that a person's action played in bringing about the outcome, as well as inferring what the action reveals about the person. In this symposium, we will illuminate this process from different perspectives. 

Joseph Halpern, Computer Science (Cornell University), will discuss how normative expectations shape the way in which people hold others responsible. By combining a formal model of causation with a representation of normality, Halpern explains a variety of effects such as how responsibility diffuses between multiple causes, and how more distant causes are often seen as less responsible than more proximate ones. 

Cushman will present a formal model that attempts to capture the notion of “good interventions” that figures prominently in contemporary theories of actual causation, with particular attention to its implications for social cognition.

Liane Young, Psychology (Boston College), will take a closer look at moral evaluations of harmful versus impure agents and actions (e.g., assault versus incest). In particular, Young will propose that harm versus purity judgments are associated with distinct behavioral and neural patterns reflecting differential reliance on mental state attribution as well as situation-based versus person-based reasoning. 

David Pizarro, Psychology (Cornell University), will take a wider perspective and discuss the role that person inferences play for moral judgments. While much of the work in moral psychology has focused on aspects of the action itself, or the consequences that resulted from the persons' action, Pizarro advocates a person-centered approach to moral judgment according to which people are primarily concerned with figuring out who is good and who is bad. 

Tobias Gerstenberg, Cognitive Science (MIT), will present a computational model that shows how responsibility attributions are sensitive to both the causal role that the person's action played, and the extent to which the action changed our expectations about the person's future behavior. In line with Pizarro's approach, the model predicts that whether positive outcomes that resulted from unexpected actions are seen as more or less praiseworthy, depends on what we infer about the person's character from their action. 

14:00-15:30 Session 6B: Symposium
Location: Friedman Auditorium
Quantum Probability

ABSTRACT. Social scientists are well acquainted with the axioms of Boolean logic (which include, for instance, commutativity). But there are other ways to think about events and probabilities. Just as the Euclidean axioms have been relaxed to give insights into other spaces (like the Klein model), can we relax the axioms of probability theory too? The theory of probability was altered with the advent of quantum physics; the axiomatics for this new theory were provided by von Neumann in the 1930s. The essential change in quantum probability is that events are now subspaces of a vector space. This new approach can accommodate several behavioral traits in decision making, such as non-commutativity. The use of quantum mechanics in decision making also invokes, for instance, wave-particle duality (superposition: ambiguity; collapse: a decision is made). This special session will present the latest work in the area of quantum probability as it applies to issues related to decision making.

Andrei Khrennikov: A quantum(-like) model of common knowledge
Acacio de Barros: Probabilities and Contextuality
Arkady Plotnitsky: Why Quantum? What is Quantum?: The Principles of the Quantum and Quantum-like Theories
Sandro Sozzo: Quantum Cognition Beyond Hilbert Space: Fundamentals and Applications
Emmanuel Haven: The quantum-like paradigm and formalizing economic decision making
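The non-commutativity at the heart of this session can be made concrete with a small numerical sketch (illustrative only, not taken from any of the talks). In quantum probability an event is a subspace; answering "yes" to event A and then to event B corresponds to projecting the state onto A's subspace and then onto B's, and the resulting sequence probability generally depends on the order.

```python
import numpy as np

def proj(v):
    """Projector onto the one-dimensional subspace spanned by v."""
    v = v / np.linalg.norm(v)
    return np.outer(v, v)

P_A = proj(np.array([1.0, 0.0]))   # event A: subspace spanned by |0>
P_B = proj(np.array([1.0, 1.0]))   # event B: subspace spanned by (|0>+|1>)/sqrt(2)

psi = np.array([0.6, 0.8])         # a unit-length state vector

# Probability of "yes to A, then yes to B" vs. the reverse order
p_AB = np.linalg.norm(P_B @ P_A @ psi) ** 2
p_BA = np.linalg.norm(P_A @ P_B @ psi) ** 2
print(p_AB, p_BA)                  # the two orders give different probabilities
```

Because P_A and P_B do not commute, the two question orders yield different probabilities (here 0.18 versus 0.49), the kind of order effect that classical Boolean probability cannot produce without extra machinery.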

14:00-15:30 Session 6C: Symposium
Location: Watson CIT 165
Thinking about non-linear systems

ABSTRACT. Many natural systems change in a non-linear fashion, but people are notoriously poor at reasoning about non-linear systems. This symposium explores how cognitive systems represent non-linear environments and potential avenues for improving thinking in such environments.

In “Perception and prediction of non-linear changes” (Zhao), participants observed dots in a tank increasing or decreasing following linear, exponential, or quadratic trends, and made predictions about future changes. Participants were accurate at perceiving increases, but severely under-predicted exponential growth in the future. When dots declined exponentially, people over-predicted the time taken to deplete the tank and the amount that would remain in the future.

“Cascade Neglect in Coupled Systems” (Oppenheimer & Elga) identifies and explores a novel failure of non-linear reasoning. When the likelihood of failure of each node in a system is conditionally dependent upon the failure of other nodes in the system, the system as a whole can shift rapidly from no failures to total failure. Participants do not recognize non-linearity in these sorts of systems, and therefore express preferences that lead to catastrophic failures.

“Small Samples and the Illusion of Linearity in Judgment” (Juslin) explores how working memory constraints underlie errors in non-linear reasoning: people easily learn linear relationships by explicit rule-based processes, but nonlinear functions invoke slowly learned “piecewise linear approximations” or memory for similar exemplars. The results from two experiments, in which people learn from feedback to predict a criterion variable Y from a known cue variable X, are consistent with these predictions, but also yield some unexpected findings.

“Cognitive Reflection and Non-Linear Reasoning Errors” (Thomson & Oppenheimer) explores individual differences in non-linear reasoning, and finds significant relationships between non-linear reasoning errors and several measures of numeracy. Moreover, non-linear reasoning errors are more prevalent among people who show low cognitive reflection (tendency towards System 2 thinking), including on measures unrelated to numeracy. This suggests that non-linear reasoning failures can, to some extent, be overcome through more analytical thinking strategies.

“Pushing the limits of visualization: How to think about complex displays” (Elliott and Rensink) describes how the development of visual displays can improve non-linear reasoning. The authors describe a methodology to explore the perception of Pearson correlation r in various kinds of visual displays; results suggest that this is based on the rapid perception of entropy. They then extend this approach to more complex displays, where more than one data population is simultaneously present.
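The under-prediction of exponential growth described in the first talk is easy to illustrate with a toy calculation (hypothetical numbers, not the study's stimuli): extrapolating the most recent linear increment falls further behind a doubling process at every step.

```python
# Dots double every step: 1, 2, 4, 8, ... (exponential growth)
observed = [1, 2, 4, 8]                           # steps 0..3

# Linear extrapolation: keep adding the last observed increment
last_increment = observed[-1] - observed[-2]      # 8 - 4 = 4
linear_pred = observed[-1] + last_increment * 4   # forecast for step 7

# True exponential continuation for step 7
true_value = observed[-1] * 2 ** 4

print(linear_pred, true_value)   # 24 vs 128: a >5x under-prediction
```

After only four more steps, the linear forecast (24) is already more than five times smaller than the true value (128); the gap itself grows exponentially.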

14:00-15:30 Session 6D: Symposium
Location: Barus and Holley 159
The link between reasoning and mathematics: Evidence from cross-sectional, developmental and brain imaging studies

ABSTRACT. Starting with Piaget, several researchers have proposed that mathematical and reasoning skills are closely related. Thus, it is surprising that to date relatively few studies have investigated this link. In this symposium we present evidence for the close relationship between mathematical and deductive reasoning skills, using evidence from a wide range of research paradigms and various age groups. Talk 1 (Asmuth, Morson & Rips) demonstrates that young children’s representation of the number sequence is more mature than previously thought. Using novel methodologies instead of the traditional number line task, the authors show that five- and six-year-olds prefer linear to logarithmic or exponential presentations of numbers along a “number line”, and these preferences do not depend upon symmetry or aesthetic biases. These findings suggest that a linear representation of numbers develops very early, and young children hold accurate beliefs about the number system. Talk 2 (Attridge & Inglis) explores the evidence for the idea that studying mathematics is important, in part because it develops general thinking skills. One line of evidence shows that post-compulsory mathematics students outperform comparison students on abstract reasoning tasks, and in particular, conditional inferences. Additional longitudinal studies suggested that this difference was developmental, with mathematical study decreasing the endorsement of invalid inferences. Nevertheless, these improvements did not extend to all forms of conditional inference. Talk 3 (Morsanyi & O’Mahony) explores further the link between various forms of deductive inference and math, and the cognitive mechanisms that mediate this link. In a series of studies, the link between reasoning skills and math appeared to be specific to conditional inferences (especially invalid arguments) and transitive inferences. Nevertheless, the underlying cognitive mechanisms appeared to be different.
Transitive inferences were linked to performance on the number line task, spatial thinking style, and non-verbal reasoning ability, whereas conditional inferences were related to simple arithmetic skills and the fluency of semantic memory retrieval. Talk 4 (Prado, Schwartz, Epinat-Duclos, & Leone) investigates the neural mediators of the relationship between math and deductive reasoning skills in children. In line with previous studies, the authors found a behavioral correlation between reasoning performance and math skills. They further found that activity in the intraparietal sulcus and rostrolateral prefrontal cortex during deductive reasoning predicted math skill across participants. This relationship, however, appeared to be modulated by the type of deductive argument processed (linear-ordering versus set-inclusion). The talks will be followed by a discussion (Markovits).

15:30-15:50 Coffee Break
15:50-17:20 Session 7A: Symposium
Location: Macmillan 117
Mental Models and Reasoning
SPEAKER: Ruth Byrne

ABSTRACT. Symposium on ‘Mental Models and Reasoning’ Symposium convener and chair: Ruth Byrne

The symposium will discuss recent empirical and computational tests of the mental model theory of reasoning. The mental model theory provides an account of human reasoning based on the idea that people form iconic mental representations that correspond to possibilities, and that the key cognitive processes in reasoning include a search for counterexamples to putative conclusions. The five talks in this symposium report tests of different aspects of this account. The first talk, to be presented by Sunny Khemlani, outlines mReasoner, a unified computational implementation of the mental model theory that simulates the theory’s key tenets for deductive and probabilistic reasoning. In the second talk, Markus Knauff argues that the mental models people construct are supramodal and correspond to spatial layouts; they are more abstract than pictorial images, and he reports evidence that visual images can impede reasoning. Mental models are iconic but can contain symbols, such as one for negation, and in the third talk, Ruth Byrne reports experiments on counterfactual conditionals that distinguish between mental models and embodied mental representations that do not contain symbols. In the fourth talk, Robert Mackiewicz describes experimental evidence supporting the idea that numerical reasoning depends on kinematic mental simulations. The fifth talk, to be presented by Henry Markovits, examines the cognitive processes that underlie counterexample-based strategies and statistical strategies, to compare mental models and probabilistic models. The symposium provides a snapshot of contemporary empirical and computational research on mental models in reasoning.

Speakers 1. Sangeet Khemlani and Phil Johnson-Laird Naval Research Laboratory, Washington DC, US and New York University, US ‘mReasoner: A unified computational theory of reasoning.’

2. Markus Knauff University of Giessen, Germany ‘Why visual imagery impedes reasoning.’

3. Ruth Byrne and Orlando Espino Trinity College Dublin, University of Dublin, Ireland and University of La Laguna, Tenerife, Spain ‘Counterfactual conditionals and embodied simulations.’

4. Robert Mackiewicz Warsaw School of Social Sciences and Humanities, Poland ‘The role of visual displays in kinematic mental simulations: a case of reasoning from numerical estimations.’

5. Henry Markovits, J. Brisson, and P-L de Chantal University of Montreal at Quebec, Canada ‘Logical reasoning versus information processing in the dual strategy model of reasoning.’

15:50-17:20 Session 7B: Symposium
Location: Friedman Auditorium
Modality throughout cognition

ABSTRACT. Researchers in a number of different fields have independently argued for the importance of providing a role for modality—that is, some way of representing alternative possibilities that could have happened, but actually did not (e.g., Kratzer, 2012; Lewis, 1973; Pearl, 2000). Across these cases, the key insight has been that people’s understanding of the things that occur is shaped in some central way by their understanding of a set of alternative possibilities. Moreover, theoretical work in these fields has emphasized that people do not treat all alternative possibilities equally. Instead, they regard certain possibilities as relevant, while treating others as irrelevant (Portner, 2009; Roese, 1997). Within this research, one consistent theme has been that norms (statistical, moral, conventional, etc.) influence how these alternative possibilities are represented.

This present symposium focuses on new empirical and theoretical approaches to the role of modality throughout human cognition. In particular, it seeks to advance the current discussion on modality in two interrelated ways. The first is by providing a unified picture of how modality functions across many different aspects of cognition, including causal cognition (Icard), natural language semantics (Kratzer), developmental and cross-cultural psychology (Kushnir & Chernyak), intuitions about freedom (Phillips), and explicit judgments of possibility (Shtulman).

The second is by taking a number of different approaches to better characterize the way that modal cognition functions. Icard will present work on a formal model of the impact of statistical and moral norms on causal judgments, which makes use of the idea of sampling from a representation of possibilities. Phillips will present evidence that there are multiple systems that can be recruited for modal cognition and demonstrate that they are differentially impacted by normality. Shtulman presents work that emphasizes how both developmental and individual differences can help us understand the cognition underlying modal representations. Similarly, Kushnir and Chernyak present cross-cultural and developmental research that illustrates which aspects of modal cognition vary and which are constant across development and cultures. And lastly, Kratzer discusses how the emerging work on the psychological representation of modality can interface with work in formal semantics on the linguistic representation of modality. Taken together, these five talks showcase the exciting new developments in the emerging research on modal cognition and its relation to normality.

15:50-17:20 Session 7C: Talks
Location: Watson CIT 165
The Role of Conviction and Narrative in Decision-Making under Deep Uncertainty
SPEAKER: David Tuckett

ABSTRACT. In a dynamically uncertain and complex world, the current framework of judgement and decision-making research is an unrealistic and unsatisfactory basis for understanding how agents behave, particularly when decisions really matter. We propose Conviction Narrative Theory (CNT) as a new framework to understand decisions to act in contexts of deep uncertainty. In CNT, agents are embodied, social and subjectively means-end rational. They are able to act because they draw on cognitive and affective resources to form preferred narratives of the outcomes of their planned actions. We call such narratives, which establish preference and enable action readiness, conviction narratives. They permit human actors: (1) to fit patterns to observations, reducing the prediction problem required to determine options; (2) to form alternative pictures of the future outcomes of plans and their subjective impact; (3) to become attached to a preferred narrative that creates a sufficient feeling of certainty and accuracy to enable and support action. The use of conviction narratives explains how an actor, in objectively uncertain conditions, becomes certain enough to act, despite the possibility of serious loss. We show that CNT integrates research findings over disparate domains including active inference, grounded cognition and simulation, embodiment, and attitude research, as well as research in anthropology and sociology. In CNT, approach and avoidance emotions evoked in narratives interact advantageously with cognition to support action. We report that an algorithmic technique focused on measuring changes in the proportion of approach and avoidance emotion in text documents forecasts changes in economic activity and risk-taking.
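The text-measurement step can be illustrated with a toy dictionary count. The word lists and the scoring function below are hypothetical stand-ins; the abstract does not specify the authors' actual dictionaries or algorithm:

```python
# Hypothetical word lists; the dictionaries actually used by the
# authors are not specified in the abstract.
APPROACH = {"confident", "excited", "gain", "opportunity", "optimism"}
AVOIDANCE = {"anxious", "fear", "loss", "threat", "worry"}

def approach_share(text: str) -> float:
    """Proportion of approach words among all emotion words matched --
    a simple proxy for the shift in emotion that CNT tracks in text."""
    words = [w.strip(".,;:!?") for w in text.lower().split()]
    approach = sum(w in APPROACH for w in words)
    avoidance = sum(w in AVOIDANCE for w in words)
    matched = approach + avoidance
    return approach / matched if matched else 0.5

approach_share("Confident of a gain, despite some fear.")  # -> 2/3
```

Tracking changes in this proportion across successive documents would then serve as the forecasting signal the abstract describes.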

Consistency and credibility in legal reasoning: A Bayesian network approach

ABSTRACT. Consistency and credibility are important for evaluating witness testimony. The story model posits that people construct causal models from evidence but does not explain the role of consistency and credibility in evaluating causal models. Formal approaches use Bayesian networks (BNs) to represent evidence in legal contexts. Recent empirical work suggests people might also reason using qualitative causal networks. We present two studies wherein participants read a realistic trial transcript and judge guilt and witness credibility. Study 1 varied 1) the consistency of key testimony with the victim's testimony, and 2) the relevance of the defendant's prior conviction. Inconsistencies challenged beliefs about victim credibility and guilt, but a prior conviction had no effect on judgments. Study 2 constructed a BN to represent the key issues varied in Study 1. Individual parameter estimates were elicited for the corresponding BN conditional probability table to compute posterior predictions for guilt and credibility. The BN provided a good model for overall judgments of guilt and credibility and for individual participants' ratings, given that the BN only represented key issues relating to the victim's testimony. These results suggest people construct causal models of the evidence but also factor in witness credibility and reliability. The BN approach is a promising direction for future research in legal reasoning.
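As a sketch of the modelling step, a minimal two-node network (guilt influencing testimony consistency) shows how conditional-probability-table entries yield posterior guilt predictions. The CPT values below are illustrative assumptions, not the study's elicited estimates:

```python
# Toy two-node Bayesian network: Guilt -> ConsistentTestimony.
# All CPT values are illustrative assumptions, not elicited estimates.
P_GUILT = 0.5                 # prior P(guilty)
P_CONS_GIVEN_G = 0.9          # P(testimony consistent | guilty)
P_CONS_GIVEN_NG = 0.6         # P(testimony consistent | not guilty)

def p_guilt_given(consistent: bool) -> float:
    """Posterior P(guilty | evidence) computed from the network's CPT."""
    like_g = P_CONS_GIVEN_G if consistent else 1 - P_CONS_GIVEN_G
    like_ng = P_CONS_GIVEN_NG if consistent else 1 - P_CONS_GIVEN_NG
    numerator = P_GUILT * like_g
    return numerator / (numerator + (1 - P_GUILT) * like_ng)

# Inconsistent testimony lowers the posterior, mirroring Study 1:
# p_guilt_given(True) = 0.6, p_guilt_given(False) = 0.2
```

The study's actual network has more nodes (e.g., witness credibility), but the posterior computation generalizes in the same way.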

Understanding and Financial Decision Making

ABSTRACT. I’ll describe two projects that explore how consumers’ subjective and objective understanding affect their financial decisions. The first project explores the relation between sense of understanding and evaluations of risk. Consumers tend to use sense of understanding as a proxy for risk, even when the two are uncorrelated objectively. For instance, perceived investment risk is lower when consumers feel they understand what a company does. This can lead to perverse effects on risk judgments and portfolio construction. The second project explores consumers’ misunderstanding of the important principle of investment diversification. Almost everyone has faulty statistical intuitions about the benefit of diversification, but the errors people make differ depending on financial literacy. These misunderstandings also transfer to decision-making, leading to suboptimal decisions. Taken together, the projects highlight an understudied issue: the importance of understanding, both subjective and objective, to adaptive decision-making in the financial domain.

(Presenter = Phil Fernbach)

Sleeping Beauty goes to the lab: The psychology of self-locating evidence

ABSTRACT. The Sleeping Beauty Problem is a challenging puzzle in probabilistic reasoning, which has attracted enormous attention and continues to produce ongoing debate. The problem goes as follows: Suppose that some researchers are going to put you to sleep. During the two days that your sleep will last, they will briefly wake you up either once or twice, depending on the toss of a fair coin (Heads: once; Tails: twice). After each waking, they will put you back to sleep with a drug that makes you forget that waking. When you are first awakened, to what degree ought you believe that the outcome of the coin toss is Heads? The two candidate answers are 1/2 and 1/3, the proponents of which are known as halfers and thirders. The present study examines for the first time the descriptive adequacy of both the halfers' and the thirders' answers. Our results show that naïve reasoning does not simply fit either answer. In particular, they suggest that any psychologically adequate analysis of the Sleeping Beauty Problem should take into account that the impact of self-locating information on probabilistic reasoning is systematically discounted.
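The protocol can be simulated directly, which makes the two candidate answers concrete: counting per awakening recovers the thirder answer (1/3), while counting per coin toss recovers the halfer answer (1/2). A minimal sketch:

```python
import random

def simulate(trials=100_000, seed=0):
    """Monte Carlo sketch of the Sleeping Beauty protocol:
    Heads -> one awakening, Tails -> two awakenings."""
    rng = random.Random(seed)
    heads_awakenings = tails_awakenings = heads_tosses = 0
    for _ in range(trials):
        if rng.random() < 0.5:   # Heads: woken once
            heads_awakenings += 1
            heads_tosses += 1
        else:                    # Tails: woken twice
            tails_awakenings += 2
    per_awakening = heads_awakenings / (heads_awakenings + tails_awakenings)
    per_toss = heads_tosses / trials
    return per_awakening, per_toss

per_awakening, per_toss = simulate()  # ~1/3 and ~1/2 respectively
```

The disagreement is thus not about the arithmetic but about which reference class (awakenings or tosses) the credence should track.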

Measuring intuition inhibition without math: Developing a verbal test of cognitive reflection ability

ABSTRACT. The Cognitive Reflection Test (CRT) measures intuition inhibition – the cognitive ability to resist compelling but incorrect intuitions – in three simple mathematical problems. It rapidly became popular for its impressive power to predict how well people reason and make decisions. Despite the popularity of the CRT, three issues complicate its interpretation and threaten its continued use. (1) The numerical nature of the CRT confounds reflection ability with mathematical ability. (2) The statistical and psychometric properties of the CRT are imperfect. (3) An increasing proportion of participants are already familiar with the CRT. We have overcome these issues by developing a novel measure of cognitive reflection using verbal problems with low familiarity and with good statistical and psychometric properties: CRT-Verbal. First, we selected suitable items with relatively low familiarity and optimal difficulty as identified in two different populations (Studies 1 and 2) and with high content validity as judged by an expert panel (Study 3). Second, we demonstrated a good internal consistency and a test-retest reliability (Study 4) as well as criterion and construct validity of the test in different populations (Studies 5-7). We discuss the implications of these findings for research in thinking and reasoning, decision-making and moral cognition.

15:50-17:20 Session 7D: Talks
Location: Barus and Holley 159
The Persistence of Anchoring Effects on Valuations
SPEAKER: Sangsuk Yoon

ABSTRACT. The anchoring effect not only influences people's numeric estimates in general knowledge tasks, but also influences people's willingness-to-pay (WTP) for common market goods, even when the anchoring numbers have no bearing on their valuations (Ariely et al., 2003). Despite its robustness in the short term, its long-term influence on preferences is still unclear. In two experimental studies, we investigated whether anchoring effects on valuations persist over time.

In Study 1, we manipulated the time between willingness-to-buy (WTB, anchor) and WTP in the classic anchoring procedure, inserting a one-week gap between the two parts. We found that anchoring effects persisted, regardless of memory of the original anchor. The effect also persisted when the WTP elicitation was performed in the first week and repeated after a week and longer (a follow-up survey after 2-5 months). In Study 2, we tested the results with longer time gaps (4 or 8 weeks) to eliminate a possible selection bias. We replicated the previous findings: participants showed a weaker anchoring effect with longer gaps, but the effect was still significant, regardless of memory of the original anchor. These findings show that the effect of anchors on preferences persists and can have a lasting influence.

‘Unlikely Outcomes’ Might Never Occur, But What About ‘Unlikely (20%) or 20% (Unlikely)’ Outcomes?
SPEAKER: Sarah Jenkins

ABSTRACT. Recently, findings from the 'which outcome' methodology have cast further doubt on the effectiveness of risk communications which use verbal probability expressions (VPEs). When asked to indicate an outcome that is 'unlikely', participants indicate outcomes from the higher end of the distribution, often with a value exceeding the maximum value shown (Teigen, Juanchich & Riege, 2013). This result might be of considerable consequence: if communication recipients expect an 'unlikely' event to never occur, the use of 'unlikely' to indicate a low but possible level of risk could be dangerous. Using a mixed format approach (including both a VPE and a numerical expression) is posited to reduce misinterpretations, but it is not known a) whether this increases understanding over and above a purely numerical format, or b) whether the order of the expressions matters. Using the 'which outcome' methodology, we examined the effect of using verbal, numerical and mixed (verbal-numerical & numerical-verbal) communication formats. We replicated previous findings, with a preference for outcomes at the high end of a distribution (including above-maximum values) present in both the verbal and verbal-numerical formats. Whilst estimates differed between the numerical and mixed formats, mixed formats yielded more accurate estimates, especially the verbal-numerical format. Participants' numeracy did not affect interpretations.

Can ambiguity aversion explain choices in Newcomb’s problem?
SPEAKER: Mary Lawal

ABSTRACT. Recent evidence suggests a link between religious belief and thinking style. In particular, religious belief appears to be associated with more heuristic processing and endorsement of teleological explanations which violate causality. It has been suggested that the choice made in Newcomb’s problem could indicate whether or not a person is inferring reverse causality. Hence, in one experiment we examined whether the choice on Newcomb’s problem can be predicted by paranormal belief, need for cognition, tolerance of uncertainty and ambiguity aversion. There was a fairly even split between participants choosing only the opaque box or both boxes. Paranormal belief was not linked to the choice participants made. However, participants who chose both boxes displayed more ambiguity aversion than those choosing only the opaque box. The other variables failed to predict choices. We consider whether this link to ambiguity aversion may be related to anticipated regret by examining the reasons participants gave for their choice.

Associative Judgment and Vector Space Semantics
SPEAKER: Sudeep Bhatia

ABSTRACT. We study associative processing in high-level judgment using vector space semantic models. We find that semantic relatedness, as quantified by these models, is able to provide a good measure of the associations involved in judgment, and in turn predict responses in a large number of existing and novel judgment tasks. Our results shed light on the representations underlying judgment, and highlight the close relationship between these representations and those at play in language and in the assessment of word meaning. In doing so, they show how one of the best-known and most studied theories in reasoning and judgment research can be formalized in order to make precise quantitative a priori predictions for a large class of natural language judgment problems.
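Semantic relatedness in such models is typically quantified as the cosine between word vectors. A minimal sketch, with hypothetical 3-dimensional vectors standing in for the high-dimensional vectors real models learn from large corpora:

```python
import math

def cosine(u, v):
    """Cosine similarity -- the standard relatedness measure in
    vector space semantic models."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical toy vectors for illustration only.
doctor = [0.8, 0.1, 0.3]
nurse = [0.7, 0.2, 0.4]
banana = [0.1, 0.9, 0.0]
# cosine(doctor, nurse) > cosine(doctor, banana): related words lie
# closer together in the vector space.
```

Scores like these can then serve as predictors of judgment responses in regression models, which is the general strategy the abstract describes.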

A Mixed-Methods Analysis of Bayesian Reasoning: Nested Sets vs Causal Framing

ABSTRACT. The ability to undertake Bayesian inference is rapidly becoming a necessary skill in the modern, data-abundant society. This is particularly true within medicine, where patients and their doctors need to understand their chance of having a given disease, and law, where jurors and lawyers need to understand the value of statistical evidence. However, both professionals and the public make large errors in Bayesian estimations. Two distinct approaches have had success in improving accuracy on Bayesian problems: the 'Nested Sets' and 'Causal' approaches. For the first time, this paper compares and combines these in a 2x2 design. It also employs a large general-population sample, in contrast to previous work, and utilizes a numeracy measure for sub-group analysis. Finally, the study adopts a mixed-methods design, recording solvers' solution processes using a 'write aloud' protocol. A large nested sets framing effect is found, but no causal framing effect is detected, and this finding is replicated within high and low numeracy sub-groups. From the write aloud protocol, qualitative analysis reveals a 5-stage solution process which echoes previous theoretical work in the nested sets literature and is modal amongst successful solvers in all four conditions, regardless of problem framing.
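A standard Bayesian estimation problem of the kind studied here can be sketched as follows, with illustrative screening numbers (not taken from the paper) and the nested-sets (natural frequency) rephrasing in the comments:

```python
def posterior(prior, sensitivity, false_positive_rate):
    """Bayes' rule: P(hypothesis | positive evidence)."""
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Illustrative medical-screening numbers (not from the paper):
# 1% base rate, 80% sensitivity, 9.6% false-positive rate.
p = posterior(0.01, 0.80, 0.096)  # ~0.078
# Nested-sets rephrasing: of 1000 people, 10 have the disease and 8 of
# them test positive; of the 990 without it, ~95 test positive.
# So P(disease | positive) = 8 / (8 + 95), about 0.078.
```

The nested-sets framing makes the two relevant subsets (true and false positives) explicit, which is precisely the manipulation whose effect the study measures.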

17:20-18:30 Session 8: Keynote
Location: Macmillan 117


Constructing Perceptions and Preferences: From Psychophysics to Query Theory
SPEAKER: Elke Weber

ABSTRACT. Experimental psychology—going back to psychophysics in the 19th century—has demonstrated that human perception is constructive, that is, not a one-to-one mapping from objective reality to mental representation.  Behavioral decision research—drawing on cognitive science and more recently neuroscience—has done the same for the construct of preference. Contrary to the assumption of neoclassical economics that uses choice (and its implied preference relation) as a primitive, behavioral decision theory treats preference as an action selection that is constructed, and thus not only subjective but also contextual (Weber, 2004).  Blueprints for this constructive process range from signal detection theory to prospect theory and query theory.

In this talk I will survey the many ways in which perception (e.g., of risk) and preference have been found to be contextual, for example in the sense of providing relative rather than absolute impressions and evaluations. Drawing on examples of my own work and that of others, I show that the coefficient of variation (standard deviation divided by the expected value) of risky choice options (i.e., a relative evaluation of variability, per units of return) is a superior predictor of risk sensitivity in the choices of people and lower animals (Weber, Shafir, and Blais, 2004).  Another case in point is provided by preference reversals in response to normatively irrelevant changes in the choice context, e.g., a change in attribute labeling or in preference elicitation format.  These reversals are predicted by regularities in people’s construction of preference that is sensitive to reference points and the direction of comparisons. These regularities are described at a functional level by prospect theory (Tversky & Kahneman, 1992) and at a psychological process level by query theory (Weber & Johnson, 2006).  Examples include environmental decisions (Hardisty, Johnson, and Weber, 2010) and intertemporal choices that specify either the immediate or the delayed option as the choice default (Weber et al., 2007).
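The coefficient of variation defined above is straightforward to compute. A minimal sketch with hypothetical two-outcome gambles:

```python
def coefficient_of_variation(outcomes, probs):
    """CV = standard deviation / expected value of a risky option."""
    ev = sum(o * p for o, p in zip(outcomes, probs))
    var = sum(p * (o - ev) ** 2 for o, p in zip(outcomes, probs))
    return var ** 0.5 / ev

# Hypothetical gambles with equal standard deviation (50) but different
# expected values: the CV ranks the lower-EV gamble as riskier per
# unit of return, even though their variances are identical.
cv_a = coefficient_of_variation([50, 150], [0.5, 0.5])    # sd 50 / EV 100 = 0.50
cv_b = coefficient_of_variation([150, 250], [0.5, 0.5])   # sd 50 / EV 200 = 0.25
```

This normalization by expected value is what distinguishes the CV from variance as a predictor of risk sensitivity.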

Preference and preference construction are complicated by goal conflict.  Apparent preference reversals are the consequence of differences in goal prominence in different choice contexts (Tversky, Sattath, & Slovic, 1988).  I will close by describing a study that provides causal evidence of this for intertemporal choice (Figner et al., 2010) and will speculate more generally on neural implementation mechanisms of preference construction.



Figner, B., Knoch, D., Johnson, E. J., Krosch, A. R., Lisanby, S. H., Fehr, E., and Weber, E. U. (2010). Lateral prefrontal cortex and self-control in intertemporal choice. Nature Neuroscience, 13, 538-539.

Hardisty, D. H., Johnson, E.J., & Weber, E.U. (2010).  A dirty word or a dirty world? Attribute framing, political affiliation, and query theory. Psychological Science, 21, 86-92.

Tversky, A., & Kahneman, D. (1992). Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty, 5, 297-323.

Tversky, A., Sattath, S., & Slovic, P. (1988). Contingent weighting in judgment and choice. Psychological Review, 95, 371-384.

Weber, E. U. & Johnson, E. J. (2006). Constructing preferences from memory.  In: Lichtenstein, S. & Slovic, P., (Eds.), The Construction of Preference (pp. 397-410).  New York NY: Cambridge University Press. 

Weber, E. U., Johnson, E. J., Milch, K., Chang, H., Brodscholl, J., & Goldstein, D.  (2007). Asymmetric discounting in intertemporal choice: A query theory account. Psychological Science, 18, 516-523.

Weber, E. U. (2004). Perception Matters: Psychophysics for Economists. In J. Carrillo and I. Brocas (Eds.), Psychology and Economics (pp. 165-176). Oxford, UK: Oxford University Press.

Weber, E. U., Shafir, S., & Blais, A.-R. (2004). Predicting risk-sensitivity in humans and lower animals: Risk as variance or coefficient of variation. Psychological Review, 111, 430-445.