ICT 2016: INTERNATIONAL CONFERENCE ON THINKING 2016
PROGRAM FOR FRIDAY, AUGUST 5TH

08:30-10:00 Session 9A: Symposium
Location: Macmillan 117
08:30
Mutual constraints in moral cognition and language
SPEAKER: Laura Niemi

ABSTRACT. In this symposium, we present new research investigating whether and how language and moral cognition constrain each other. First, De Freitas and colleagues ask: What is the relationship between the language people use to describe an event and their moral judgments? They find that people’s moral judgments lead them to reconstrue a causal event (the trolley problem) as either more or less direct and intended, which in turn shapes their verb choices. Direct harm is conveyed with a single causative verb (Adam killed the man), and indirect harm with an intransitive verb in a periphrastic construction (Adam caused the man to die). Relatedly, Niemi and colleagues demonstrate that a classic psycholinguistic test used to track how causation is encoded in verbs (implicit causality) signals people’s moral judgments. In the task, participants chose between a pronoun referring to the sentence subject or object to continue sentences in the form: “Agent verbed Patient because...” for verbs conveying harm and force and a set of filler verbs. Participants who chose the object (i.e., the patient/victim) for harm/force verbs more often were also more likely to (i) explicitly rate patients as having controlled, allowed, and deserved events, and agents as less necessary and sufficient; (ii) endorse moral values previously linked to victim-blaming; and (iii) hold hostile attitudes toward the social category represented by the object. Gantman and Van Bavel demonstrate that consequences of the interdependence between language and moral cognition extend to vision. Participants detected moral words presented at the threshold for conscious awareness with greater frequency than non-moral words — a phenomenon termed the moral pop-out effect. The effect persisted over and above effects of valence, extremity and arousal, and was heightened when moral motives (e.g., justice needs) were active. 
The research described so far suggests that individual words intrinsically encode moral relevance to varying extents. This possibility is investigated by Voiklis, Cusimano, and Malle, who derive a structured lexicon from a broad range of moral theory and folk intuitions which is then used to characterize social-moral regulation in dialogues and various genres of writing. Finally, Boroditsky synthesizes the findings with prior work and discusses their implications for a general understanding of how language and cognition constrain each other. Taken together, the findings demonstrate that scientific investigation into the roots of moral judgment facilitates a more intricate understanding of the interdependencies between language and cognition, and their possible driving mechanisms (e.g., causal representations).

Talks: 1. Kill or die: Moral judgments alter linguistic coding of causality Julian De Freitas (speaker), Peter DeScioli, Jason Nemirow, Maxim Massenkoff, Steven Pinker

2. Moral judgment during minimal language processing Laura Niemi (speaker), Joshua Hartshorne, Tobias Gerstenberg, Liane Young

3. Motivated moral word detection Ana Gantman (speaker), Jay Van Bavel

4. Using moral communication to reveal moral cognition John Voiklis (speaker), Corey Cusimano, Bertram Malle

5. Mutual constraints in moral cognition and language: Synthesis and discussion Lera Boroditsky (speaker)

08:30-10:00 Session 9B: Symposium
Location: Friedman Auditorium
08:30
Meta-Reasoning

ABSTRACT. In this symposium, five speakers will present papers in the emerging field of Meta-Reasoning. Metacognitive processes have been widely investigated in the fields of education and learning, and meta-reasoning is a topic of interest in AI and in philosophy. To date, however, there has been relatively little work that applies the construct of metacognition to the psychology of reasoning. Meta-reasoning deals with the monitoring and control of reasoning processes, e.g., the decision to engage with a problem or not, to terminate the effort at solving, to determine strategy choice, and to estimate the degree of confidence one has in a solution attempt. Ackerman will introduce a framework for studying meta-reasoning that is grounded in the seminal meta-memory work of Nelson and Narens. Mata and Topolinski explore the cognitive basis of two metamemory judgments, the Judgment of Solvability and Post-Decision Confidence, while Eskenazi explores the social and cultural determinants of confidence. Finally, Thompson will discuss how metacognitive processes play a role in terminating and initiating analytic thought processes.

Rakefet Ackerman and Valerie Thompson: “Meta-Reasoning: A Framework for Delving into Reasoning Regulation”. Technion-Israel Institute of Technology, Haifa, Israel; University of Saskatchewan, Saskatoon, Canada

André Mata: “Intuitive vs Rational Confidence”. ISPA University Institute, Lisbon, Portugal

Sascha Topolinski: “Intuitions of Solvability and Their Biases”. University of Cologne, Cologne, Germany

Terry Eskenazi: “Metacognition Under Spell: Social Influence and Decision Confidence”. Ecole Normale Supérieure, Paris, France

Valerie Thompson: “Metacognition and the Monitoring and Control of Reasoning Outcomes”. University of Saskatchewan, Saskatoon, Canada

08:30-10:00 Session 9C: Talks
Location: Watson CIT 165
08:30
Protocol Analysis Reveals Promiscuous Reasoning in Dyadic Coordination Games

ABSTRACT. Strategic interactions have been investigated by many, but how people reason in coordination problems is still not fully understood. In our study we use a research method that is novel in this field, protocol analysis, to better understand people’s reasoning in coordination games. We used concurrent verbal protocols: after a training phase, 16 participants thought aloud as they engaged in a series of nine 2 × 2, 3 × 3, and 4 × 4 common interest games with a co-player. Participants were tested separately, with no communication or feedback, and were recorded with a digital voice recorder. The recordings were transcribed and divided into meaningful segments representing distinct reasons for choosing strategies. Participants managed to coordinate on the payoff-dominant outcome in 84% of the games. Frequently used reasons for picking a strategy were Maximax and Vicarious Maximax (often together), followed by Team Reasoning and Avoid-the-Worst. To a much lesser extent there were instances of Level 1 and Level 2 reasoning, Equality-seeking, Vicarious Avoid-the-Worst and Relative payoff maximization. Surprisingly, most players, though not all, used multiple reasoning processes, not only between games but within the same game, before settling on a strategy choice.

08:48
Thinking About Games and Questionnaires
SPEAKER: Mark Keane

ABSTRACT. The market research industry is currently in crisis as the conduct of questionnaires and consumer surveys has become harder, more expensive and less accurate. One proposal made to save this industry is the gamification of market research: the translation of conventional questionnaires into, for example, word-selection games played against people or computers (e.g., see games developed by Upfront Analytics). Cognitively, these gamified surveys can be cast as concept-elicitation tasks in which people are making diagnosticity decisions around highly context-sensitive conceptual representations. Adopting this theoretical perspective, we report several experiments to determine whether gamification delivers equivalent or “better” results than conventional questionnaires. In general, we find that, cognitively, gamified surveys are at least as good as conventional questionnaires in terms of information gain, though there is some evidence that games are more stable than traditional surveys. Apart from these novel findings, this study also introduces the notion of “ground truth regularities” and uses information-theoretic measures from text analytics.

09:06
Coordinate to cooperate or compete: Abstract goals and joint intentions in social interaction

ABSTRACT. In real life, cooperation and competition are abstract goals with respect to a given situation. One cannot directly choose to "cooperate" or "compete." Instead, agents figure out how to realize social goals through joint intentions, a hallmark of human social intelligence (Tomasello 2014). We investigate human behavior in two-player stochastic games, naturalistic spatial environments that require high-level reasoning about social goals and low-level coordination of actions to realize those goals (http://bit.ly/22S4rQv).

To model participant behavior we develop a novel computational model of agency that grounds cooperation and competition in different modes of planning. In the cooperative mode, as in team reasoning, a hypothetical group-player jointly plans to maximize group utility. In the competitive mode, players maximize their individual utility by performing level-k best response over full intentional plans. These two modes of planning are unified through a strategic reasoner which uses Bayesian theory-of-mind to infer the intentions of the other player and acts reciprocally. Learning occurs across the planning hierarchy, and social norms emerge that coordinate cooperation over time and generalize strategies across environments. The model qualitatively and quantitatively explains the dynamics of human competitive and cooperative behavior.
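
As a concrete illustration of the level-k component described in this abstract, here is a minimal Python sketch of level-k best response in a one-shot 2 × 2 game. The payoff matrix and the level-0 uniform-random assumption are illustrative choices, not the authors' actual stochastic environments or model.

```python
import numpy as np

# Toy 2x2 payoff matrices (row player, column player); values are assumptions.
row_payoff = np.array([[3.0, 0.0],
                       [5.0, 1.0]])
col_payoff = row_payoff.T  # symmetric game, for simplicity

def opponent_dist(k, opp_payoff, my_payoff):
    """Distribution over the opponent's actions, assuming they reason at level k."""
    if k == 0:
        return np.array([0.5, 0.5])  # level-0: uniform over both actions
    a = level_k_action(k, opp_payoff, my_payoff)
    dist = np.zeros(2)
    dist[a] = 1.0
    return dist

def level_k_action(k, my_payoff, opp_payoff):
    """Action chosen by a level-k player: best response to a level-(k-1) opponent."""
    if k == 0:
        return 0  # arbitrary fixed action standing in for "random"
    dist = opponent_dist(k - 1, opp_payoff, my_payoff)
    expected = my_payoff @ dist  # expected payoff of each of my actions
    return int(np.argmax(expected))
```

With these toy payoffs, action 1 dominates, so both a level-1 and a level-2 row player pick it; the point of the sketch is only the recursive best-response structure.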

09:24
Automatic generation of verb meanings: A big data approach
SPEAKER: Phillip Wolff

ABSTRACT. In this research we use a deep learning neural network to derive the meanings of 95% of the most frequently used verbs in English. A key hypothesis in modern semantics is that the meaning of a word can be specified in terms of semantic components. We demonstrate how these components can be determined objectively using statistical methods. Two large corpora (Wikipedia and the NYT) were used to train the learning algorithm Word2vec (Mikolov et al., 2013). The results accorded with people’s judgments (N = 270 MTurkers) on a semantic similarity task of 1,200 verbs 74% of the time. In a second study, combining vectors associated with different verbs gave rise to vectors of highly similar verbs, e.g., ‘cause’ + ‘ice’ = ‘freeze’. An optimization algorithm led to the identification of such semantic components as CAUSE, CHANGE, CONTACT, and MOTION. The semantic components identified agreed highly with the judgments of human raters. In a third study, the semantic components were used to automatically generate verb senses, whose number correlated with the number of senses contained in several standard dictionaries. The results offer an initial glimpse into how symbolic representations might be derived from the statistical properties of text.
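
The verb-composition result (‘cause’ + ‘ice’ = ‘freeze’) amounts to adding word vectors and finding the nearest neighbor by cosine similarity. The toy sketch below shows only the mechanics, with made-up three-dimensional vectors; real Word2vec embeddings are learned from corpora and have hundreds of dimensions.

```python
import numpy as np

# Toy embeddings standing in for Word2vec output; the numbers are invented
# for illustration, not learned from any corpus.
vecs = {
    "cause":  np.array([1.0, 0.0, 0.0]),
    "ice":    np.array([0.0, 1.0, 0.0]),
    "freeze": np.array([0.9, 0.9, 0.1]),
    "melt":   np.array([0.1, -0.8, 0.5]),
    "run":    np.array([-0.5, 0.0, 1.0]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def compose(word_a, word_b, vocab):
    """Add two word vectors; return the nearest remaining word by cosine."""
    target = vocab[word_a] + vocab[word_b]
    candidates = {w: v for w, v in vocab.items() if w not in (word_a, word_b)}
    return max(candidates, key=lambda w: cosine(target, candidates[w]))
```

Here `compose("cause", "ice", vecs)` lands on `"freeze"` because its toy vector was placed near the sum of the operands.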

09:42
Social media and people’s representations of the future: A big data approach
SPEAKER: Phillip Wolff

ABSTRACT. The way people represent the future may provide a window into factors that affect their willingness to take risks and their subjective well-being. In four experiments, we use big data techniques to examine how such tendencies might be driven by people’s temporal horizons and mental models of the future. In Experiment 1 we used a database of 8 million tweets to calculate the average distance into the future people talk about in their tweets, in order to determine the temporal horizon of each state in the US. States with further future horizons had lower rates of risk-taking behavior (e.g., smoking, binge drinking) and higher rates of investment (e.g., education, infrastructure). In Experiments 2 and 3, we used an individual’s tweets to establish their temporal horizon and found that those with longer temporal horizons were more willing to wait for larger rewards and less likely to take risks. In Experiment 4, we discovered through an automated analysis of the NYT five major categories of future-oriented thought and found that these categories predicted mental and economic well-being at both the state and individual level. The findings help establish powerful relationships between people’s thoughts about the future, their behaviors, and well-being.

08:30-10:00 Session 9D: Talks
Location: Barus and Holley 159
08:30
Ignoring irrelevant information: physical stereotypes vs. social stereotypes
SPEAKER: Jack Cao

ABSTRACT. Quality decision-making requires the ability to ignore irrelevant information. Using a Bayesian network of the form A → B → C, we tested the extent to which decisions about the likelihood of C successfully ignored information about A given information about B. We varied the semantic content of A → B to contain either a physical stereotype (e.g., sunny/cloudy → hot/cold) or a social stereotype (e.g., male/female → doctor/nurse). Participants (N = 1670) read a scenario laden with either a physical or social stereotype and then answered two questions about the likelihood of C given information about B and A. Both questions were identical except that each was conditional on a different state of A. Thus, the answer to both questions should have been the same.

Across all scenarios, answers to both questions differed, suggesting that physical and social stereotypes were difficult to ignore. However, social stereotypes proved even more difficult to ignore, leading to social decisions that were systematically biased in the direction of stereotypes compared to decisions in the physical domain. This result is robust to variations in the wording of the scenarios and suggests that social stereotypes are especially capable of unduly influencing conscious and effortful decision-making.
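
The normative benchmark in this design, that C should be independent of A once B is known, follows from the Markov property of the chain A → B → C. A minimal sketch with made-up probability tables (not the study's materials) checks it directly:

```python
# Toy chain A -> B -> C with binary variables; all numbers are assumptions.
p_a = {0: 0.5, 1: 0.5}                       # P(A)
p_b_given_a = {0: {0: 0.8, 1: 0.2},           # P(B | A)
               1: {0: 0.3, 1: 0.7}}
p_c_given_b = {0: {0: 0.9, 1: 0.1},           # P(C | B): C depends only on B
               1: {0: 0.2, 1: 0.8}}

def joint(a, b, c):
    """Full joint probability, factorized along the chain."""
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

def p_c_given_ab(c, a, b):
    """P(C=c | A=a, B=b), computed from the full joint distribution."""
    num = joint(a, b, c)
    den = sum(joint(a, b, cc) for cc in (0, 1))
    return num / den
```

Whatever value A takes, `p_c_given_ab` reduces to `p_c_given_b`, which is why answers to the two questions in the study should have coincided.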

08:48
Asking and evaluating natural language questions
SPEAKER: Anselm Rothe

ABSTRACT. The ability to ask questions during learning is a key aspect of human cognition. In this project we explore how people generate and evaluate free-form questions in the context of a simple reasoning task. In a first experiment we gave people the ability to ask any question they like in natural language and analyzed the content and context-sensitivity of these questions. In a second experiment we had people evaluate natural language questions, which they did not themselves have to generate. Our results show that people can invent natural language questions that have high information value according to common mathematical measures of information quality. Across the two experiments, we show that people are far better at evaluating the quality of questions than they are at generating the questions in the first place. This suggests one major bottleneck on the effectiveness of information gathering in humans.

09:06
How confidence shapes learning and choice

ABSTRACT. Contemporary theoretical accounts of decision-making view confidence as a readout of information in the decision process (specifically, a subjective judgment about the probability that a choice is correct). In two experiments, we show that confidence judgments can themselves shape the decision process. The simple act of making post-choice confidence judgments simultaneously increases accuracy and response time. By contrast, making a post-choice outcome judgment (predicting how much reward the choice will result in) decreases accuracy while still increasing response time. Computational analysis with reinforcement learning and drift-diffusion models indicated that the differences between post-choice judgment conditions can be accounted for by parametric differences in learning rate, inverse temperature, decision threshold and the rate of evidence accumulation. Taken together, these measures suggest that rating their confidence made learners more sensitive to variations in reward probabilities. These findings suggest that explicitly considering decision confidence may induce a global shift in decision-making strategy not anticipated by any current theory.

09:24
Interactions between cognitive strategy and task structure in relational learning

ABSTRACT. Expert thinking is marked by the ability to recognize commonalities in the relations among objects across superficially disparate situations (Chi, Feltovich, & Glaser, 1981), for example when the Swiss inventor George de Mestral developed the idea for Velcro based on how burrs got stuck in his dog’s fur. On the flip side, helping novice students acquire the ability to look past the superficial to notice these deeper connections is one of the biggest challenges in formal education, and has long been a focus of education research (Whitehead, 1929). One line of research has examined which individual differences in cognitive strategy (i.e., how learners conceive of and approach the learning task) lead to successful relational learning (Pintrich, 1991), while a somewhat autonomous line of research has examined what task structures best foster successful learning (Gick & Holyoak, 1983). While there is a widespread intuition that there should be interactions between these individual differences and task structures, evidence for such interactions is minimal (Pashler, McDaniel, Rohrer, & Bjork, 2008). Here, we present a series of experiments using a novel relational learning paradigm and established measures of cognitive strategy demonstrating exactly these sorts of interactions. We discuss implications for cognitive theories and educational practice.

09:42
Stochastic Hypothesis Generation

ABSTRACT. How do humans approximate Bayesian inference when the task requires them to generate hypotheses? Previous models (e.g., Thomas et al., 2008) crucially depend on cued recall to generate hypotheses. However, this strategy is impractical in combinatorially complex hypothesis spaces, where relevant hypotheses may have to be constructed de novo rather than retrieved from memory. We present a novel algorithmic model of hypothesis generation based on Markov chain Monte Carlo sampling. The Markov chain generates hypotheses using local proposals and accepts them based on their probability. The accepted hypotheses are then used to construct a sample-based approximation of the posterior. As the number of generated hypotheses increases, the approximation converges to the true posterior. However, following Lieder et al. (2013), we assume that humans run the chain for a finite length, producing several well-known probability biases such as subadditivity, superadditivity, variance in responses, and anchoring. Additionally, our model makes a new prediction: the context-dependent inversion of subadditivity and superadditivity. These simulations suggest that resource-bounded sampling provides a plausible account of human hypothesis generation.
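
A minimal sketch of this kind of sampler, Metropolis-Hastings with local proposals over a small discrete hypothesis space, is below; the space and the weights are illustrative assumptions, not the paper's model.

```python
import random

random.seed(0)

# Toy discrete hypothesis space 0..9 with an unnormalized "posterior";
# the weights are arbitrary assumptions for illustration.
weights = [1, 2, 5, 9, 5, 2, 1, 1, 1, 1]

def propose(h):
    """Local proposal: move to a neighboring hypothesis (with wraparound)."""
    return (h + random.choice([-1, 1])) % len(weights)

def mcmc_hypotheses(n_steps, start=0):
    """Run a finite Metropolis chain; return empirical state frequencies."""
    h = start
    counts = [0] * len(weights)
    for _ in range(n_steps):
        h_new = propose(h)
        accept_prob = min(1.0, weights[h_new] / weights[h])
        if random.random() < accept_prob:
            h = h_new
        counts[h] += 1
    total = sum(counts)
    return [c / total for c in counts]

# Sample-based approximation of the posterior from a finite-length chain.
posterior_approx = mcmc_hypotheses(50000)
```

With a long chain the frequencies converge toward the normalized weights; truncating the chain early is what, on the paper's account, produces biases such as subadditivity and anchoring.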

10:00-10:20 Coffee Break
10:20-11:50 Session 10A: Symposium
Location: Macmillan 117
10:20
Dynamic inference and belief revision
SPEAKER: Mike Oaksford

ABSTRACT. We must frequently revise our beliefs in the light of new information. How we should do this against the backdrop of a large network of beliefs can be problematic. For example, in the philosophy of science it is known as the Quine-Duhem problem: any prediction relies on many auxiliary hypotheses that may protect a core theory or common-sense rule from predictive failure. Logically, understanding how our beliefs change requires an account of non-monotonic inference. Probabilistically, Bayesian conditionalization relies on new information not altering conditional probabilities by too much. However, even very simple examples suggest that new information can significantly alter the probability distributions over the knowledge we take to be relevant to an inference. In particular, this information may be specifically about a change in a conditional probability. Such non-monotonic changes in probability distributions have been addressed theoretically in the framework of minimising mutual information between old and new distributions. However, within the psychology of reasoning there has been very little empirical research in this area. This symposium brings together philosophers, experimental psychologists, and modellers to present the current state of the field and to look at recent experimental work and theoretical developments.

Henrik Singmann, K. Christof Klauer, & Sieghard Beller

Beyond updating: Disentangling form and content with the dual-source model of probabilistic conditional reasoning.

Ulrike Hahn & Peter J. Collins

How the world changes when we learn that "if ..., then..."

Igor Douven

Inference to the Best Explanation and the relevance of the closest competitor.

Karolina Krzyzanowska

Persuading with conditionals.

Mike Oaksford and Simon Hall

Learning in dynamic conditional inference.

Tania Lombrozo

Explaining for the best inference

10:20-11:50 Session 10B: Talks
Location: Friedman Auditorium
10:20
Causation in moral judgment: Both unique and overlapping neural representations
SPEAKER: Justin Martin

ABSTRACT. For moral judgment, causation is crucial: It means the difference between life in jail for murder, or 10 years for attempted murder. Certainly, a person’s intentions also matter, and past research demonstrates the centrality of both factors. Yet, while the neural bases of theory of mind (TOM) are well known, how we represent a person’s causal role during moral judgment is less understood. Here, we probe this process using a predictive coding framework. While undergoing fMRI, participants (N=20) read vignettes about two agents harming a victim. We contrast cases in which causal information required either minimal processing (e.g. the agents caused harm in an expected way) or extra processing (e.g. harm was caused in an unexpected way). Unexpected causation led to activity in dorsolateral prefrontal cortex, and this region was insensitive to other (non-causal) manipulations of expectation. Intriguingly, causal processing also led to activity in the TOM network. Potentially, this activity reflects a processing similarity between assigning causation and intentions to agents: Both involve attributing unobservable states. In total, we identify regions sensitive to an agent’s role in causing harm that are both selective (dlPFC) and overlapping (TOM network), elucidating critical nodes in the network of regions subserving moral judgment.

10:38
Inference of Intention and Permissibility in Moral Judgment

ABSTRACT. One puzzle of moral cognition is that while our moral theories are often described in terms of absolute rules (e.g., the greatest amount of good for the greatest number, or the doctrine of double effect), our moral judgments are graded. We hypothesize that since moral judgments are particularly sensitive to the agent's mental states, uncertainty in these inferred mental states might partially underlie these graded responses. While previous computational models have shown that mental states such as beliefs and desires can be inferred from behavior, they have critically lacked a third mental state, intentions, which play a central role in moral judgment. In this work, we develop a novel computational representation for reasoning about other people's intentions based on counterfactual contrasts over influence diagrams. This model captures the future-oriented aspect of intentional plans and distinguishes between intended outcomes and unintended side effects. Finally, we give a probabilistic account of moral permissibility which produces graded judgments by integrating uncertainty about inferred intentions with utilitarian maximization. By grounding moral permissibility in an intuitive theory of planning, we quantitatively predict the fine-grained structure of both intention and moral permissibility judgments in classic and novel moral dilemmas.

10:56
‘Technology-driven’ moral training: Moral reasoning in the emergency services

ABSTRACT. Existing dual-process theories of moral judgment have interpreted distinct responses to text-based moral dilemmas through cognitive versus emotional perspectives. These have received minimal attention in technological contexts that aim to study simulated moral actions. Given that advanced Virtual Reality (VR) simulations are now utilised in professional training practices that pivot on high-conflict moral decision-making, this has never been more critical. We introduce emotionally arousing moral scenarios that require action. Specifically, in a virtual version of the Footbridge dilemma, we found a greater endorsement of utilitarian responses (killing one to save many others) compared to text-based counterparts in normal populations and in healthcare professionals. Importantly, while normal populations reported feelings of regret after performing virtual utilitarian actions, paramedic practitioners demonstrated no remorse for virtual moral actions. In a follow-on investigation, moral decision-making was assessed in Fire and Rescue Service Command Officers, who currently incorporate VR training within their practices. These findings are discussed in light of existing dual-process models of moral judgment. The diverging responses found here between text-based and technological contexts suggest that moral judgment and simulated moral action may be driven by partially distinct mechanisms, with significant implications for technology-driven moral assessment tools.

11:14
Cognitive fatigue and moral reasoning
SPEAKER: Shane Timmons

ABSTRACT. We report the results of two experiments showing that people make different moral judgments when their cognitive resources have been depleted. The first experiment shows that people judge it morally permissible to decide not to harm a person in order to save others more often when they have completed tasks designed to exhaust cognitive resources than when they have not. The second experiment shows that people feel worse about their moral decisions when they have completed an everyday activity that exhausts cognitive resources, i.e. when they have attended an evening lecture on statistics. The results suggest that even emotionally charged moral decisions require cognitive resources. We consider the implications of the results for alternative theories of moral reasoning.

11:32
Inductive Ethics: A Bottom-Up Taxonomy of the Moral Domain
SPEAKER: Justin Landy

ABSTRACT. Moral Foundations Theory (MFT) is a widely cited theory of descriptive ethics that posits that people moralize at least six distinct kinds of virtues. These virtues are divided into “individualizing” virtues (care, fairness, and liberty) and “binding” virtues (authority, loyalty, and purity). Despite widespread enthusiasm for MFT, it is unknown how well it models people’s conceptualizations of the structure of the moral domain. In this research, we take a bottom-up approach to this question, using methods from the study of inductive reasoning. In Study 1, participants rated the likelihood that a person would commit various moral violations, given information about prior violations. Likelihood judgments were used to derive an atheoretical taxonomy of the conceptual relatedness of different moral virtues, using hierarchical cluster analysis and multidimensional scaling. This taxonomy does not resemble MFT, and, in particular, we found no evidence for the individualizing-binding distinction. In Studies 2 and 3, we show that likelihood judgments of future behaviors in new samples more closely conform to the predictions of our derived taxonomy than of MFT. By providing a more accurate picture of how people parse their moral worlds, this research helps to clarify a fundamental question in the cognitive science of morality.

10:20-11:50 Session 10C: Symposium
Location: Watson CIT 165
10:20
The ontogeny and phylogeny of relational reasoning
SPEAKER: Caren Walker

ABSTRACT. The ability to represent relations is one of the cornerstones of abstract thought. Some have proposed that it may be the key to the cognitive differences between humans and other animals. This symposium takes a multidisciplinary, multi-method approach, combining work in cognitive development, comparative psychology, and neuroscience to better understand the ontogeny and phylogeny of relational reasoning.

Caren Walker (UCSD) and Alison Gopnik (Berkeley) – More than meets the eye: Early relational reasoning cannot be reduced to perceptual heuristics - Describes a surprising developmental pattern: Younger learners are better than older ones at inferring same-different relations. One possible explanation is that younger children rely upon perceptual heuristics, as has been suggested for nonhuman animals. The authors demonstrate that children fail to make this inference in a task that is matched for perceptual entropy, suggesting that early competence is due to genuine conceptual understanding.

Christian Hoyos, Ruxue Shao, and Dedre Gentner (Northwestern) - Language is a double-edged sword for relational reasoning - Proposes that older children’s failure may result from an object focus that is induced by noun-learning. The authors present research showing that 4-year-olds (who normally pass RMTS tasks) fail after engaging in an object-labeling pre-task, but not after an action-labeling pre-task. They argue that nominal language may have a temporary detrimental effect on relational reasoning.

Nina Simms, Rebecca Frausel, and Lindsey Richland (University of Chicago) - Social supports for relational orientation - Provides evidence that social interactions cultivate a relational mindset. Several studies demonstrate that children who contribute more descriptions of relational similarity to discussions of analogies with their parents are better at independently solving subsequent analogies.

Mike Vendetti (Oracle) and Silvia Bunge (Berkeley) - Evolutionary and developmental changes in lateral frontoparietal cortex - Reviews evidence that small neuroanatomical changes may have led to major changes in relational reasoning. The authors propose that versatility in reasoning can be traced back to developmental and evolutionary changes in the lateral frontoparietal network (LFPN). In particular, stronger communication between regions of the LFPN may support improvements in relational reasoning.

Roger Thompson (Franklin & Marshall College) - Analogical reasoning by (other) animals: Fact or fiction? - Studies reveal that some species of old- and new-world monkeys can engage in relational reasoning. These results contrast with claims pointing to a ‘profound disparity’ between humans and chimpanzees on the one hand, and monkeys on the other. The author considers experiential, behavioral, and cognitive factors underlying similarities and differences in relational reasoning both within- and between-species.

10:20-11:50 Session 10D: Talks
Location: Barus and Holley 159
10:20
The Compositional Nature of Intuitive Functions
SPEAKER: Eric Schulz

ABSTRACT. How do we learn about functional structure? We propose compositionality, the ability to perform computations over mental building blocks, as an explanation, and operationalize this intuition as a structural inductive bias induced by a grammar over Gaussian Process base kernels. It is assumed that these kernels can be combined by simple arithmetic operations and that the resulting process allows us to reason about more complex functions as composed of simpler structural primitives. Across a series of experiments, we show that participants prefer function extrapolations that are generated by structured kernels over unstructured ones, manually extrapolate functions in line with structured predictions, converge to a posteriori likely compositional predictions within a Markov chain Monte Carlo with people approach, and perceive compositional functions as more predictable than similar but non-compositional counterparts. We argue that people's intuitive functions are compositional by nature and that compositionality is a sensible way to simplify structural inference.
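
The kernel-composition idea can be sketched in a few lines: base kernels are combined with ordinary arithmetic, and the results remain valid covariance functions (positive semi-definite). The kernel forms and hyperparameter values below are standard textbook choices, not the authors' actual grammar.

```python
import numpy as np

# Base Gaussian Process kernels; hyperparameter values are assumptions.
def rbf(x1, x2, lengthscale=1.0):
    """Squared-exponential (RBF) kernel: smooth local similarity."""
    return np.exp(-0.5 * (x1 - x2) ** 2 / lengthscale ** 2)

def periodic(x1, x2, period=2.0, lengthscale=1.0):
    """Periodic (exp-sine-squared) kernel: repeating structure."""
    return np.exp(-2.0 * np.sin(np.pi * abs(x1 - x2) / period) ** 2
                  / lengthscale ** 2)

def gram(kernel, xs):
    """Gram (covariance) matrix of a kernel over input locations xs."""
    return np.array([[kernel(a, b) for b in xs] for a in xs])

xs = np.linspace(0.0, 5.0, 20)

# Composition by simple arithmetic: sums and (elementwise) products of
# valid kernels are themselves valid kernels.
K_sum = gram(rbf, xs) + gram(periodic, xs)
K_prod = gram(rbf, xs) * gram(periodic, xs)
```

A composed kernel like `K_sum` encodes "smooth trend plus periodic pattern", which is the sense in which complex functions are reasoned about as combinations of structural primitives.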

10:38
The benefits of imperfect memory for hedging against incorrect beliefs: a rational account

ABSTRACT. Memory representations appear to deteriorate with time, whether due to explicit decay or to interference. We show that such deterioration is in fact beneficial for agents making multi-attribute decisions if the agents have incorrect beliefs about contingencies in the world. Using a novel Bayesian, sequential sampling theory of decision making under dynamic context, we provide a unifying model of classic experimental paradigms that require participants to combine information from perception and memory to make decisions, potentially in the face of response bias (e.g. Flanker task, AX-CPT, Posner cueing, repetition priming). Our model is able to simulate performance across such tasks, and is also able to identify contexts in which memory deterioration is paradoxically beneficial for performance: (1) when initial encoding is typically very weak, and (2) when agents have incorrect estimates of the conditional dependencies between stimuli. This suggests that agents who have potentially inaccurate beliefs about the world should let their memories deteriorate, providing a new computationally rational theory of short-term memory in the service of decision-making. More generally, this work provides an intriguing connection between work on short-timescale rational inference and work on apparent heuristics and biases in human behavior.

10:56
A Cognitively Realistic Model of Decision Making in Ocean Ecology

ABSTRACT. There is strong evidence that a key determinant of the ecological state of the world’s oceans is the decision-making of fishers and policy-makers. There is a large empirical literature on the complex reality of human decision-making, but a comparative lack of work bringing detailed cognitive facts to models of aggregate behavior of this kind. We show how a psychologically realistic description of decision-makers can be integrated into a large-scale ocean systems model, going beyond agent-based models of simple profit maximization. In particular, we seek to model the questions that frame the decisions of different types of fishers and the motivating reasons on the basis of which those questions are resolved. The broader purpose of the model is to see how those patterns could inform regulatory policy. We present the basic architecture of the model and the data gathering tools that will allow us to adapt the model to fishers in different environments.

11:14
How is argument evaluation unbiased?

ABSTRACT. According to the argumentative theory of reasoning (Mercier & Sperber 2011), humans should have the ability to evaluate others’ arguments in an unbiased way. Experimental results suggest that, on the contrary, individuals are especially critical of arguments whose conclusion they disagree with or whose source they do not trust. We suggest that these results reflect not only the operation of argument evaluation, but also that of other mechanisms, including the production of counter-arguments that follows the evaluation of non-demonstrative arguments. To circumvent this source of bias, we rely on problems for which demonstrative arguments can be offered, so that the production of counter-arguments has little or no effect. We found that once this confound is removed, argument evaluation is essentially unbiased:

- Participants’ likelihood of accepting an argument was not affected by how wrong they believed the argument’s conclusion to be.
- Participants’ likelihood of accepting an argument was either not affected by how much they trusted the source, or the effect of source trustworthiness was likely not to stem from biased argument evaluation.

These results suggest that individuals have the ability to evaluate arguments without being biased by their prior beliefs about the argument’s conclusion or its source.

11:32
Can a Bayes’ Net approach capture intuitive use of sequential testimonies in a legal reasoning paradigm?

ABSTRACT. Legal reasoning research shows that evaluating witness credibility (e.g., truthfulness, accuracy) is important for determining the inferential value of testimony. We present two studies that apply a Bayesian source credibility model to a legal setting to test the epistemic influence of witness testimonies. The model combines perceived witness trustworthiness and access to accurate information as independent elements, describing and predicting the impact of the testimony of that particular witness. Participants read sequential statements from witnesses and judged guilt after each statement. In Study 1, witness trustworthiness, report and witness order were varied. Study 2 also varied witnesses’ access to accurate information. The source credibility model fits the population-level observations well in both studies, and individual predictions were reasonably accurate given the simplicity of the model. Comparison of the sequential model to a simpler model showed that the sequential model provided a better fit for the data. In sum, the studies suggest the applicability of a Bayesian source credibility model in a legal setting to account for the impact of different witness types. We show that participants are sensitive to the type of witness and that different witnesses have a predictable impact on the perception of the testimony.
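One way a sequential source-credibility update of this kind might be sketched, with trustworthiness and access entering as independent components. The functional form, function names, and parameter values below are illustrative assumptions, not the authors' exact model:

```python
def p_report_guilty(guilty: bool, trust: float, access: float) -> float:
    """Probability a witness says 'guilty' given the actual state.

    Illustrative assumption: the witness believes the truth with
    probability `access` (else guesses 50/50), and reports their
    belief with probability `trust` (else reports the opposite).
    """
    p_believe_guilty = access * (1.0 if guilty else 0.0) + (1 - access) * 0.5
    return trust * p_believe_guilty + (1 - trust) * (1 - p_believe_guilty)

def update_guilt(prior: float, trust: float, access: float) -> float:
    """Posterior P(guilty) after hearing one 'guilty' statement (Bayes' rule)."""
    like_g = p_report_guilty(True, trust, access)
    like_i = p_report_guilty(False, trust, access)
    return prior * like_g / (prior * like_g + (1 - prior) * like_i)

# Sequential testimony: each witness statement updates the prior posterior.
belief = 0.5  # initial guilt judgment
for trust, access in [(0.9, 0.8), (0.6, 0.9)]:
    belief = update_guilt(belief, trust, access)

# Credible 'guilty' testimony should raise the guilt judgment.
assert 0.5 < belief < 1.0
```

In a model of this shape, a highly trusted witness with poor access moves belief only weakly, which is the qualitative pattern of independent trustworthiness and access components that the abstract describes.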

11:50-13:00 Session 11: Keynote
Location: Macmillan 117
11:50
The Evolution of Human Thinking

ABSTRACT. Great apes cognitively represent and make inferences about their experience of the world. Humans, in addition, represent their experience perspectivally and “objectively”, and they make inferences about it recursively and reflectively. The Shared Intentionality Hypothesis posits that these uniquely human forms of cognitive representation and inference emerged evolutionarily as cognitive adaptations for dealing with a distinctive form of social life, specifically, a form in which individuals had to coordinate their intentional states with others in cooperative, and ultimately cultural, activities. Learning to coordinate and communicate with others in these ways during ontogeny creates uniquely human objective-reflective-normative thinking.

 

13:00-14:00 Lunch Break - Kasper Multipurpose Room
13:00-14:00 Session 12: Poster Session
13:00
Neural Correlates of Relational Reasoning: An ALE meta-analysis of its neural basis and modalities

ABSTRACT. Relational reasoning is a specific subtype of reasoning in which the premises comprise two or more objects and a piece of relational information linking them. For the reasoner, the main task is to abstract this relational information. An important and frequently reported neural basis specifically for relational reasoning is the posterior parietal cortex (PPC). In order to gain further insights about the specific subcomponents of the PPC involved in relational reasoning, and with special regard to the superior parietal lobule (SPL), we conducted an Activation Likelihood Estimation (ALE) meta-analysis considering 40 neuroimaging studies. We tested for two types of reasoning, inductive and deductive, to differentiate neural activation involved in different types of reasoning processes. Further, we discerned and contrasted four modalities of reasoning, namely visual, visuospatial, spatial and abstract relational reasoning. Our results support the frontal and parietal cortices’ involvement in relational reasoning, as predicted by the assumption that reasoning is facilitated by the fronto-parietal network. Additionally, we found that the SPL is consistently activated across categories but exhibits increasing involvement with increasing abstractness of the premises, which further supports its contribution to the generation of mental models representing the premises.

13:00
Is the color of “Culture” visual or auditory? A study on abstract concepts with the Extrinsic Simon task

ABSTRACT. Abstract concepts – concepts the referents of which are not material, perceivable, single, concrete entities – constitute a challenge for the grounded accounts of knowledge (Mahon & Caramazza, 2008). Recent theories on abstract concepts (Words As Tools, WAT: Borghi & Cimatti, 2009 and Borghi & Binkofski, 2014) posit that while both concrete and abstract concepts activate sensorimotor networks, the linguistic network is activated more by abstract than by concrete concepts given that the mode of acquisition of abstract concepts relies more on language. As a consequence, the acoustic modality is relevant for abstract concepts, since the corresponding words and the verbal explanation of their meaning would be activated. To test this hypothesis an experiment with the Extrinsic Simon Task (De Houwer, 2003) was conducted. Participants classified visual and auditory white words (e.g., “bright”, “echoing”) on the basis of their meaning and concrete and abstract colored words (e.g., “horse”, “culture”) on the basis of their color. Reaction times for the colored words conveying abstract concepts were faster when the correct response was the response that was also assigned to auditory white words (p < .05). This constitutes implicit evidence that abstract concepts are grounded in sensory modalities and activate the acoustic modality.

13:00
Mitigating against cognitive bias when eliciting expert intuitions
SPEAKER: Kate Devitt

ABSTRACT. Experts are increasingly being called upon to build decision support systems. Expert intuitions and reflective judgments are subject to a similar range of cognitive biases as those of ordinary folk, with additional levels of overconfidence bias in their judgments. A formal process of hypothesis elicitation is one way to mitigate some of the impact of systematic biases such as anchoring bias and overconfidence bias. Normative frameworks for hypothesis or ‘novel option’ elicitation are available across multiple disciplines. All frameworks acknowledge the importance and difficulty of generating hypotheses that are (a) sufficiently numerous, (b) lateral, (c) relevant, and (d) plausible. This paper explores whether systematic hypothesis generation can generate the desired degree of creative, ‘out-of-the-box’ style options, given that abductive reasoning is one of the least tractable styles of thinking and appears to resist systematization. I argue that while there is no universal systematic hypothesis generation procedure, experts can be exposed to deliberate and systematic information ecosystems to reduce the prevalence of certain types of cognitive biases and improve decision support systems.

13:00
Complex Emergent Modularity – A new approach to reasoning research
SPEAKER: Colin Wastell

ABSTRACT. Building on Wastell (2014), we discuss the contribution that Global Workspace Theory (GWT) can make to Complex Emergent Modularity (CEM) and the parallels that underpin these potentially complementary theories. This combination will then be explored in light of recently published empirical data which highlighted the importance of expertise in mathematical reasoning and examined the relationship between expertise, working memory and problem solving (Purcell, Wastell & Sweller, in press). We then explore how the CEM/GWT combination could facilitate the generation of specific and testable hypotheses, particularly in the areas of analogical reasoning and creativity.

13:00
The Permeability of Fictional Worlds

ABSTRACT. Real people sometimes appear in fiction, for example, Napoleon appears in War and Peace. However, readers may independently believe, without the explicit direction of the author, that a person who never actually appears in a novel could potentially appear there. For example, Queen Victoria never appears in David Copperfield, but readers may believe that she could appear there. In two experiments, we find evidence that readers think that a real person could appear in specific novels and physically interact with a character, given spatial and temporal constraints. Experiment 1 indicates that this potential contact is most likely to occur when the real person is in the same geographical region as the novel’s setting; for example, FDR is more likely to appear in The Great Gatsby than Stalin is. Experiment 2 shows possible contact is more likely to occur for congruent eras; readers believe that JFK is more likely to appear in To Kill a Mockingbird than Lincoln is. These results indicate that readers can import concurrent and co-regional real-world figures into a fictional world, suggesting that fictional worlds are more permeable than originally believed.

13:00
The Role of Trust in the Social Heuristic Hypothesis

ABSTRACT. Are we intuitively cooperative or selfish? According to the social heuristic hypothesis (SHH), people develop cooperative intuitions in their daily life that guide their behavior in unrelated domains. Specifically, the SHH argues that the relation between intuition and cooperation is moderated by trust in daily life interactions and experience with economic games. While various studies have provided support for the SHH, there are several open questions. In the present paper we considered the impact of a behavioral manipulation of trust, as well as alternative measures of it. In addition, we examined an individual difference moderator: preference for information processing. Results showed that a behavioral manipulation of trust increases cooperation. This appears to be a robust effect driven by information processing preferences rather than manipulations of cognitive processing. In addition, an experimental measure of trust was a better predictor of cooperation than self-report. We discuss the implications of our findings for the psychology of trust and cooperation, and suggest potential interventions to increase cooperation in the field.

13:00
Algorithms and Stories
SPEAKER: Teed Rockwell

ABSTRACT. Human knowledge was traditionally considered to be something that was stored and captured by words. Today, many argue that all genuine scientific knowledge is in the form of mathematical algorithms. However, neurocomputational algorithms can be used to justify the claim that there is genuine knowledge which is non-algorithmic. The fact that these algorithms use prototype deployment, rather than mathematics or logic, gives us good reason to believe that there is a kind of knowledge that we derive from stories that is different from our knowledge of algorithms. Even though we would need algorithms to build a system that can make sense out of stories, we do not need to use algorithms when we ourselves embody a system that learns from stories. The success of algorithms in the physical sciences has often resulted in an attempt to mathematize the humanities. The dynamic neurocomputational perspective can give us a better understanding of how we get knowledge and wisdom from the stories told by disciplines such as Literature, History, Anthropology and Theology. This new neurological data can be used to justify the traditional pedagogy of these disciplines, which originally stressed the telling of stories rather than the learning of algorithms.

13:00
Effects of dyslexia on problem solving: Strategies and interventions for syllogistic reasoning
SPEAKER: Kay Rawlins

ABSTRACT. When solving syllogisms, people can adopt either a spatial strategy, where spatial relations illustrate relations between terms, or a verbal strategy where the problem is represented in terms of letters and relational rules (Ford, 1995). People with dyslexia tend to adopt a spatial strategy when solving syllogisms while people without dyslexia tend to adopt a verbal strategy (Bacon, Handley & McDonald, 2007). But how fixed are these strategic approaches? Prowse-Turner and Thompson (2009) demonstrated that training on logical necessity and the possibility of multiple representations of the premises increased accuracy on syllogisms. We tested whether training that focuses on verbal or spatial representations of the problems affected performance for people with and without dyslexia.

Syllogisms were categorised into those easiest to solve for verbal reasoners, easiest for spatial reasoners, and equal for both types of reasoners, based on Ford’s (1995) results. Two training methods were used, a verbal method based on mental rules, and a spatial method based on Euler’s Circles. Overall, people with dyslexia performed worse on verbal and verbal/spatial problems, but did not differ from people without dyslexia on spatial problems. Training was effective in encouraging participants to switch solution strategies, but this appears independent of dyslexia group.

13:00
To Daydream is to Imagine Events: Conceptual, Theoretical, and Empirical Considerations

ABSTRACT. People’s minds often wander. A graduate student experiencing difficulty writing her dissertation might be distracted by her grungy keyboard, leading her to think of obsolete electronics at garage sales, leading her to think about playing basketball on a driveway (etc.). Or, noticing the keyboard could lead to her imagining an event: A newly-minted assistant professor, she sits in her finely appointed office, proofreading her in-press article on a state-of-the-art computer (etc.). The student’s chain of thought is different than her imagined event in ways that are of significance to psychological science. Yet, the two kinds of cognition often are labelled interchangeably as mind-wandering and daydreaming. I argue for restricting the definition of daydreaming to ‘imagining events.’ In support, I give a brief historical account of researchers’ varied definitions of daydreaming and show that this variation, and the associated equating of the concept with mind-wandering, is problematic. I next demonstrate that my definition represents the core of the concept, resists conflation with mind-wandering and other concepts, does not entail a strong theoretical position, and is in accord with lay understanding of the term. I conclude with an overview of research on daydreaming (as defined here) to establish promising future research directions.

13:00
Are Japanese People More Trusting And Cooperative Than Europeans? Evidence From Centipede Games With Commitment-Enhancing Tools

ABSTRACT. Theories of trust have distinguished between general trust in situations of social uncertainty—typical of US culture—and assurance-based trust in committed, long-term relationships—typical of Japanese culture. This paper aims to test these theories and investigate European-Japanese differences in trust and cooperation with a behavioral task: the Centipede game. This game involves two players repeatedly choosing between cooperation and defection. Cooperation maintains the relationship while benefitting both players in the long run, whereas defection terminates the relationship with a favorable outcome to oneself. The Centipede game is, thus, a model of repeated reciprocal interactions necessitating assurance-based trust to sustain cooperation. In addition to the standard Centipede game, this study included a treatment condition offering players the option of purchasing commitment-enhancing tools to further increase social certainty. Overall, Japanese participants were more cooperative than Europeans, confirming higher assurance-based trust in the Japanese sample. Furthermore, Japanese participants purchased more commitment-enhancing tools, demonstrating their preferences for social certainty. The purchase of commitment-enhancing tools significantly improved cooperativeness in both groups. However, the Japanese differed from the European participants in that they interpreted refusal to purchase commitment-enhancing tools as a signal of non-cooperative intent, causing a decrease in assurance-based trust and, consequently, in cooperativeness.

13:00
Ultimate attribution error and memory distortion support racial bias in moral judgment
SPEAKER: Andrej Findor

ABSTRACT. Biased moral judgment related to perceived membership in ethnic and racial groups is widely researched, but little is known about what validates it and makes it resistant to change. In the present study, we hypothesized that the ethnic identity of people committing a morally ambiguous behaviour influences the moral judgment of this act, the dispositional and situational attributions of responsibility, and the memory of the behaviour. A representative sample of 387 Slovaks read the story of a man working illegally while receiving social benefits. In a between-subjects design, the protagonist was either Slovak (in-group), Hungarian (out-group) or Roma (stigmatised out-group). Participants were harsher when judging and punishing the Roma protagonist than the Slovak or Hungarian. Bias in moral judgment was significantly related to negative stereotypes about the Roma. Participants tended to attribute dispositional responsibility to the Roma protagonist and situational responsibility to the Slovak and Hungarian protagonist. Participants had a more negatively biased recall of the Roma family situation than of the Slovak or Hungarian situation. Thus, the biased responsibility attributions and memories contribute to eliciting and perpetuating ethnically biased moral judgment. The novel group-centred approach to moral judgment adopted here should complement the current act-based and person-centred approaches.

14:00-15:30 Session 13A: Symposium
Location: Macmillan 117
14:00
Dynamic inference and belief revision
SPEAKER: Mike Oaksford

ABSTRACT. We must frequently revise our beliefs in the light of new information. How we should do this against the backdrop of a large network of beliefs can be problematic. For example, in the philosophy of science it is known as the Quine-Duhem problem. Any prediction relies on many other auxiliary hypotheses that may protect a core theory or common sense rule from predictive failure. Logically, understanding how our beliefs change requires an account of non-monotonic inference. Probabilistically, Bayesian conditionalization relies on new information not altering conditional probabilities by too much. However, very simple examples suggest that new information can significantly alter the probability distributions over the knowledge we take to be relevant to an inference. In particular, this information may be specifically about a change in a conditional probability. Such non-monotonic changes in probability distributions have been addressed theoretically in the framework of minimising mutual information between old and new distributions. However, within the psychology of reasoning there has been very little empirical research in this area. This symposium brings together philosophers, experimental psychologists, and modellers to present the current state of the field, to look at recent experimental work and theoretical developments.

David Over & Jean Baratgin

Belief change in psychology: Theoretical distinctions.

Jean Baratgin, Brian Ocak, Hamid Bessaa, & Jean-Louis Stilgenbauer

Natural updating: Evidence from eye tracking.

Jiaying Zhao

Implicit updating of representations based on statistical regularities.

Sangeet Khemlani

Inferential dynamics from possibilities.

Stephan Hartmann

The no alternatives argument and the relevance of beliefs about alternative theories

14:00-15:30 Session 13B: Symposium
Location: Friedman Auditorium
14:00
Legal Reasoning

ABSTRACT. The term “legal reasoning” usually refers to the reasoning done by appellate judges as they rule on legal issues. And learning to “think like a lawyer” usually refers to the training law students get that enables them to understand those rulings and to make legal arguments. But there is so much more to legal reasoning in terms of who does it, what it concerns, and how it is done. Spellman’s mini-keynote will briefly describe how psychology has (finally) become a respected type of analysis in law (as long as you don’t call it “psychology”). She will illustrate ways in which topics that have been well-represented in our Thinking Conference over the years can inform various aspects of legal reasoning including beliefs about how judges (really) reason, what the rules of evidence ought to say, and analyses within discrimination law. Schauer, author of the book “Thinking Like a Lawyer”, will describe how judges’ reasoning is not supposed to be a free-for-all game of who can make the most of analogical reasoning. Rather, judicial reasoning is supposed to involve the constraints of legal rules and legal precedents. But can it and does it? Wilkinson-Ryan will explore an instance of informal legal reasoning that most of us engage in every day: contract interpretation. Consumer contracting is a ubiquitous and unavoidable fact of modern life, so much so that contract formation and interpretation cannot possibly depend on subjective agreement to known terms, because no one reads boilerplate (nor should they). What do people think they are getting into when they sign without reading? How do they understand their own obligations, and the obligations of the drafter? Lombrozo will address laypeople’s moral judgments within the category of strict liability crimes – that is, crimes that do not require criminal intent.
Laws that can be violated without any kind of guilty intent allow investigations of the import of rule violations (bad acts) separate from the import of mental state (guilty minds) in moral and legal judgments. Mikhail will argue that key elements of harmful battery and other legal wrongs form critical building blocks of moral cognition and that data from a wide range of experimental studies can be best explained with reference to unconscious legal reasoning.

Organizer Barbara A. Spellman

Speakers Barbara A. Spellman (University of Virginia School of Law) Frederick Schauer (University of Virginia School of Law) Tess Wilkinson-Ryan (University of Pennsylvania Law School) Tania Lombrozo (UC Berkeley) John Mikhail (Georgetown University Law Center)

14:00-15:30 Session 13C: Symposium
Location: Watson CIT 165
14:00
Dual Process Theory 2.0
SPEAKER: Wim De Neys

ABSTRACT. The two-headed, dual process view of human thinking has been very influential in the cognitive sciences. The core idea that thinking can be conceived as an interplay between a fast-intuitive and slower-deliberate process has inspired a wide range of psychologists, philosophers, and economists. However, despite the popularity of the dual process framework it faces multiple challenges. One key issue is that the precise interaction between intuitive and deliberate thought processes (or System 1 and 2, as they are often referred to) is not well understood. There is little dispute that sometimes intuitions can be helpful and sometimes deliberation is required to arrive at a sound conclusion. But how does our reasoning engine decide which route to take? Are both processes activated simultaneously from the start, or do we initially rely on the intuitive system and switch to deliberate processing when it is needed? And how do we know whether deliberation is needed and determine whether merely relying on our intuitions is warranted or not? The various speakers in this symposium will give an overview of empirical work and recent advances in dual process theorizing that have started to focus on these fundamental outstanding issues.

Speakers and talks

A Three-Stage Dual-Process Model of Analytic Engagement Gordon Pennycook, Jonathan Fugelsang, & Derek Koehler (University of Waterloo, Canada)

Reasoning About Structure and Knowledge: A Case For Parallel Processing Dries Trippas (Max Planck Institute for Human Development, Germany) & Simon Handley (Macquarie University, Australia)

Dual Process Theory: Evidence from Event-Related Potentials Adrian Banks (University of Surrey, UK)

Logical Intuitions and Other Conundrums for Dual Process Theories Valerie Thompson (University of Saskatchewan, Canada)

Thinking Slow … “Fastly”: Towards a Hybrid Dual Process Future? Wim De Neys (CNRS & Paris Descartes University, France)

14:00-15:30 Session 13D: Symposium
Location: Barus and Holley 159
14:00
Rationality and normativity: The place of normative models in higher mental processing

ABSTRACT. We acknowledge generous sponsoring from the Priority Program New Frameworks of Rationality (SPP1516)

Duration: Double-length symposium

Symposium abstract: What is human thinking like? What ought human thinking be like? There is an unavoidable tension between these two questions, the is question and the ought question. For more than half a century, theorists highlighted the infamous normative-descriptive gap: that human thinking deviates from normative standards such as extensional logic and the probability calculus. But is the gap as wide as it seems, and should we care? The answer depends on which side of the rationality debate you are on. Panglossians see both the is and the ought of human rationality as well-aligned: humans reason, make decisions, and judge moral action as they ought to. For Panglossians, the is and the ought are evolutionarily programmed to coincide: humans conform to normative Bayesian rules as a matter of basic survival. Meliorists, on the other hand, highlight the normative-descriptive gap, focusing on the discrepancies between behaviour and normative systems. For Meliorists, the is and the ought of rationality seldom match, or match only for a handful of gifted, motivated reasoners. What both Panglossians and Meliorists share is a normativist viewpoint: They agree that rationality should be measured against normative standards. In contrast, for Descriptivists, questions of ought are beyond the pale of the psychology of thinking, not least because asking them invariably leads psychologists to draw the dubious inference from is to ought. Two recent discussions, in BBS and in Frontiers in Cognitive Science, identified several important issues such as the seemingly soft boundaries between normativism and descriptivism; the distinction between normative theories and computational-level theories; the distinction between different types of rationality (in particular normative versus instrumental); whether the Bayesian framework in cognitive science can be considered descriptive as well as normative; proposals for soft normativism; and the role of context and relativity of thinking.
In this symposium we explore issues of rationality and normativity in human higher mental processing: reasoning, decision making, and moral judgement. The symposium addresses some of the most fundamental questions that the New Frameworks of Rationality priority program (http://www.spp1516.de/en/index.html) seeks to address: Are humans rational? If so, how? If not, why not? How do we judge? Much in line with the vision of the New Frameworks for Rationality, the symposium is interdisciplinary.

#1 session, Friday, 5th, 14:00-15:30 Introduction with Markus Knauff (15 min) (1) Ulrike Hahn (Birkbeck College): Why normative status matters, and why Quantum Probability fails (25 min) (2) Ed Stupple (Derby University) and Linden Ball (University of Central Lancashire): In defence of soft normativism in reasoning research (25 min) (3) Mikkel Gerken (University of Edinburgh): Is epistemology a normative guide to the empirical study of folk epistemology? (25 min)

15:30-15:50 Coffee Break
15:50-17:20 Session 14A: Symposium
Location: Macmillan 117
15:50
The bat and ball problem
SPEAKER: Andrew Meyer

ABSTRACT. The Bat and Ball problem has been held up as a thin-slice measure of the disposition or ability to engage in reflective thought, and is now included as a covariate in many studies. Performance on it has been shown to correlate with intertemporal choice, risky choice, moral reasoning, and belief in god. However, there is no empirically validated account of why people miss the problem at such high rates: why people conclude that the titular objects cost 10 cents and $1.00, despite the specification that their prices differ by $1.00. Any such account requires two parts: one to explain the initial belief that the ball and bat cost 10 cents and $1.00, and one to explain the failure to override that belief. The data are most consistent with an initial response reflecting the original attribute substitution account, followed by an approximate check that is faithful to the actual problem text. The 10 cent error is most likely committed, not because people fail to check, but because the difference between 10 cents and $1.00 is close enough to $1.00 to satisfy the check that they do perform.
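The arithmetic behind the proposed account can be sketched in a few lines; the ±15-cent tolerance used for the "approximate check" is an illustrative assumption, not a value from the talk:

```python
# Standard problem: a bat and a ball cost $1.10 together;
# the bat costs $1.00 more than the ball.
bat_plus_ball = 1.10   # total price in dollars
difference = 1.00      # bat costs $1.00 more than the ball

# Intuitive (attribute-substitution) answer: split as $1.00 and $0.10.
intuitive_ball, intuitive_bat = 0.10, 1.00

# Correct answer: ball = (total - difference) / 2 = $0.05, bat = $1.05.
correct_ball = (bat_plus_ball - difference) / 2
correct_bat = correct_ball + difference

# The sum constraint is satisfied by the intuitive answer...
assert abs((intuitive_ball + intuitive_bat) - bat_plus_ball) < 1e-9
# ...but its price difference is $0.90, not $1.00:
assert abs((intuitive_bat - intuitive_ball) - difference) > 0.05

# An approximate difference check with a loose tolerance (assumed here
# to be +/- 15 cents) nonetheless lets the 10-cent error through,
tolerance = 0.15
assert abs((intuitive_bat - intuitive_ball) - difference) < tolerance
# while the correct answer satisfies the constraint exactly.
assert abs((correct_bat - correct_ball) - difference) < 1e-9
```

On this account, the error survives not because the check is skipped but because a $0.90 difference is close enough to $1.00 to pass an approximate check.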

16:08
What type of evidence counts for dual process theories of conditional reasoning?

ABSTRACT. Dual process theories postulate two qualitatively distinct types of processes that can be activated to perform the same cognitive function. However, dual-process claims differ across explanations of different cognitive capacities, and there is no general consensus about the distinctive characteristics of each type of process, nor about their interactions. We evaluate the adoption of the dual framework in the field of conditional reasoning with regard to, first, the space of alternatives in the debate between unitary and dual approaches to conditional reasoning, and second, the type of evidence that would count for or against each alternative. We focus on the answers given by alternative dual process accounts to three central questions: (i) whether the primary level of analysis is that of processes or systems, and how those levels are to be identified; (ii) what central features distinguish types of processes or systems; and (iii) whether and how the different processes or systems interact. We consider the implications of this analysis for alternative theories of conditional reasoning in the light of the empirical evidence that distinguishes them.

16:26
Fast cognitive reflection?: Examining the time course assumption of dual process theory
SPEAKER: Bence Bago

ABSTRACT. Influential dual process models of human thinking posit that reasoners typically produce a fast, intuitive heuristic (i.e., Type-1) response which might subsequently be overridden and corrected by slower, deliberative processing (i.e., Type-2). In four experiments, we directly tested this time course assumption with the well-known bat-and-ball problem. We used a two-response paradigm in which participants have to give an immediate answer and afterwards are allowed extra time before giving a final response. To knock out Type-2 processing and make sure that the initial response was intuitive in nature, we used concurrent load and a strict response deadline on the first response. Our key finding is that we frequently observe correct responses as the first, immediate response. Response confidence and latency analyses indicate that these initial correct responses are given fast, with high confidence, and in the face of conflicting heuristic responses. Follow-up studies that tested people’s response justifications further confirm that the initial correct responding is intuitive in nature. We sketch a revised dual process model in which the relative strength of different types of intuitions determines reasoning performance.

16:44
Revisiting the dual process account of thinking

ABSTRACT. The conception of human thinking as either fast and intuitive (System 1) or slow and deliberative (System 2) is widespread across several disciplines. The dual process account of cognition has also been central to the program on heuristics and biases in human judgement and decision making. The dissociation of intelligence and rationality established by Keith Stanovich and his colleagues showed that intelligence has only a weak to moderate relation with rationality. Further, they modified the dual process model and attributed human intelligence to System 2 thinking.

However, we still know very little about the processes of thinking, and any model explaining this phenomenon is only hypothetical, leaving scope for further conceptualization. In this regard, we question human intelligence as a manifestation of System 2 thinking and propose that intelligence, the computational mechanism underlying human thinking, is integral to both System 1 and System 2. This questions the widely accepted dual process account of thinking and challenges the dissociation of intelligence from rationality.

The computational theory of mind, the theory of the cognitive unconscious, the dual process theory of intelligence, and some of our empirical work on modes of processing and rationality help us address some of these questions and support our reconceptualization.

17:02
Understanding automatic and controlled intertemporal choice with a two-stage sequential sampling model

ABSTRACT. Dual process theories of decision making describe choice as the product of an automatic System 1, which is quick to activate but behaves impulsively, and a deliberative System 2, which is slower to activate but makes decisions in a rational and controlled manner. In this paper, we use this approach to analyze choice probabilities and response times (RTs) in intertemporal decisions. Consistent with the predictions of dual process theories, decision makers are quicker to choose immediate rewards compared to delayed rewards. On the individual level, the direction and magnitude of this tendency varies, but is correlated with standard measures of deliberative control, such as performance on the cognitive reflection task. We also fit our choice probabilities and RTs using a two-stage sequential sampling model, and find that this type of model is able to describe both aggregate and individual-level data. The best fitting model has a short stage 1 that appears to be insensitive to time delay, and a long stage 2 that takes into account both monetary payoffs and time delays.
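A two-stage accumulation process of the kind described can be sketched as a random walk whose drift changes between stages. Everything below (function name, hyperbolic discounting, parameter values) is our own simplified illustration, not the authors' fitted model:

```python
import random

random.seed(0)

# Illustrative sketch of a two-stage sequential sampling process:
# in stage 1 the drift favors the immediate option and ignores delay;
# in stage 2 the drift reflects both payoffs and (discounted) delays.
def choose(now_amount, later_amount, delay, k=0.05,
           stage1_steps=30, threshold=3.0, noise=1.0):
    later_value = later_amount / (1 + k * delay)  # hyperbolic discounting (assumed)
    evidence, t = 0.0, 0
    while abs(evidence) < threshold:
        t += 1
        if t <= stage1_steps:
            drift = 0.2                                  # stage 1: bias toward "now"
        else:
            drift = 0.1 * (now_amount - later_value)     # stage 2: value comparison
        evidence += drift + random.gauss(0, noise)
    return ("now" if evidence > 0 else "later"), t       # choice and RT in steps

choice, rt = choose(now_amount=50, later_amount=60, delay=30)
print(choice, rt)
```

In this toy version, choices resolved during stage 1 tend to favor the immediate reward, mirroring the paper's finding that immediate choices are faster; only runs that reach stage 2 weigh the delayed payoff.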

15:50-17:20 Session 14B: Symposium
Location: Friedman Auditorium
15:50
Cognitive models of evidential reasoning
SPEAKER: David Lagnado

ABSTRACT. Many decisions we confront, whether in legal, medical, financial or everyday contexts, require us to interpret and reason about evidence. How do we do this, and how well? This symposium explores various cognitive models of evidential reasoning, including causal, coherence-based and dual-system approaches.

Explanation-based Judgments and Decisions Reid Hastie

Many judgments and decisions are based on mental models. These models often derive from beliefs about what causes what, and these causal relations are used to infer what has happened, what will happen, and how to intervene in order to bring about desired states of the future world. One format for these explanations (causal mental models) is a narrative story organized around the major actors, actions, and events. This talk will report on recent findings from research on narrative explanation-based thinking.

Towards a General Framework of Biased Reasoning. Dan Simon

We review a range of well-established psychological phenomena generally considered to constitute biased reasoning. We propose that these biases stem from the bidirectional nature of human reasoning, driven by parallel constraint satisfaction mechanisms. Reasoning processes flow both forwards (from premises to conclusions) and backwards (from conclusions to premises). While the former is deemed by convention to be normative, the latter constitutes biased reasoning. In this light, the biases in question can be understood as instantiations of backwards reasoning, which is an inevitable feature of the connectionist nature of cognition.

Coherence-Based Judgments and Decisions: A Parallel Constraint Satisfaction Account Andreas Glöckner

Judgment and decision making require the efficient integration of many pieces of information under time constraints and uncertainty. The underlying mechanisms can be described by coherence-based reasoning: a sense-making process in which coherent interpretations are constructed under parallel consideration of all constraints. These models have been criticized as underspecified and overly flexible. I review recent model developments that solve this problem, present results from empirical investigations that support the core process assumptions of these models, and report simulations that demonstrate the ecological validity of these models by showing their ability to approximate Bayesian solutions.

Shaping Moods to Influence Decisional Accuracy in Fact-Finding. Mark Spottswood

I will review the literature on the ways that incidental moods influence the extent to which System 1 or System 2 dominates a decision-making task. I will then consider the circumstances in which either system might produce greater outcome accuracy in the context of legal disputes. Next, I will explore some ways that the design of a fact-finding institution might tend to systematically influence the valence of fact-finding mood. Lastly, I will offer some tentative suggestions regarding the optimality of a few different fact-finding institutions in light of that analysis.

Discussant: David Lagnado
15:50-17:20 Session 14C: Talks
Location: Watson CIT 165
15:50
Indicative conditionals and the search for the semantics-pragmatics distinction

ABSTRACT. Sentences like “If Hillary Clinton is running for President of the US, I ate a banana at least once in my life” strike us as odd; yet, according to the most prominent theories of conditionals, they should be evaluated as true or highly acceptable, provided that Hillary Clinton is actually running in the election and I ate a banana. Most philosophers and psychologists of reasoning consider the intuition that antecedents and consequents should be somehow connected to be a pragmatic rather than a semantic phenomenon. However, no one has offered a satisfactory explanation of how the pragmatics of conversation is supposed to account for this intuition. Moreover, the few psychological studies that introduced relevance manipulations into their designs do not help to resolve the debate either, not only because their authors obtained different results, but also because it is not entirely clear to what extent people’s responses in conditional reasoning tasks allow us to classify a phenomenon as semantic or pragmatic. After all, the semantics-pragmatics distinction is a theoretical one. In my talk, I will discuss the limitations of the experimental method in telling the semantics and pragmatics of conditionals apart.

16:08
Compounds and iterations of conditionals in probabilistic reasoning under coherence
SPEAKER: David E. Over

ABSTRACT. There is wide formal and experimental support for the hypothesis that the probability of the indicative conditional of natural language, P(if A then C), is the conditional probability of C given A, P(C|A), where C|A denotes de Finetti's conditional event. An objection to making this identification was that it appeared unclear how to form compounds and iterations of conditional events. Yet people do use and reason with such compounds and iterations in natural language. Intuitive examples will be discussed in our talk, and we will show how to overcome the objection in the framework of coherence-based probability theory. We interpret the compounds and iterations as conditional random quantities, which may be conditional events when there are certain logical dependencies. We consider the fundamental inference of centering, which is inferring a conditional "if A then B" from the premises A and B. Psychologists of reasoning have only recently started to study this inference. We will show how to extend it to compounds and iterations of both conditional events and biconditional events, B||A, and generalise it to n-conditional events. In conclusion, we discuss to what extent our approach can serve as a new rationality framework for new paradigm psychology of reasoning.
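As a purely illustrative numerical aside (ours, not part of the talk), the identification of P(if A then C) with P(C|A) can be checked by simulation: under de Finetti's conditional event C|A, trials where the antecedent is false are void and simply excluded from the count:

```python
import random

random.seed(1)

# Estimate P(if A then C) as P(C|A): void trials (A false) are not counted.
# The probabilities p_a and p_c_given_a are arbitrary illustrative values.
def estimate_conditional(trials=100_000, p_a=0.4, p_c_given_a=0.7):
    hits = valid = 0
    for _ in range(trials):
        if random.random() >= p_a:
            continue                      # antecedent false: conditional is void
        valid += 1
        if random.random() < p_c_given_a:
            hits += 1                     # antecedent and consequent both true
    return hits / valid

print(round(estimate_conditional(), 2))   # close to 0.7, i.e. P(C|A)
```

The estimate converges on the stipulated conditional probability rather than on the probability of the material conditional, which would also count every void trial as true.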

16:26
Emotion and Reasoning: A Metacognitive Perspective

ABSTRACT. The talk will draw on Thompson’s (2011) theory of metacognition in reasoning, which aims to identify the mechanisms that trigger effortful (Type 2) processing. We will discuss the metacognitive perspective in relation to the role of emotional content, a topic not yet integrated into the theory. We suggest that emotion serves as a metacognitive cue that triggers effortful processing. We will present a conditional inference task using fear-related versus neutral materials, matched for believability. The task uses a simplified version of the two-response paradigm developed by Thompson et al. (2011). Participants provide a fast first response and a feeling of rightness (FOR) rating; the task is then repeated with no time restriction. Changes between the first and second response provide a measure of effortful thinking. The findings suggest that emotion has a dual role. First, it moderates the effect of FOR: FOR and response change correlated only for fear-related materials, an effect that was replicated across items. Second, fear triggers low FOR, which then activates effortful processing: FOR was lower overall for fear-related materials. This effect was mediated by type of inference: for fear-related materials, participants changed their responses more for the denial inferences (MT, DA) than for the affirmation inferences (MP, AC). The opposite was true for neutral materials. We discuss whether the effect is task-specific.

16:44
Cognitive and affective indicators of implicit sensitivity to logic

ABSTRACT. Decades of research have highlighted that people are often biased when reasoning. On a more optimistic note, recent studies suggest that people are not totally blind to their biases. Two measures, which rely on quite different cognitive operations, have been used to assess people’s implicit sensitivity to conflict and logical validity: confidence (De Neys, Cromheeke, & Osman, 2011) and liking (Morsanyi & Handley, 2012). The present research explores whether the confidence and liking measures tap the same cognitive process. To this end, we explored how different factors well known to affect reasoning, namely validity, believability, and emotion, affected each of the two measures. Experiment 1 showed that the confidence measure was primarily sensitive to validity, while Experiment 2 highlighted that the liking measure was sensitive to all three factors. Experiment 3 compared the two measures directly in the same sample. We present a structural equation model of confidence and liking sensitivities, showing that the two measures, although they are differently affected by believability, share the same sensitivity to validity, and to a lesser extent to emotion. We discuss the possibility that the confidence and liking measures, despite their superficial differences, capture a single process underlying people’s logic detection.

17:02
Content effect in reasoning from an incompatibility: More evidence for a retrieval model
SPEAKER: Janie Brisson

ABSTRACT. Many studies have shown the great variability of human reasoning with inferences of the same logical form but differing in content. Markovits and Barrouillet (2002) suggested that this content effect is attributable to the number of available counterexamples to a putative conclusion, these being retrieved from long-term memory. Since this model mainly relies on evidence concerning conditional reasoning, we propose to extend it by looking at inferences from a statement of incompatibility between two propositions. These inferences present two invalid forms, for which the logical response is one of uncertainty. We predicted that participants would produce a greater proportion of uncertainty responses to the invalid forms when more counterexamples are available. We used three classes of premises with an increasing number of available counterexamples, previously identified in a pretest. Analyses showed the predicted pattern: for both invalid forms, the number of logically correct responses was greater for premises with many available counterexamples than for those with a moderate number, which in turn was greater than for premises with few counterexamples. These results provide additional evidence of the link between the content effect and the ability to retrieve counterexamples from long-term memory.

15:50-17:20 Session 14D: Talks
Location: Barus and Holley 159
15:50
Rationality and normativity: The place of normative models in higher mental processing

ABSTRACT. We acknowledge generous sponsorship from the Priority Program New Frameworks of Rationality (SPP1516).

Duration: Double-length symposium

Symposium abstract: What is human thinking like? What ought human thinking to be like? There is an unavoidable tension between these two questions, the is question and the ought question. For more than half a century, theorists have highlighted the infamous normative-descriptive gap: human thinking deviates from normative standards such as extensional logic and the probability calculus. But is the gap as wide as it seems, and should we care? The answer depends on which side of the rationality debate you are on. Panglossians see the is and the ought of human rationality as well aligned: humans reason, make decisions, and judge moral action as they ought to. For Panglossians, the is and the ought are evolutionarily programmed to coincide: humans conform to normative Bayesian rules as a matter of basic survival. Meliorists, on the other hand, highlight the normative-descriptive gap, focusing on the discrepancies between behaviour and normative systems. For Meliorists, the is and the ought of rationality seldom match, or match only for a handful of gifted, motivated reasoners. What Panglossians and Meliorists share is a normativist viewpoint: they agree that rationality should be measured against normative standards. In contrast, for Descriptivists, questions of ought are beyond the pale of the psychology of thinking, not least because asking them invariably leads psychologists to draw the dubious inference from is to ought. Two recent discussions, in BBS and in Frontiers in Cognitive Science, identified several important issues, such as the seemingly soft boundaries between normativism and descriptivism; the distinction between normative theories and computational-level theories; the distinction between different types of rationality (in particular normative versus instrumental); whether the Bayesian framework in cognitive science can be considered descriptive as well as normative; proposals for soft normativism; and the role of context and the relativity of thinking.
In this symposium we explore issues of rationality and normativity in human higher mental processing: reasoning, decision making, and moral judgement. The symposium addresses some of the most fundamental questions that the New Frameworks of Rationality priority program (http://www.spp1516.de/en/index.html) seeks to address: Are humans rational? If so, how? If not, why not? How do we judge? Much in line with the vision of the New Frameworks for Rationality, the symposium is interdisciplinary.

#2 session, Friday, 5th, 15:50-17:20 (1) Keith Stenning (University of Edinburgh): Discourse semantics and the practicalities of theoretical reasoning (25 min) (2) Niels Skovgaard Olsen (Konstanz University, Freiburg University): Conditionals and multiple norms conflicts (25 min) (3) Shira Elqayam (De Montfort University): Grounded rationality meets the Equation (25 min) (4) Roundtable discussion with Markus Knauff (15 min)

17:20-18:30 Session 15: Keynote
Location: Macmillan 117
17:20
Core Cognition and its Aftermath
SPEAKER: Lance Rips

ABSTRACT. A current and very influential theory in psychology holds that infants have innate, perceptual systems that endow them with surprisingly high-level concepts—for example, concepts of the mass/count distinction, of cardinality, and of causality. These initial core concepts then provide the building blocks (or starting points) for later adult ideas within these domains. This talk reviews the evidence for core cognition in these areas and argues that the infant concepts in question, if they exist, don’t have the right properties to explain how children learn their way to adult thoughts about language, number, and cause.