
09:00-10:00 Session 2: Keynote I

Keynote I

Location: Aula KOL-G-201
Digital Humanitarians: How You Can Make a Difference During the Next Disaster

ABSTRACT. The overflow of information generated during disasters can be as paralyzing to humanitarian response as the lack of information. This flash flood of information -- often called Big Crisis Data -- comes in the form of social and mainstream media as well as aerial and satellite imagery. Making sense of this data deluge is proving to be an impossible challenge for humanitarian organizations. This is precisely why they’re turning to Digital Humanitarians. Digital Humanitarians use crowdsourcing and artificial intelligence to make sense of this big crisis data. This talk will highlight real-world examples of Digital Humanitarians in action from across the globe. It will also highlight the biggest challenges they face in crafting and using human and machine computing solutions to accelerate humanitarian efforts. Special emphasis will be placed on the challenge of analyzing aerial imagery captured by civilian drones during disasters, as this is currently one of the main problems facing the humanitarian community. Members of the HCOMP and CI communities can play a pivotal role in helping humanitarian organizations accelerate their relief efforts worldwide. This talk will explain exactly how, and will serve as a springboard for ongoing and future collaboration between HCOMP, CI and the humanitarian community.

11:00-12:30 Session 5: First paper session
Location: Aula KOL-G-201
Simon's Anthill: Mapping and Navigating Belief Spaces

ABSTRACT. In the parable of Simon's Ant, an ant follows a complex path along a beach to reach its goal. The story shows how the interaction of simple rules and a complex environment results in complex behavior. But this relationship can be looked at in another way -- given the path and the rules, we can infer the environment. With a large population of agents -- human or animal -- it should be possible to build a detailed map of a population's social and physical environment. In this abstract, we describe the development of a framework to create such "maps" of human belief space. These maps are built from the combined trajectories of a large number of agents. Currently, these maps are built using multidimensional agent-based simulation, but the framework is designed to work with data from computer-mediated human communication. Maps incorporating human data should support visualization and navigation of the "plains of research", "fashionable foothills" and "conspiracy cliffs" of human belief spaces.

Finding Mnemo: Hybrid Intelligence Memory in a Crowd-Powered Dialog System

ABSTRACT. Crowd-powered conversational systems—in which ever-changing groups of remote human workers collectively hold a conversation with end users—can help bootstrap automated dialog systems by generating training data in real scenarios, and can succeed where well-trained automated approaches fail. However, since no one worker is present during all sessions, these systems fail to remember all relevant information from interactions that span multiple sessions, leading over time to the loss of conversational context. In this paper, we introduce Mnemo, a crowd-powered dialog system plug-in that uses collective processes and automated support to maintain a "collective memory" of user conversations through crowd-generated, future-relevant facts. We show that individuals can predict long-term relevant facts about end users across a range of topics, and present an analysis of worker errors. Finally, we combine individual worker performance with automated clustering methods that aggregate groups of workers to achieve a 33% improvement, and show that we can reach > 90% recall with just five workers.

Networks of influence in small group discussions

ABSTRACT. When making decisions, forming judgments, or solving multidimensional problems, people often engage in face-to-face discussions to exchange information and ideas. However, the conditions under which a group performs well or poorly during such a discussion remain unclear. Recently, Woolley et al. revealed the existence of a ‘collective intelligence factor’ that is predictive of group performance. That factor is not associated with the skills of the individual group members, but correlates with the social sensitivity of the individuals, that is, their ability to integrate the arguments of others and to balance the speaking turns across all group members. This result suggests that group performance is mostly determined by the dynamics that operate during the discussion, that is, the pattern of communication and the social influence processes. Here, we investigate these dynamics by means of simulations. For this, we describe the group as a small social network in which each group member is represented by a node, and all the nodes are connected to one another by weighted ties representing the extent to which individuals influence each other. How does the structure of the network determine group performance? Can groups of people converge to the optimal structure by means of social learning?

CrowdRev: A platform for Crowd-based Screening of Literature Reviews

ABSTRACT. In this paper and demo we present a crowd and crowd+AI based system, called CrowdRev, that supports the screening phase of literature reviews, achieving the same quality as author classification at a fraction of the cost, and near-instantly. CrowdRev makes it easy for authors to leverage the crowd, and ensures that no money is wasted even in the face of difficult papers or criteria: if the system detects that the task is too hard for the crowd, it simply gives up trying (for that paper, for that criterion, or altogether), without wasting money and without ever compromising on quality.

The Effect of Automatic Feedback on Effort and Collective Intelligence in Distributed Virtual Teams

ABSTRACT. One of the biggest challenges in distributed virtual team collaboration is for members to know if others are withholding effort or contributing equally to shared tasks [Alnuaimi, Robert Jr., and Maruping, 2010]. Researchers suggest that withholding effort is more likely to occur in such teams due to the limited shared social context, relative anonymity, and the inability to observe other members’ effort and performance [Chidambaram and Tung, 2005]. However, current technology allows more opportunities for visibility and synchronicity in distributed teamwork, including the ability to visually reflect the relative effort of each team member. Visual feedback has been found to positively affect team communication [DiMicco, Hollenback and Bender, 2006]; however, its impact on team effort, and consequently team performance, is still not clear. To test the effect of visual automatic feedback on team effort and performance, 335 MBA students were randomly assigned to 87 distributed virtual project teams that completed the Test of Collective Intelligence [TCI; Kim et al, 2017]. Teams were randomly assigned to one of two conditions: with or without visual feedback about member effort. The results indicate that visual feedback of relative effort positively contributes to team effort and, consequently, team performance.

Enabling Expert Critique with Chatbots and Micro Guidance

ABSTRACT. Feedback is essential to creative work. Creators can receive many kinds of feedback on their work, from informal reactions and kudos to detailed, critical analyses that can significantly improve it. Experts have historically provided critique within physical studios, directly collocated with creators they had often never met before. However, getting experts and creators together at the same time in the same physical space is hard: experts generally have limited time, complex schedules, and are distributed across the globe. To give creators access to expert critique at scale, we introduce MATT, a chatbot that guides crowds of experts to critique creative work, especially that of novices starting to create designs.

14:00-15:00 Session 7: Keynote II

Keynote II

Location: Aula KOL-G-201
Optimizing the Human-Machine Partnership with Zooniverse

ABSTRACT. Citizen science - the involvement of hundreds of thousands of people in the research process - provides a radical solution to the challenge of dealing with the greatly increased size of modern data sets. Zooniverse is the most successful collection of online citizen science projects, having enabled over 1.7 million online volunteers to contribute to over 120 research projects spanning disciplines from astronomy to zoology. University of Minnesota’s Dr. Lucy Fortson will briefly describe the Zooniverse platform and some of the results to date from the Zooniverse collection of online projects in the context of new approaches to combining machine learning with human classifications. She will then provide an overview of recent data science experiments with the ultimate goal of producing a system that most efficiently balances the human and machine classifications. She will finish with a short description of future developments of the Zooniverse platform.

15:30-17:00 Session 9: Second paper session
Location: Aula KOL-G-201
Social learning strategies for matters of taste

ABSTRACT. Most choices people make are about "matters of taste" on which there is no universal, objective truth. Nevertheless, people can learn from the experiences of individuals with similar tastes who have already evaluated the available options---a potential harnessed by recommender systems. We mapped recommender system algorithms to models of human judgment and decision making about "matters of fact" and recast the latter as social learning strategies for "matters of taste." Using computer simulations on a large-scale, empirical dataset, we studied how people could leverage the experiences of others to make better decisions. Our simulation showed that experienced individuals can benefit from relying mostly on the opinions of seemingly similar people; inexperienced individuals, in contrast, cannot reliably estimate similarity and are better off picking the mainstream option despite differences in taste. Crucially, the level of experience beyond which people should switch to similarity-heavy strategies varies substantially across individuals and depends on (i) how mainstream (or alternative) an individual's tastes are and (ii) the level of dispersion in taste similarity with the other people in the group.
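As a toy illustration of the two strategy families compared above, the sketch below contrasts a "mainstream" estimate (average everyone's rating of an option) with a similarity-weighted estimate for one user and one unseen option. The ratings matrix and the similarity measure here are invented for illustration; the paper's simulations use a large-scale empirical dataset and map actual recommender-system algorithms onto judgment models, which this sketch does not reproduce.

```python
import numpy as np

# Toy ratings matrix: rows = users, columns = options; np.nan = not yet evaluated.
ratings = np.array([
    [5.0, 1.0, 4.0, np.nan],
    [4.0, 2.0, 5.0, 3.0],
    [1.0, 5.0, 2.0, 1.0],
    [5.0, 1.0, np.nan, 4.0],
])

def mainstream_estimate(ratings, item):
    """'Pick the mainstream option': average everyone's rating of the item."""
    return float(np.nanmean(ratings[:, item]))

def similarity_weighted_estimate(ratings, user, item):
    """Weight other users' ratings of the item by taste similarity on co-rated items."""
    weights, values = [], []
    for other in range(ratings.shape[0]):
        if other == user or np.isnan(ratings[other, item]):
            continue
        shared = ~np.isnan(ratings[user]) & ~np.isnan(ratings[other])
        if shared.sum() < 2:  # too little overlap to estimate similarity
            continue
        # Crude similarity: inverse of mean absolute rating difference on shared items.
        sim = 1.0 / (1.0 + np.mean(np.abs(ratings[user, shared] - ratings[other, shared])))
        weights.append(sim)
        values.append(ratings[other, item])
    return float(np.average(values, weights=weights))

print(mainstream_estimate(ratings, 3))                 # mainstream strategy
print(similarity_weighted_estimate(ratings, 0, 3))     # similarity-heavy strategy
```

User 0's tastes closely match user 3's, so the similarity-weighted estimate is pulled toward user 3's high rating, while the mainstream estimate averages over everyone indiscriminately.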

Collective Intelligence for Deep Reinforcement Learning

ABSTRACT. We draw upon the literature on collective intelligence as a source of inspiration for improving Deep Reinforcement Learning (DRL). Implicit in many algorithms that attempt to solve DRL tasks is the network of processors along which parameter values are shared. So far, existing approaches have implicitly utilized fully-connected networks, in which all processors are connected. However, the scientific literature on collective intelligence suggests that complete networks may not always be the most effective information network structures for distributed search through complex spaces. To our knowledge, no work has explored theoretically and experimentally how the network topology of communication between learning agents affects deep reinforcement learning - and machine learning in general. In this work, we introduce the notions of ensembles, network topology and independent node-level agent updates to the Evolution Strategies algorithm - chosen because it allows for large numbers of parallel learning agents. We show that to sample the search space efficiently (parametrized by the variance of parameter updates), agents need to communicate within certain families of sparse network topologies. We then experimentally observe that such sparser networks indeed significantly improve performance in a variety of hard DRL benchmarks. Hence, our collective intelligence-enhanced variant of the ES algorithm requires less communication, achieves higher rewards, and is decentralized.
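A minimal sketch of the core idea: a population of ES learners, each performing an independent node-level update, that share parameters only along a communication graph. Everything here (the toy objective, the ring topology, the hyperparameters, the neighbor-averaging rule) is an illustrative assumption, not the authors' algorithm or benchmarks.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Toy function to maximize: negative sphere, optimum at the origin.
    return -np.sum(x ** 2)

def ring_topology(n):
    """Sparse topology: each agent communicates only with its two ring neighbors."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def full_topology(n):
    """Fully connected baseline: every agent communicates with every other agent."""
    return {i: [j for j in range(n) if j != i] for i in range(n)}

def run_es(topology, n_agents=10, dim=5, steps=200, pop=20, sigma=0.1, lr=0.02):
    params = [rng.normal(0, 1, dim) for _ in range(n_agents)]
    for _ in range(steps):
        # Independent node-level ES update at each agent.
        new_params = []
        for i in range(n_agents):
            noise = rng.normal(0, 1, (pop, dim))
            rewards = np.array([objective(params[i] + sigma * e) for e in noise])
            rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
            grad = (rewards[:, None] * noise).mean(axis=0) / sigma
            new_params.append(params[i] + lr * grad)
        # Communication step: average each agent's parameters with its neighbors.
        params = [np.mean([new_params[j] for j in [i] + topology[i]], axis=0)
                  for i in range(n_agents)]
    return max(objective(p) for p in params)

print("ring:", run_es(ring_topology(10)))
print("full:", run_es(full_topology(10)))
```

Swapping the topology dictionary is the only change between the sparse and fully connected variants, which is what makes the topology itself the experimental variable.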

Photo Sleuth: Combining Collective Intelligence and Computer Vision to Identify Historical Portraits

ABSTRACT. Identifying people in photographs is a critical task in a wide variety of domains, from national security to journalism to human rights investigations. Yet, this task is challenging for both state-of-the-art computer vision approaches and expert human investigators. Automated face recognition technologies are powerful tools, but can be limited by real-world constraints such as poor-quality imagery, exclusion of relevant distinguishing features, and high false-positive rates.

We propose an innovative solution to overcome these constraints by augmenting face recognition with crowdsourced human visual capabilities, aiming to improve "last mile" analysis where users must carefully analyze many high-quality candidates. We applied this approach to the challenge of identifying people in historical photographs, specifically American Civil War soldier portraits, which offer rich visual clues and compelling motivations for identification.

Our solution is a novel five-step software pipeline, built on the foundation of a website called Civil War Photo Sleuth, that leverages the complementary strengths of collective intelligence and computer vision. A user seeking to identify a soldier portrait categorizes visual clues with tags that are linked to a reference database of 15,000 identified soldier photos and corresponding military service records. These tags serve as filters to narrow the search results, which are further narrowed and sorted by face recognition. Finally, a crowdsourcing workflow searches the most similar candidate photos for distinctive features identified by the user, who can review a shortlist and confirm the identification.
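The filter-then-rank step of the pipeline can be sketched as follows. The records, tags, and three-dimensional "embeddings" below are invented stand-ins; the real system uses a reference database of roughly 15,000 identified photos and a production face recognition model.

```python
import numpy as np

# Hypothetical reference records: (name, visual-clue tags, face embedding).
records = [
    ("A. Smith", {"kepi", "infantry"}, np.array([0.9, 0.1, 0.0])),
    ("B. Jones", {"kepi", "cavalry"},  np.array([0.2, 0.9, 0.1])),
    ("C. Brown", {"slouch hat"},       np.array([0.8, 0.2, 0.1])),
]

def search(query_tags, query_embedding, records):
    """Filter records by user-supplied tags, then sort survivors by face similarity."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    # Tags act as filters: keep only records carrying all of the query's tags.
    candidates = [r for r in records if query_tags <= r[1]]
    # Face recognition then sorts the narrowed candidate set.
    return sorted(candidates,
                  key=lambda r: cosine(query_embedding, r[2]),
                  reverse=True)

results = search({"kepi"}, np.array([1.0, 0.0, 0.0]), records)
print([name for name, _, _ in results])
```

The returned shortlist is what a user (or a crowdsourcing workflow) would then inspect for distinctive features before confirming an identification.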

This project promises to transform historical research of visual material and build the foundation for significantly more effective techniques for modern person identification in national security and law enforcement contexts.

Toward Safer Crowdsourced Content Moderation

ABSTRACT. While most user-generated content posted on social media platforms is benign, some image, video, and text posts violate terms of service and/or platform norms (e.g. due to nudity, obscenity, etc.). At the extreme, such content can include child pornography and violent acts, such as murder, suicide, and animal abuse. Ideally, algorithms would automatically detect and filter out such content, and machine learning approaches toward this end are certainly being pursued. Unfortunately, algorithmic accuracy remains today unequal to the task, thus making it necessary to fall back on human labor. While social platforms could ask their own users to help police such content, such practice is typically considered untenable since these platforms want to guarantee their users a safe, protected Internet experience within the confines of their curated platforms.

Consequently, the task of filtering out such content often falls today to a global workforce of paid human laborers who are willing to undertake the job of commercial content moderation to flag user-posted images which do not comply with platform rules. To more reliably moderate user content, social media companies hire internal reviewers, contract specialized workers from third parties, or outsource to online labor markets. While such work might be expected to be unpleasant, there is increasing awareness and recognition that long-term or extensive viewing of such disturbing content can incur significant health consequences for those engaged in such labor, somewhat akin to working as a 911 operator in the USA, albeit one with potentially less institutional recognition and/or support for the detrimental mental health effects of the work. It is rather ironic, therefore, that precisely the sort of task one would most wish to automate (since algorithms could not be "upset" by viewing such content) is what the "technological advance" of Internet crowdsourcing is now enabling: shifting such work away from automated algorithms to more capable human workers.

In a court case scheduled to be heard at the King County Superior Court in Seattle, Washington in October 2018, Microsoft was sued by two content moderators who said they developed post-traumatic stress disorder. Recently, there has been an influx of academic and industry attention to these issues, as manifested in conferences organized on content moderation. While this attention suggests increasing awareness of professional and research interest in the work of content moderators, few empirical studies have been conducted.

In this work, we aim to answer the following research question: How can we reveal the minimum amount of information to a human reviewer such that an objectionable image can still be correctly identified? Assuming such human labor will continue to be employed in order to meet platform requirements, we seek to preserve the accuracy of human moderation while making it safer for the workers who engage in it. Specifically, we experiment with blurring entire images to different extents such that low-level pixel details are eliminated but the image remains sufficiently recognizable to accurately moderate. We further implement tools for workers to partially reveal blurred regions in order to help them successfully moderate images that have been too heavily blurred. Beyond merely reducing exposure, putting finer-grained tools in the hands of the workers provides them with a higher degree of control in limiting their exposure: how much they see, when they see it, and for how long.

Preliminary pilot data collection and analysis on Amazon Mechanical Turk (AMT), conducted as part of a class project, asked workers to moderate a set of "safe" images, collected judgment confidence, and queried workers regarding their expected emotional exhaustion or discomfort were this their full-time job. We have further refined our approach based on these findings and next plan to proceed to primary data collection, which will measure how the degree of blur and the provided controls for partial unblurring affect the moderation experience with respect to classification accuracy and emotional wellbeing.

Content-based pornography and nudity detection via computer vision approaches is a well-studied problem. Violence detection in images and videos using computer vision is another active area of research. Hate speech detection is another common moderation task for humans and machines.

Crowdsourcing of privacy-sensitive materials also remains an open challenge. Several methods have been proposed in which workers interact with obfuscations of the original content, thereby allowing for the completion of the task at hand while still protecting the privacy of the content's owners. Examples of such systems include those by Little and Sun [2011], Kokkalis et al. [2013], Lasecki et al. [2013], Kaur et al. [2017], and Swaminathan et al. [2017]. The crowdsourcing of obfuscated images has also been done in the computer vision community for the purpose of annotating object locations and salient regions.

Our experimental process and designs are inspired by Das et al. [2016], in which crowd workers are shown blurred images and click regions to sharpen (i.e., unblur) them, incrementally revealing information until a visual question can be accurately answered.

We have collected images from Google Images depicting realistic and synthetic (e.g., cartoons) pornography, violence/gore, as well as "safe" content which we do not believe would be offensive to general audiences. We manually filtered out duplicates, as well as anything categorically ambiguous, too small or low quality, etc., resulting in our final dataset. Table 1 shows the distribution of images across each category and type (i.e., realistic, synthetic).

Rather than only having workers indicate whether an image is acceptable or not, we task them with identifying additional information which could be useful for training automatic detection systems. Aside from producing richer labeled data, companies may also require moderators to report and escalate content depicting specific categories of abuse, such as child pornography. At the same time, we wish to protect the moderators from such exposure. We design our task as follows.

Our HIT is divided into two parts. The first part is the moderation portion, in which workers are presented images to classify as belonging to the categories in Section 3.1. We use this set-up for six stages of the experiment with minor variations. Stage 1: we do not obfuscate the images at all; the results from this iteration serve as the baseline. Stage 2: we blur the images using a Gaussian filter with standard deviation sigma=7. Stage 3: we increase the level of blur to sigma=14. Figure 1 shows examples of images blurred at sigma={0, 7, 14}. Stage 4: we again use sigma=14 but additionally allow workers to click regions of images to reveal them. Stage 5: similarly, we use sigma=14 but additionally allow workers to mouse-over regions of images to temporarily unblur them. Stage 6: workers are shown images at sigma=14 but can decrease the level of blur using a sliding bar.

By gradually increasing the level of blur, we reveal less and less information to the worker. While this may better protect workers from harmful images, we anticipate that this will also make it harder to properly evaluate the content of images. By providing unblurring features in later stages, we allow workers to reveal more information, if necessary, to complete the task.
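The staged blurring and region-reveal interactions described above can be sketched as follows, using scipy's Gaussian filter on a random array standing in for an image; the actual task interface, stimuli, and blur implementation in the study may differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_image(image, sigma):
    """Blur an H x W x C image with a Gaussian filter over the spatial axes only."""
    if sigma == 0:
        return image.copy()
    return gaussian_filter(image, sigma=(sigma, sigma, 0))

def reveal_region(original, blurred, y, x, radius):
    """Sketch of the click-to-reveal tool: restore a square patch around (y, x)."""
    out = blurred.copy()
    ys = slice(max(0, y - radius), y + radius)
    xs = slice(max(0, x - radius), x + radius)
    out[ys, xs] = original[ys, xs]
    return out

# A random array stands in for an image; blur it at the three experimental levels.
img = np.random.default_rng(0).random((64, 64, 3))
stages = {s: blur_image(img, s) for s in (0, 7, 14)}

# Stage 4-style interaction: reveal a patch of the heavily blurred image.
revealed = reveal_region(img, stages[14], y=32, x=32, radius=8)
```

Increasing sigma removes progressively more low-level pixel detail, while the reveal tool restores full detail only in the patch the worker explicitly requests.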

We also ask workers to take a survey about their subjective experience completing the task. The survey contains questions measuring various variables, including positive and negative experience and affect, emotional exhaustion, and the perceived ease of use and perceived usefulness of the blurring interface. As our goal is to alleviate the psychological burden which may accompany content moderation, these measures will help us evaluate the extent to which obfuscating images successfully relieves workers.

By designing a system to help content moderators better complete their work, we seek to minimize possible risks associated with content moderation, while still ensuring accuracy in human judgments. Our experiment will mix blurred and unblurred adult content and safe images for moderation by human participants on AMT. This will enable us to observe the impact of obfuscation of images on participants' content moderation experience with respect to moderation accuracy, usability measures, and worker comfort/wellness. Our overall goal is to develop methods to alleviate potentially negative psychological impact of content moderation and ameliorate content moderator working conditions.

How Intermittent Breaks in Interaction Improve Collective Intelligence

ABSTRACT. People influence each other when they interact to solve problems. Such social influence introduces both benefits (higher average solution quality due to exploitation of existing answers through social learning) and costs (lower maximum solution quality due to a reduction in individual exploration for novel answers) relative to independent problem solving. In contrast to prior work, which has focused on how the presence and network structure of social influence affect performance, here we investigate the effects of time. We show that when social influence is intermittent, it provides the benefits of constant social influence without the costs. Human subjects solved the canonical travelling salesperson problem in groups of three, randomized into treatments with constant social influence, intermittent social influence, or no social influence. Groups in the intermittent social influence treatment found the optimum solution frequently (like groups without influence) but had a high mean performance (like groups with constant influence); they learned from each other, while maintaining a high level of exploration. Solutions improved most on rounds with social influence after a period of separation. We also show that storing subjects’ best solutions so that they could be reloaded and possibly modified in subsequent rounds -- a ubiquitous feature of personal productivity software -- is similar to constant social influence: it increases mean performance but decreases exploration.

17:00-18:00 Session 10: First poster session

Posters 1

Changing the Innovation Game - Crowdsourcing in Incumbent Firms

ABSTRACT. Crowdsourcing is an emergent interdisciplinary theory and methodology, which in recent years has become widely diffused, raising significant questions in the innovation management literature concerning the adoption of crowdsourcing as an open innovation practice. Building on seminal research on open innovation adoption in large organizations (Chesbrough and Brunswicker, 2013; 2014), the current study presents qualitative findings on the innovation practices and strategies of incumbent firms transitioning from traditional innovation to crowdsourcing for open innovation. We discuss the impact of crowdsourcing technologies and methodologies on 1) the innovation processes, 2) the innovation content and 3) the overall scope of innovation to discover different stages of maturity in the innovation governance structures of incumbent firms (Deschamps & Nelson, 2014). Our study thus contributes to research on the firm side of crowdsourcing, providing much needed insights into the processes, procedures and structures that support the implementation of crowdsourcing for open innovation (Lüttgens et al., 2014).

A Human-Centered Perspective on Human–AI Interaction: Introduction of the Embodiment Continuum Framework

ABSTRACT. Artificial Intelligence (AI) can be seen as a new generation of machines capable of interacting with the environment by (a) gathering information from outside (including from natural language) or from other computer systems; (b) interpreting this information, recognizing patterns, inducing rules, or predicting events; (c) generating results, answering questions, or giving instructions to other systems; and (d) evaluating the results of their actions and improving their decision systems to achieve specific objectives [Ferràs-Hernández, 2017]. This broad technology-based definition does not refer to the form in which a user interacts with the technology; however, we are beginning to realize that the physical features of the AI agent are critical for understanding human-AI interaction. Based on our literature review, we develop a framework for understanding AI agency as a continuum of embodiment, starting with fully embedded AI (with no identifiable representation) and ending with a physically separate robotic entity operated by AI. Putting AI representation on a continuum of embodiment allows us, on the one hand, to include different types of AI with similar technological features, and on the other hand, to address AI from a human-centered perspective. Furthermore, this framework contributes to a better understanding of the impact of AI features on the psychological mechanisms that shape human-AI interaction. Focusing on cognitive and emotional trust as the main underlying mechanisms that drive cooperation, collaboration and compliance with AI, and reviewing the existing empirical research on human-AI interaction, this study aims to introduce the embodiment continuum framework and to provide directions for future human-centered research on AI.

Crowdoscope – An Interactive Survey Tool for Social Collective Intelligence

ABSTRACT. Traditional approaches to online opinion research can be problematic. In terms of qualitative research, discussion forums that present comments in lists do not scale well for large groups of people. Not only do they lead to information overload, they also have trouble ensuring that all comments receive equal attention (Faridani, Bitton, Ryokai, & Goldberg, 2010). Regarding quantitative research, online surveys have scalability, but they can often be tedious for participants to complete. Worse still, because there is no interaction between participants in a survey, an opportunity is being missed to capture Social Collective Intelligence. This is a form of insight that emerges “where social processes between humans are being leveraged and enhanced, by means of advanced Information and Communication Technologies.” (Miorandi, Rovastos, Stewart, Maltese & Nijholt, 2014 p. v). In order to solve some of the problems associated with conventional surveys and discussion forums, we present Crowdoscope: a visual and interactive opinion research tool for obtaining the Social Collective Intelligence of large groups of people. Incorporating ideas from deliberative polling, collaborative filtering and data visualisation as a user interface, Crowdoscope is a self-organising visual environment that can support an unlimited number of participants.

Market volatility and crashes in experimental financial markets with interactions between human and high-frequency traders

ABSTRACT. This work investigates how both the perceived and the actual presence of high-frequency trading (HFT) in financial markets may change the trading behavior of human participants and, ultimately, market dynamics, in order to contribute to the current debate on the impact of HFT on market dynamics and efficiency. In particular, we consider two different types of trading strategies commonly employed by high-frequency (HF) traders: layering/spoofing, which has been identified as a deceptive activity and associated with market manipulation, and market-making, which may have a beneficial effect on market quality. We run artificial trading experiments using an electronic continuous double auction with five treatments to disentangle the effect of each type of computer trader and of their (potential) presence. From these experiments, we find that the (potential) presence of computer traders seems: i. to attenuate the emergence of "bubble-and-crash" patterns; ii. to mitigate asset price volatility over time; iii. to change human subjects' behavior; iv. to prevent human subjects from learning the fundamental value of the asset over time. Overall, besides shedding further light on the impact of HFT on financial markets and market dynamics, these findings may help regulators in creating a level playing field for all market participants, reinforcing market integrity and transparency, and guaranteeing equal treatment of market participants.

Co-creating Collective Intelligence in Civic Tech: Pilot Study in Lithuania

ABSTRACT. While traditional approaches to public engagement and governmental reform remain relevant, this research paper focuses on the growing potential of the networked society to solve its social problems. The field of ICT-enabled Civic Technology platforms (or Civic Tech) is growing 23% annually. Around the world, civic organizations, individual citizens and even businesses experiment with ICT tools and available open resources to collaborate with each other and with the government to find innovative solutions for societal problems. To support this, the international scientific community publishes research results about the creative power of networked systems and their potential to grow “collective intelligence”. The main task of the pilot study, implemented in Lithuania in 2017, was to develop a conceptual framework to reflect the co-creation processes in Civic Tech by considering Civic Tech platforms as Collective Intelligence Ecosystems.

Crowd Dynamics in Small Teams in Higher Education

ABSTRACT. This article aims to empirically study “crowd” dynamics in small teams and complement the work presented in Tucci et al. [2016] and Tucci and Viscusi [2017]. As in previous papers, we use as a theoretical lens the framework and related typology of “crowd” dynamics discussed in Viscusi and Tucci [2015; 2018]. The framework considers the number of participants a sufficient, but not necessary, condition for crowdsourcing, and distinguishes different types of crowd dynamics according to their growth tendency, degree of seriality, and the intervening role of properties such as density, equality, and goal orientation in distinguishing the distribution of agents within and between the different types of “crowds”: communities, open crowds (multitudes [Virno 2004], e.g., Twitter users), closed crowds (controlled by intermediaries, such as Innocentive, that restrict growth and provide self-established boundaries), and groups as crowd crystals, potentially leading to any of the others. Furthermore, as in previous papers, another goal of the study is to provide the setting for experiments in business domains to investigate how crowd characteristics may lower or increase “crowd capital,” here defined as the total number of crowd units having a demonstrated effectiveness in idea generation or task achievement [Tucci et al. 2016]. This definition adopts a more outcome-oriented perspective compared to other definitions emerging from this research stream [Lenart-Gansiniec 2016]; thus, our definition complements the conceptualization by Prpić & Shukla [2013, p.35035] and Prpić et al. [2015]. Finally, the article aims to contribute to research on coordination in temporary groups [Valentine and Edmondson 2014] as well as on how to dynamically assemble and manage paid experts from the crowd through flash teams [Retelny et al. 2014].

Is Novelty an Advantage or a Drawback in Equity Crowdfunding?

ABSTRACT. In recent years, crowdfunding has diversified and grown beyond most experts' projections. Originally aimed at serving venture ideas and entrepreneurs outside the focus of traditional capital markets, the crowdfunding marketplace has developed a complicated relationship with novel ideas. Yet there is little to no research on the relationship between project novelty and success in crowdfunding. This paper measures the novelty of crowdfunding campaigns using the content and language of their pitches, capturing their tendency to combine different venture sectors and topics in distinctive ways. Using a unique data set that covers four years of activity on a leading equity crowdfunding platform, we investigate the link between novelty and success, as well as how novelty appeals to different kinds of investors. We find that novelty derived from campaign pitches is negatively related to fundraising success, even when controlling for the quality and style of writing. We also find that novel campaigns are more likely to attract less-frequent, large-sum investors. Our findings contribute to the long-standing debate on the trade-offs between innovativeness and conventionality in maximizing the chances of startup survival. Our results also have important implications for entrepreneurs writing fundraising pitches and for platform providers who wish to facilitate successful innovation.

Crowd Work on a CV? Understanding How AMT Fits into Turkers' Career Goals and Professional Profiles
SPEAKER: Saiph Savage

ABSTRACT. To understand how workers currently view their experiences on AMT, and how they publicly present and share these experiences in their professional lives, we conducted a survey study with workers on AMT (n=98). The survey combined multiple-choice, binary, and open-ended (short paragraph) items gauging Turkers' perceptions of their experiences on AMT within the context of their broader work experience and career goals. This work extends existing understandings of who crowd workers are and why they do crowd work by seeking to better understand how crowd work factors into Turkers' professional profiles, and how we can subsequently better support crowd workers in their career advancement. Our survey results can inform the design of better tools to empower crowd workers in their professional development both inside and outside of AMT.

19:00-23:00 Banquet

CI 2018 / HCOMP-18 Joint Banquet