ATLC 2015: ATLANTA CONFERENCE ON SCIENCE AND INNOVATION POLICY
PROGRAM FOR FRIDAY, SEPTEMBER 18TH

08:30-10:00 Session 11A: The Value of Innovation
08:30
Partners for Exploration and Partners for Exploitation in the R&D Value Chain: An Event Study of Joint Patenting in the Pharmaceutical Industry

ABSTRACT. __Introduction__ Joint patenting is regarded by legal experts as a last-resort option for sharing intellectual property among co-applicants. Both the number and the share of jointly owned patents have, however, increased in recent years. A question therefore arises: does joint patenting bring co-applicants benefits that outweigh the costs associated with joint ownership of patents? Only a limited number of studies pay attention to the implications of joint patenting for the behaviour and performance of firms (Fier et al., 2012; Hagedoorn et al., 2003; Khoury et al., 2011; Kim et al., 2007). To the best of the author's knowledge, no study has examined the impact of joint patenting on firm value directly. To fill this void, this study analyses pharmaceutical patents to examine the firm-level performance implications of joint patenting. It analyses how the attributes of co-applicants (i.e. R&D partners) determine the direction and magnitude of those impacts from the viewpoint of organizational ambidexterity (Tushman et al., 1996).

__Research Methodology, Data and Sample__ This study uses a standard event study methodology based on the market-adjusted return model to measure the market response to the series of patenting events (e.g. filing, registration, expiration) that patents undergo during their life cycle. In particular, it uses the cumulative abnormal return (CAR) summed across a three-day event window as a proxy for firm performance. The sample consists of pharmaceutical patents applied for between 1980 and 1995 and granted by the Japanese Patent Office, identified by IPC class A61K (Schmoch, 2008). The number of collected patents is 8,448, of which 794 (9.4%) are jointly owned, a share comparable to the literature (Hagedoorn, 2003; Walsh et al., 2009). The study also examines the impact of the manufacturing approval of new drugs on firms' market performance, since manufacturing approval is an important event in pharmaceutical R&D. As for the financial data, the event study methodology requires the stock prices of publicly traded firms and a composite index to estimate normal market returns. This study collects the stock prices listed on the Tokyo Stock Exchange and the Nikkei 225 Index, a composite index of the 225 most actively traded stocks listed in the Tokyo Stock Exchange's First Section. Oft-used financial information such as firms' net sales and R&D expenditures is collected for analysis as well. In addition, information about M&A activities in the pharmaceutical industry is collected from firms' IR disclosures and news databases and incorporated prior to analysis.
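The market-adjusted return model described above lends itself to a compact sketch. The function below is a minimal illustration, not the study's code: abnormal returns are computed as the stock return minus the index return, then summed over a three-day window centered on the event day. All daily return figures are made up for demonstration.

```python
import numpy as np

def car(stock_returns, index_returns, event_day, window=1):
    """Cumulative abnormal return over a [-window, +window] day event window,
    using the market-adjusted return model: AR_t = R_t - R_m,t."""
    lo, hi = event_day - window, event_day + window + 1
    abnormal = np.asarray(stock_returns[lo:hi]) - np.asarray(index_returns[lo:hi])
    return abnormal.sum()

# Illustrative daily returns around a patent registration (hypothetical numbers)
stock = [0.001, 0.002, 0.010, 0.006, -0.001]
nikkei = [0.001, 0.001, 0.002, 0.001, 0.000]
print(round(car(stock, nikkei, event_day=2) * 100, 2))  # CAR in %, here ~1.4
```

In the study the same quantity would be computed per patent event and then averaged across events for the significance test.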

__Analysis and Results__ Jointly owned patents increase firms' stock market performance, and these impacts are larger than those of ordinary solely owned patents. For example, the CAR for the registration of jointly owned patents is positive (0.38%) and significant (t(558) = 2.55, p < 0.01). Next, this study uses OLS regression with robust standard errors to further examine the direction and magnitude of the impacts of co-applicant attributes on the firm's performance. CAR (%) is the dependent variable. Three independent variables capture the attributes of the co-applicant of a joint patent. Co-applicant's cumulative patents measures its R&D capacity: the 10-year cumulative sum of pharmaceutical patents, discounted at 15% annually. Co-applicant's Log(Sales) measures its size: the logarithm of net sales (in millions of yen) in the previous year. Co-applicant's CAR (%) is included as well, to assess the dispersion effect of the co-applicant's performance on the focal firm's performance. Fifteen control variables and year dummies are also included in the full model to control for attributes of the focal firms, individual patents, and unobserved temporal factors. Table 1 shows the juxtaposed estimation results of the full model (not included in the abstract submission). Among the results, the co-applicant's R&D capacity has a positive impact on the focal firm's performance, while its size has a negative impact.
Given that firms' R&D capacities and size are highly correlated in the pharmaceutical industry, and that the industry is science-based, with basic research playing an important role in innovation (Marsili, 2001), the results can be interpreted as follows: R&D alliances with large firms have positive impacts on firms' market performance where scientific capacity matters (basic and applied science) and negative impacts where commercialization efforts are important (clinical trials for manufacturing approval). The results highlight the tension between exploratory and exploitative activities in the matrix of the R&D value chain and partner selection in pharmaceutical R&D.
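The co-applicant patent-stock measure used in the regression, a 10-year cumulative sum of pharmaceutical patents discounted at 15% annually, can be sketched as below. Interpreting the discount as depreciation applied to older patent vintages is an assumption on our part, and the annual counts are hypothetical.

```python
def patent_stock(annual_counts, rate=0.15, years=10):
    """Discounted cumulative patent stock: sum of the last `years` annual
    patent counts, with each older vintage depreciated at `rate` per year."""
    recent = annual_counts[-years:]
    # The most recent year has age 0; the oldest considered year has the
    # largest age and hence the heaviest discount.
    return sum(n * (1 - rate) ** age
               for age, n in enumerate(reversed(recent)))

counts = [3, 5, 2, 4, 6, 1, 0, 2, 5, 3, 4]  # hypothetical yearly patent counts
stock = patent_stock(counts)                 # scalar R&D-capacity proxy
```

Only the last ten entries of `counts` enter the stock, mirroring the 10-year window in the abstract.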

08:45
Organizational Innovation for Product Imitation and Innovation -- Evidence from Chinese Manufacturing Firms
SPEAKER: unknown

ABSTRACT. In view of the importance of organizational innovation in improving firm competitiveness, this paper investigates the impact of organizational innovation on the performance of product imitation and innovation for Chinese manufacturing firms. 

The impact of organizational innovation on firm competitiveness has been demonstrated by several studies (Damanpour, 1991; Kimberly and Evanisko, 1981; Damanpour and Aravind, 2012). These studies mainly identify organizational innovation as the facilitator for the effective use of technology and an intermediate source of competitive advantage (Camison and Villar-Lopez, 2014). The effect of organizational innovation on productivity, lead times, and flexibility has been acknowledged using case studies (Mol and Birkinshaw, 2009). 

The term “organizational innovation”, however, is not clearly interpreted or measured. The difficulty lies in the lack of an appropriate indicator or approach to capture the multi-dimensional factors that organizational innovation incorporates, such as structure, procedure, and intra-organizational innovation (Armbruster et al., 2008). Hence, organizational innovation per se is still under-explored, as is its connection with firm performance. Furthermore, few studies identify the different roles of organizational innovation in promoting imitation and innovation.

The paper presents empirical evidence from a survey of 3,342 Chinese firms in manufacturing industries. All sample firms hold Technological Development Centers (TDC), authorized by either national or local government in 2009. In this survey, we define the process of organizational innovation along five aspects: organization of work, organization of production, knowledge management, payment schemes, and human resource management. Latent class analysis (LCA) is applied to estimate the performance of organizational innovation for the sample firms. The multinomial treatment effect (MTE) model is used to identify the effect of organizational innovation on product imitation and product innovation. The combination of these two methods addresses the endogeneity between organizational innovation and product innovation, thereby generating better-performing estimators.
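To make the LCA step concrete, here is a generic EM estimator for a latent class model with binary indicators, not the authors' estimator: each firm belongs to one of k unobserved classes, and within a class the five organizational-innovation practices are independent Bernoulli variables. The data below are synthetic, with two clearly separated classes standing in for the paper's five.

```python
import numpy as np

rng = np.random.default_rng(0)

def lca_em(X, k, iters=200):
    """EM for a latent class model: X is an (n firms x d binary practices)
    matrix; returns class shares, response probabilities, and hard labels."""
    n, d = X.shape
    pi = np.full(k, 1.0 / k)                 # class shares
    theta = rng.uniform(0.25, 0.75, (k, d))  # P(practice = 1 | class)
    for _ in range(iters):
        # E-step: posterior class membership for each firm
        log_lik = (X @ np.log(theta).T) + ((1 - X) @ np.log(1 - theta).T)
        log_post = np.log(pi) + log_lik
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: update shares and class-specific response probabilities
        pi = post.mean(axis=0)
        theta = np.clip((post.T @ X) / post.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
    return pi, theta, post.argmax(axis=1)

# Synthetic firms: an "advanced" class adopting most practices, a "basic" class few
adv = (rng.random((150, 5)) < 0.9).astype(float)
bas = (rng.random((150, 5)) < 0.1).astype(float)
pi, theta, labels = lca_em(np.vstack([adv, bas]), k=2)
```

With k=5 and real survey indicators, the recovered `theta` rows would correspond to the five firm types the paper labels type I through type V.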

The research findings can be summarised as follows. (1) LCA endogenously generates five types of organizational innovation for Chinese firms, ranging from lower to advanced levels. Based on their performance in R&D investment, product innovation, and profit, the five classes can be labeled R&D investor (type I), process innovator (type II), balanced development (type III), R&D and organizational innovator (type IV), and jack of all trades (type V).

(2) MTE results imply a synergetic development between organizational innovation and product innovation. On the one hand, higher levels of organizational innovation improve imitation intensity; in particular, type III and type IV firms demonstrate significantly higher imitation intensity than type I firms, implying the important role of knowledge management and R&D investment in facilitating product imitation. On the other hand, only the superior level of organizational innovation (type V) shows a significant impact on improving product innovation intensity.

(3) These results suggest that different degrees of organizational innovation correspond to corporate strategies of imitation or innovation. The transition of Chinese firms from imitation to innovation requires a comprehensive improvement in organizational innovation.

This study enriches the micro-level evidence on organizational innovation and firm performance by analyzing a large sample of Chinese manufacturing firms. It offers a better understanding of the interaction of organizational innovation, product innovation, and firm strategies, and generates two policy implications addressing indigenous innovation.

References
Armbruster, H., Bikfalvi, A., Kinkel, S. and Lay, G. (2008). Organizational innovation: The challenge of measuring non-technical innovation in large-scale surveys, Technovation 28(10): 644–657. 
Camison, C. and Villar-Lopez, A. (2014). Organizational innovation as an enabler of technological innovation capabilities and firm performance, Journal of Business Research 67(1): 2891–2902.
Damanpour, F. (1991). Organizational Innovation: A Meta-Analysis of Effects of Determinants and Moderators, The Academy of Management Journal 34(3): 555– 590. 
Damanpour, F. and Aravind, D. (2012). Managerial Innovation: Conceptions, Processes, and Antecedents, Management and Organization Review 8(2): 423–454. 
Kimberly, J. R. and Evanisko, M. J. (1981). Organizational innovation: The influence of individual, organizational, and contextual factors on hospital adoption of technological and administrative innovations, Academy of Management Journal 24(4): 689–713.
Mol, M. J. and Birkinshaw, J. (2009). The sources of management innovation: When firms introduce new management practices, Journal of Business Research 62(12): 1269–1280.

09:00
R&D Investment and Related Policies as Determinants of Green Patenting: A Cross-National Assessment

ABSTRACT. This paper assesses green patenting at the country level, with a focus on two crucial determinants: research policy support and R&D collaboration. More specifically, do the relevant policies in a particular country increase green patenting, and how does this compare with within-country and between-country green patenting collaboration? This comparison is an important one, as the connections between research policies and research collaboration in terms of green R&D output are still not clear, but collaboration is expected to be strongly associated with government research subsidies and carbon-emissions-related policies. The green patenting measures presented in this paper are consistent with a large body of research that draws on patent-based analysis, such as Griliches et al. (1990), Hall et al. (2002), and Schmookler (1966). Specifically, green patents are represented by the number of approved patents in accordance with the USPTO's environmentally sound technologies index, and such patents are counted by country and year from 1975 to 2013. By focusing specifically on green R&D, this approach effectively identifies communities of scientists and engineers producing green R&D, not unlike Dechezleprêtre et al.'s (2011) use of EPO patent data to show the dissemination of green technology across the world. A Poisson fixed-effects model is first used to estimate the effects of policies on patents, based on longitudinal data from the USPTO, OECD, and EIA. We then examine the network structure of green patenting collaboration in order to determine levels of collaboration both within and between countries. Finally, we triangulate both sets of findings to show how both contribute to overall patent output, in terms of both patents in general (approximately 57,000) and instances of collaboration (approximately 78,000).
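A Poisson fixed-effects model of the kind described above can be sketched with a numpy-only Newton-Raphson maximum-likelihood routine: country dummies absorb the fixed effects, and a single policy regressor stands in for the paper's policy variables. Everything here, including the panel dimensions, the "subsidy intensity" variable, and the true coefficient of 0.5, is synthetic and illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def poisson_mle(X, y, iters=50):
    """Newton-Raphson for Poisson regression: y_i ~ Poisson(exp(x_i' b)).
    The canonical-link log-likelihood is concave, so plain Newton suffices."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ b)
        grad = X.T @ (y - mu)                  # score vector
        hess = X.T @ (X * mu[:, None])         # observed = expected information
        b = b + np.linalg.solve(hess, grad)
    return b

# Synthetic panel: 20 countries x 30 years (all numbers hypothetical)
n_c, n_t = 20, 30
policy = rng.random(n_c * n_t)                 # e.g. subsidy intensity
country = np.repeat(np.arange(n_c), n_t)
alpha = rng.normal(1.0, 0.3, n_c)              # country fixed effects
y = rng.poisson(np.exp(alpha[country] + 0.5 * policy))
# Design matrix: country dummies (the fixed effects) plus the policy variable
X = np.hstack([(country[:, None] == np.arange(n_c)).astype(float),
               policy[:, None]])
beta = poisson_mle(X, y)                       # beta[-1] estimates the policy effect
```

With short panels, a conditional (within) Poisson estimator is often preferred over explicit dummies; the dummy formulation is used here only for transparency.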

09:15
Back to Basics: Heterogeneity in Scientific Disclosure and Firm Value in the Semiconductor Industry
SPEAKER: unknown

ABSTRACT. This paper quantifies the economic returns to corporate publishing in the scientific literature by U.S. semiconductor firms. Even though firms risk information leakage and the facilitation of imitation, there is ample evidence of firms actively contributing to public knowledge through the open disclosure of scientific knowledge. However, there is little research on how publishing affects firm value. We attempt to fill this gap through a market value approach (Griliches, 1981; Jaffe, 1986). While earlier work focused on R&D investments and (citation-weighted) patents as measures of knowledge capital (Hall et al., 2005; Belenzon, 2012), we extend this line of research to include articles in scientific journals as a novel form of knowledge asset, in line with the recent study by Simeth and Cincera (2013). We additionally examine heterogeneity in the nature of the knowledge disclosed in publications by examining how basic and applied science affect valuation differently. Much of the discussion so far has assumed that science is basic by definition, while a large share of publications cover research that is closer to application (Kline and Rosenberg, 1986). We explicitly distinguish between journals that are basic in nature, i.e. follow a quest for fundamental understanding, and more applied journals that contain knowledge produced with application in mind. Our empirical analysis is based on listed firms in the United States that report semiconductors as their main business line (four-digit SIC 3674) between 1980 and 2007. As many previous studies have focused on the life sciences, the semiconductor industry makes for an interesting case, with less direct reliance on science for product development but strong reliance on it for fundamental process research (Breschi & Catalini, 2010; Cohen, Nelson, & Walsh, 2002).
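The market value approach regresses (log) firm value on tangible assets and knowledge-asset intensities in the spirit of Griliches (1981). The sketch below simulates firm-year data and recovers the coefficients by least squares; the variable names, the split of publication stocks into basic vs. applied, and all coefficient values are hypothetical, chosen only to mirror the qualitative pattern the abstract reports (basic positive, applied negative).

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical firm-year data: tangible assets A and asset-scaled stocks of
# R&D, patents, and publications split into basic vs applied journals.
n = 400
A = np.exp(rng.normal(5, 1, n))
rd, pat = rng.random(n), rng.random(n)
pub_basic, pub_applied = rng.random(n), rng.random(n)
# True (made-up) valuation coefficients for the four knowledge intensities
g = np.array([0.8, 0.3, 0.4, -0.1])
logV = (np.log(A) + g @ np.vstack([rd, pat, pub_basic, pub_applied])
        + rng.normal(0, 0.05, n))
# Estimate: regress log(V) - log(A) on the knowledge-asset intensities
X = np.column_stack([rd, pat, pub_basic, pub_applied, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, logV - np.log(A), rcond=None)
```

In the paper's application the left-hand side would be based on observed market value (Tobin's q), and the publication stocks would come from Web of Science counts classified with the Hamilton (2003) scheme.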
We infer scientific output from publications in Thomson Reuters Web of Science and use the journal classification proposed by Hamilton (2003) to distinguish basic from applied research outputs. We find a negative relation between scientific publications and the valuation of intangible assets. This contrasts with earlier evidence of a positive relationship in a broader set of high-tech industries (Simeth & Cincera, 2013), and may stem from the low appropriability of research and the high incidence of patenting for intellectual-property protection that characterize the semiconductor industry (Agarwal, Ganco, & Ziedonis, 2009; Ziedonis, 2003; Hall & Ziedonis, 2001). However, basic publications have a strong positive relationship with value. We then investigate two possible drivers of this positive relationship: basic science contributing to better inventions, and basic science attracting inventors from academe. While we cannot formally test which explanation holds using our data, the preliminary evidence provided in this article speaks in favor of basic publications as a signal of a rigorous research environment to academic inventors.

References
Agarwal, R., Ganco, M., Ziedonis, R. 2009. Reputations for toughness in patent enforcement: implications for knowledge spillovers via inventor mobility. Strategic Management Journal, 30(13): 1349–1374.
Belenzon, S. 2012. Cumulative innovation and market value: evidence from patent citations. The Economic Journal, 122(559): 265–285.
Breschi, S., Catalini, C. 2010. Tracing the linkages between science and technology: an exploratory analysis of the research networks among scientists and inventors. Research Policy, 39(1): 14–26.
Cohen, W., Nelson, R., Walsh, J. 2002. Links and impacts: the influence of public research on industrial R&D. Management Science, 48(1): 1–23.
Fabrizio, K.R. 2009. Absorptive capacity and the search for innovation. Research Policy, 38(2): 255–267.
Griliches, Z. 1981. Market value, R&D, and patents. Economics Letters, 7(2): 183–187.
Hall, B., Jaffe, A., Trajtenberg, M. 2005. Market value and patent citations. RAND Journal of Economics, 36(1): 16–38.
Hall, B., Ziedonis, R. 2001. The patent paradox revisited: an empirical study of patenting in the U.S. semiconductor industry, 1979–1995. RAND Journal of Economics, 32(1): 101–128.
Hamilton, K. 2003. Subfield and level classification of journals. CHI report 2012-R. CHI Research Inc.
Jaffe, A. 1986. Technological opportunity and spillovers of R&D: evidence from firms' patents, profits, and market value. American Economic Review, 76(5): 984–1001.
Kline, S.J., Rosenberg, N. 1986. An overview of innovation. In Landau, R., Rosenberg, N. (eds.), The Positive Sum Strategy: Harnessing Technology for Economic Growth: 275–305. Washington, D.C.: National Academy Press.
Simeth, M., Cincera, M. 2013. Corporate science, innovation, and firm value. EPFL Working Paper Series no. 188283.
Ziedonis, R. 2003. Patent litigation in the US semiconductor industry. In Cohen, W., Merrill, S. (eds.), Patents in the Knowledge-Based Economy: 180–218. Washington, D.C.: National Academies Press.

08:30-10:00 Session 11B: Collaboration, Teams & Networks
08:30
Teams in R&D: Evidence from US Inventor Data
SPEAKER: unknown

ABSTRACT. SHORT ABSTRACT: This paper exploits U.S. patent data and a panel of inventors listed on U.S. patents since 1975 to investigate the determinants of teamwork in industrial R&D. Inventor team size as well as the duration of collaboration among team members have increased over the past several decades. The focus of the paper is a test of a model of dynamic team formation in which a firm must choose, and then over time re-balance, a team's composition, taking into account the gains to specialization, costs of coordination, technological change, and the risk that employee members of the research team will appropriate the firm's intellectual property. We use variation in policy towards noncompete agreements in employment contracts to identify the effect of researcher mobility and IP appropriation on team formation. We find that where researcher job mobility is low, teams tend to be larger and are more likely to repeat. Our evidence suggests that in assembling R&D teams, firms are sensitive to the costs of appropriation and/or coordination.

LONG ABSTRACT: Technological innovation increasingly occurs in teams, though team size varies across technological field, firm, geography and country. Firms that are unable to field large, diverse teams of researchers are arguably at a productivity disadvantage in R&D (see Wuchty, Jones, and Uzzi, 2007), and this disadvantage may be increasing. Firms that are unable to keep their teams intact over a sustained research campaign may also be at a disadvantage. The patent evidence shows that teams vary in longevity and that longer-lived teams are associated with higher-quality innovations.

This paper explores the determinants of team formation in industrial R&D, seeking to explain variation across time, location, and field in both team size and continuity. Our point of departure is a model of teams in which the optimal size and composition of teams balance the gains to specialization against coordination or information costs. In this framework, rising team size may be due to falling coordination costs, for example, because of improved strategies that limit free-riding and agency problems, or improvements in communication technologies. Team size may also be increasing because of the rising stock of knowledge; the optimal response by researchers to rising burden of knowledge may be greater specialization which requires more collaboration (Jones 2006).

Because we are also interested in understanding team persistence, and the standard model is static, we consider a variant in which the firm manages its R&D workforce over multiple periods. In each period the firm and its researchers experience technological and human-capital productivity shocks that change the optimal mix of skills, and therefore of workers, and that may lead researchers to depart and compete against the firm.

An innovation of this model is that it makes team formation part of the firm's IP protection strategy. With the increasing inter-firm mobility of scientists and engineers in the 1990s and early 2000s came many stories of high-tech firms actively encouraging defections among competitors' workforces to access their technologies. R&D-performing firms have always faced the prospect that their workers will leave for competitors or found start-ups of their own that compete directly against them. To the extent that firms face this appropriation cost, they have an incentive either to reduce the number of research personnel involved in a project or to "compartmentalize" their research projects; that is, to spread the research tasks across a greater number of researchers so that no single researcher has sufficient information to recreate the project alone. Thus, in addition to testing for gains to specialization and coordination costs as determinants of team size, team persistence, and team composition, our empirical strategy tests the threat of worker appropriation of IP as a determinant.

To test our model of team formation and persistence in industrial R&D, we exploit a panel data set of researchers that includes all inventors listed on U.S. patents since 1975. Because a patent lists each inventor who instrumentally contributed to the development of the underlying invention, we are able to construct measures of teams in industrial R&D. We update the evidence showing teams have been increasing in size and that the number and impact of lone researchers are falling. We show teams are remaining together over longer periods of time and more projects. To identify the effect of mobility on team size and persistence we use variation in non-compete covenant enforcement across states and time. Non-compete covenants are commonly incorporated into employment agreements of researcher employees. The results from our regression analyses of team size and team persistence suggest that researcher mobility as well as firm-level technological characteristics and coordination cost do matter.
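The identification strategy above amounts to regressing team outcomes on a measure of noncompete enforcement plus firm-level controls. The sketch below is a deliberately simplified cross-sectional version on synthetic data: the enforcement indicator, the control variable, and the true effect of 0.6 are all hypothetical, standing in for the paper's state-by-time enforcement variation and richer covariates.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical patent-level data: team size regressed on state noncompete
# enforcement (0/1, i.e. low researcher mobility) and a firm-level control.
n = 1000
enforce = rng.integers(0, 2, n).astype(float)   # noncompete enforcement dummy
rd_intensity = rng.random(n)                    # proxy for coordination costs
team = 2.5 + 0.6 * enforce + 1.0 * rd_intensity + rng.normal(0, 0.5, n)
X = np.column_stack([np.ones(n), enforce, rd_intensity])
b, *_ = np.linalg.lstsq(X, team, rcond=None)    # b[1]: enforcement effect
```

A positive `b[1]` reproduces the abstract's qualitative finding that teams are larger where researcher mobility is low; the actual paper exploits within-state changes in enforcement over time rather than a simple cross-section.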

08:45
The Anatomy of Teams: Division of Labor in Collaborative Knowledge Production
SPEAKER: unknown

ABSTRACT. Teams are increasingly important in knowledge production, yet how teams divide tasks among their members remains ill-understood. Complementing recent work that views innovation as the recombination of prior knowledge in different disciplinary domains, we conceptualize knowledge production as a process involving a number of functional activities such as conceptualizing the research study, performing the experiments, analyzing data, and writing the paper. We develop a theoretical framework for studying the functional division of labor in scientific teams that highlights three perspectives, each with an associated measure: (1) an individual-level perspective considering to what extent team members are specialized vs. engaged in multiple activities; (2) an activity-level perspective considering to what extent activities are concentrated among a few team members vs. distributed across many; and (3) an integrated perspective considering which activities tend to be performed by the same team members, as well as which activities tend to be performed by specialists vs. generalists. In the second part of the paper, we use this framework to examine division of labor empirically, using novel data on the activities of all authors who contributed to over 13,000 scientific articles. The data are from the journal PLOS ONE, which asks teams to disclose the individual contributions of all co-authors. We find that division of labor is stronger with respect to some functional activities than others, likely reflecting differences in the benefits from specialization and the interdependencies between activities. We also find a wide distribution of degrees of specialization across individuals, and specialization is systematically related to individual characteristics such as professional age and prior scientific accomplishment.
Consistent with economic theories, division of labor increases with team size but at a decreasing rate, leveling off well above the theoretical minimum. Thus, team members do not specialize as much as they could, potentially reflecting the high coordination and communication costs associated with a high division of labor. Moreover, while the share of members performing empirical activities is largely stable across the team size distribution, the share of members engaged in conceptual activities declines sharply, suggesting that conceptual activities may benefit less from parallel processing by multiple team members. In the third part of this paper, we use the data to explore differences in the levels and nature of division of labor between projects of different types: projects in one discipline vs. multiple disciplines, in established vs. new fields of science, and projects performed by purely academic teams vs. teams with industry involvement. Overall, our paper advances the understanding of scientific knowledge production in teams in two important ways. First, we propose a conceptual framework that clarifies functional division of labor and suggests a range of empirical measures that can be used in future work. Second, we use these measures to provide empirical insights into the division of labor in a large number of diverse projects. Given the difficulty of studying team processes at a large scale, these insights should be of interest in their own right; in addition, they illustrate an empirical approach that can be fruitfully exploited to address a number of important questions about the organization of scientific research in teams.
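The individual-level and activity-level measures in the framework can be computed directly from a binary author-by-activity contribution matrix of the kind PLOS ONE statements yield. The specific measures below (per-author breadth and a Herfindahl concentration index per activity) are plausible instances of the perspectives described, not necessarily the paper's exact operationalizations; the team and activity labels are hypothetical.

```python
import numpy as np

def division_of_labor(C):
    """C[i, j] = 1 if author i performed activity j (e.g. from a PLOS ONE
    contribution statement). Returns per-author breadth (individual-level
    perspective) and per-activity concentration (activity-level perspective),
    the latter as a Herfindahl index: 1 means one author does it all."""
    breadth = C.sum(axis=1)                      # activities per author
    shares = C / C.sum(axis=0, keepdims=True)    # author shares per activity
    hhi = (shares ** 2).sum(axis=0)
    return breadth, hhi

# Hypothetical 3-author team; activities: conceive, experiment, analyze, write
C = np.array([[1, 0, 1, 1],
              [0, 1, 1, 0],
              [0, 1, 0, 0]], dtype=float)
breadth, hhi = division_of_labor(C)
```

Here author 1 is a generalist (breadth 3) and author 3 a specialist (breadth 1), while conceiving and writing are fully concentrated (HHI = 1) and experimenting and analyzing are shared (HHI = 0.5).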

09:00
The emergence of molecular biology in the diagnosis of cervical cancer: A network perspective
SPEAKER: unknown

ABSTRACT. Cervical cancer is one of the most common cancers among women: about 530,000 new cases occur each year, causing about 275,000 deaths. Large screening programs are generally credited with its decreasing impact, though cervical cancer still represents an important issue in less developed countries. Diagnostic technologies based on cytological analysis of cells under the microscope, such as the Pap test, have dominated screening programs for decades despite their well-documented low sensitivity. The decline of this 'mono' diagnostic approach started only in the late 1990s, driven by major advancements in molecular biology building on earlier key pathological discoveries. Molecular biology enabled scientists to address the limited sensitivity of cytology-based testing technologies, thus spurring the emergence of a novel stream of diagnostics, which has changed the research landscape and clinical practices in cervical cancer considerably (Hogarth et al., 2012).

Different groups of actors may have significantly shaped the development and adoption of molecular diagnostic technologies in this domain. Innovation is indeed a 'distributed' process. Sources of innovation and agency are distributed across a large variety of actors. These actors, with their different interests, visions, and expectations, can steer the directionality of technical emergence, especially in the early period (e.g. Stirling & Scoones, 2009). Within this process, the networks resulting from interactions among these actors therefore have a key role. They provide actors with access to knowledge and resources, distribute power and control, and enable actors to build a reputation that extends beyond their local peers. Certain actors can strengthen the 'system of innovation' by increasing the cohesion of the network through mediation between otherwise weakly connected actors, whereas others can hold exclusive positions that in turn allocate power and control disproportionally.

Our understanding of the emergence process from a network perspective is limited. We know little about how actors interact over different phases of emergence and how they leverage their network positions differently. We aim to fill this gap by examining inter-organisational networks over the emergence of molecular biology in the diagnostic domain of cervical cancer research. To do so, we rely on bibliometric analysis of 4,722 publications related to the diagnosis of cervical cancer (from 1980 to 2011). Bibliometric data are used to identify actors and construct inter-organisational (co-authorship) networks. We classify the identified actors into 'institutional groups': Research and Higher Education (RHE), Governmental (GOV), Hospital and Care (HC), Industrial (IND), and Non-Governmental (NGO) organisations. We then examine the dynamics of the inter-organisational network at the level of dyads and triads. At the level of the dyad, we investigate patterns of tie formation within and between institutional groups and how those patterns evolve during emergence; at the level of the triad, we examine the extent to which actors mediate between other actors belonging to non-overlapping groups. To do so, we identify different brokerage roles: coordinator, gatekeeper, representative, itinerant broker, and liaison (Fernandez & Gould, 1994). Qualitative insights on the case study are used to complement these analyses.
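The Fernandez-Gould role taxonomy has a direct implementation: a broker's role in a two-step path is determined entirely by the group memberships of the three actors involved. The function below follows the standard definitions from Fernandez & Gould (1994), using the abstract's institutional group labels; the example triads are hypothetical.

```python
def brokerage_role(src, broker, dst):
    """Classify the Fernandez-Gould (1994) brokerage role of `broker`
    mediating a two-step path src -> broker -> dst, given each actor's
    institutional group label (e.g. 'RHE', 'GOV', 'IND')."""
    if src == broker == dst:
        return "coordinator"          # all three in the same group
    if src == dst != broker:
        return "itinerant broker"     # an outsider mediates within a group
    if broker == dst:
        return "gatekeeper"           # broker admits an outside contact
    if src == broker:
        return "representative"       # broker reaches out on its group's behalf
    return "liaison"                  # three distinct groups

print(brokerage_role("GOV", "RHE", "RHE"))  # prints "gatekeeper"
```

Applied to every two-step path in a co-authorship network, counting these roles per institutional group yields the brokerage profiles reported in the results.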

Results show that the process of tie formation differs across institutional groups and over the different phases of emergence. Certain actors were more active in establishing intra-group ties (RHE), while others collaborated more frequently with actors belonging to other groups (GOV, IND, NGO). Groups of actors also profiled differently across the brokerage roles. For example, RHE organisations were more likely to coordinate within their group and to act as gatekeepers. GOV organisations were more likely to act as itinerant brokers and liaisons in the early phases of emergence, whereas their gatekeeper role increased as the novel technology became more established. The HC, IND, and NGO groups were relatively less active in coordinating within their groups. These results are exploratory, but provide important insights into the roles different types of actors may play over different phases of technological emergence.

References
Fernandez, R. M., & Gould, R. V. (1994). A dilemma of state power: Brokerage and influence in the national health policy domain. American Journal of Sociology, 99, 1455–1491.
Hogarth, S., Hopkins, M. M., & Rodriguez, V. (2012). A molecular monopoly? HPV testing, the Pap smear and the molecularisation of cervical cancer screening in the USA. Sociology of Health & Illness, 34, 234–250.
Stirling, A. C., & Scoones, I. (2009). From risk assessment to knowledge mapping: Science, precaution, and participation in disease ecology. Ecology and Society, 14.

09:15
The role of macroculture in industrial renewal – networking and cluster policy perspective
SPEAKER: unknown

ABSTRACT. A major challenge for sectors today is how to continue to renew and adapt to radically new competitors, innovations, and opportunities emerging from across the globe. Building up collaborative networks and creating durable capabilities also gradually leads to strategic similarity and technological traditionalism over time. Sectors of inter-related companies with shared strategic beliefs run the risk of losing the ability to adapt in the long run. Many innovation policy instruments are geared towards encouraging networking and knowledge sharing among actors, mechanisms seen as having positive effects on firms' innovation performance. However, the very same instruments may be detrimental to building an industry's capability to renew. We base our argument on the concept of 'inter-organisational macroculture' (Abrahamson & Fombrun, 1994), which captures the observation that managers across organizations within a broader field, such as an industry or sector, share relatively similar industry-related beliefs, perceptions, and practices. Macrocultures can remain stagnant for extended periods if no external stimuli are introduced. Important innovation policy instruments currently in use, such as national or sectoral innovation programs that aim to affect interaction and collaboration between companies, are in fact shaping value-added networks. Because shared beliefs and perceptions are both generated by and mirror the networks that configure companies into such collective nets, the very instruments designed to grow relational and knowledge capital and promote innovation are simultaneously shaping macrocultures. Our objective is to shed light on the understudied effects of current network and cluster policy instruments on industrial macroculture and adaptive capability.
The notion of macroculture may provide an explanation for why we sometimes observe stagnation, industry decline and inadequate levels of innovativeness in innovation systems that appear well-developed. Our paper specifically asks what role networking and cluster policy plays in the formation of macroculture. Given that innovation policy has focused on networking to promote the sharing of knowledge and views, the main concern is how these incentives take into account the similarity of worldviews and the cognitive convergence that might create inertia and prevent industries from renewing. We believe it is not only timely but crucial for policy-makers to become aware of how promoting networks not only leads to the creation of competences, but may also affect longer-term adaptive capability through intangible mechanisms that to date have not been connected to innovation policy. Network and cluster policies should be sensitive to the dimensions that create macroculture and should offer incentives receptive to the capability building that companies need in order to adjust to industry change. We explore the formation of macroculture and its role in the creation of adaptive capability in two industry contexts, the game industry and forest-based bioenergy in Finland. Our primary data is survey data collected through structured phone interviews with the main decision-makers of companies. In addition to the survey, we have 20 semi-structured interviews with respondents at the innovation-system level, at intermediaries and at private companies. We also use policy documents as secondary material. We approach inter-organizational macroculture and renewal from the policy-instrument point of view: we investigate specific collaboration-promoting policy incentives in detail and compare participant and non-participant groups to assess the role of policy instruments in the formation of macroculture.
The overall aim of the study is to explore the formation of macroculture in order to understand its role in industry renewal. The survey aims to capture the initial belief and attitude systems (the probable macroculture) formed in the studied industries, complemented with interview and document data addressing specific networking and clustering policies. Since the concept of macroculture is difficult to grasp and little empirical evidence exists to date (O'Neill et al. 2004 being an exception), we aim to describe the macrocultures and to distil the factors influencing their formation by analysing similarities and differences between different groups of respondents. Our framework combines advances in the strategic management and institutional organization literatures with the literature on innovation systems and policy. Renewal has traditionally been addressed with tangible indicators; our approach is to introduce intangible aspects into the policy dialogue and deepen understanding of industrial renewal. We may thus also question whether macroculture has as strongly damaging an effect as the literature has so far suggested.

08:30-10:00 Session 11C: Star Scientists
08:30
Nobel Prize awarded research and commercialization – The role of the Laureates
SPEAKER: unknown

ABSTRACT. Even though there has been a general shift in science towards more collaborative research (Wuchty et al., 2007), earlier studies have highlighted the importance of the individual researcher in the translation of science. The idea is that knowledge transfers are mainly person-embodied, involving personal contacts, movements, and participation in national and international networks (Gibbons and Johnston, 1974).

Building on this notion, the concept of “star scientist” was first introduced by Zucker and Darby (1996). Their conclusion was that extraordinarily productive scientists in biotechnology act as both researchers and entrepreneurs and that they not only advance science but also play a key role in successful commercialization.

In this study we further elaborate on the top layer of researchers identified as "star scientists" and their involvement in the knowledge diffusion process. Using the Nobel Prize as a proxy for excellent research, this study focuses on the dissemination of knowledge and the extent to which Nobel Laureates (NLs) have been involved in the translation and commercialization of their breakthrough scientific discoveries.

We focus on the NLs in Physiology or Medicine over the last 35 years (1978-2013). We explore to what degree these "star scientists" have been involved in commercial activities and engagements with industry. We collect information about all 83 Laureates' involvement in industry collaborations, patent activity, start-ups and scientific boards over their entire careers. To judge the actual link between the radical discoveries and commercialization, we determine whether publications, patents and spin-offs are results of the discoveries that were awarded the Nobel Prize, since the Prize is the actual acknowledgment of excellence. Through in-depth interviews with 32 Nobel Laureates we discuss the motives behind engaging in different industrial activities.

Our results show that the Laureates are heavily involved in patenting: 71% have applied for a patent at least once in their career. These results are in accordance with earlier studies investigating "star" scientists. About 61% have patented something related to their award-winning discovery. The majority of the Laureates have filed all their patents together with their home research institution (49%), and 42% have collaborated at least once with a company on a patent application. Of the total number of NLs, 38% (32 Laureates) have taken part in starting up a company. Considering only the spin-offs related to the Nobel Prize discovery, the rate is 23%. The majority of spin-offs were founded after 1995, and we can conclude that the Prize has not had an effect on the degree to which the Laureates start spin-offs. We also investigate engagement with industry through Scientific Advisory Boards (SABs): 55% of the Laureates have participated in a Scientific Board at least once over their career.

We can conclude from our interviews that the majority of the Laureates have not been the initiators of patenting. It has rather been initiated by external actors such as TTOs, co-workers (post-docs) and industry. Asked what has happened to the patents, the large majority of our interviewees report that nothing has happened, and only a very few have generated or are generating a minor income in the form of licensing. The general attitude is that patenting is not at the top of the NLs' agenda and that it does not bring much.

Two main reasons for starting a company can be distilled from our interviews. The first is scientific results that do not fit the line of research in the lab and need a place for further technological development. The second is contact from VCs who see the potential to start a company based on the NL's science or general scientific expertise. The Laureates' role in starting these companies is mainly as the providers of the scientific ideas on which the company builds.

The most common selection criterion for joining the SAB of a company is that a friend, colleague or post-doc is involved, implying that the Laureate joins because they trust that person. The majority of the interviews confirm that the Prize has a positive effect on the number of SAB invitations.

Even though our quantitative results point towards a rather high involvement of the NLs in the commercialization process, the interviews stress that the Laureates are not the main drivers of commercializing their research and radical discoveries. External actors are rather the main initiators in the translational process, while the "star scientists" are driven by curiosity to answer specific fundamental research questions. Our results highlight the importance of providing a supportive environment for radical discoveries in which others than the "star scientist" can continue to develop the discoveries to benefit society.

08:45
How do star scientists achieve high performance? Disruptive innovation in the pharmaceutical industry
SPEAKER: Yasushi Hara

ABSTRACT. The concept of the star scientist emphasizes the role of scientists who gather and bind tacit knowledge into the creation process within a firm's R&D. However, existing studies have mainly focused on the economic impact of the existence of star scientists; they have not described concretely how and why a star scientist builds a competitive advantage within the institution, and in academic competition, over the medium to long term.

Hence, I aim to examine these questions through quantitative and qualitative analysis of path-breaking drugs: (1) How does a star scientist's internal/external scientific network emerge over time? (2) Do star scientists yield more than the average scientist within the firm? (3) What is the main factor enabling a star scientist to deliver product innovation?

In doing so, I employ network analysis. To capture the network flows of star scientists' activity in comparison with colleagues who did not achieve successful deliverables in their research work, I use Web of Knowledge and Thomson Innovation, which provide co-author/co-inventor and backward/forward citation network information for scientific papers and patents, focusing on the pharmaceutical industry and the R&D process of blockbuster drugs. To capture the whole research process precisely, I concentrate on Japanese blockbuster drug cases such as Actemra [tocilizumab] and the statins [compactin, pravastatin, and rosuvastatin]. I then perform name matching to combine these datasets, and finally build network graphs and compute network indicators to capture network activity.

The procedure is as follows. (1) First, identify the star scientist behind a blockbuster drug from patent and/or scientific-paper bibliographic information; the star scientist in this study is, in effect, the corporate scientist who discovered and identified the core of the blockbuster drug. (2) Summarize the star scientist's scientific papers and patents, then take snapshots of internal/external network flows for given time windows to reveal the flow of science between the inside and the outside of the organization. (3) If the knowledge flow cannot be detected in step (2), focus on backward citation data to trace the scientific contributions to the invention. (4) From steps (2) and (3), identify the scientific knowledge essential to the innovation process. To check the robustness of the study, I conduct oral interviews with the star scientists to verify their own roles in the R&D process.
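Step (2) can be sketched as follows: build weighted co-authorship edges for a time window and compute a simple connectivity indicator per scientist. The records, names and window below are invented for illustration; the real study derives much richer indicators from Web of Knowledge and Thomson Innovation data.

```python
from itertools import combinations
from collections import defaultdict

# Toy paper records; author names and years are invented.
papers = [
    {"year": 1985, "authors": ["A", "B", "C"]},
    {"year": 1987, "authors": ["A", "C"]},
    {"year": 1990, "authors": ["A", "D", "E"]},
]

def snapshot(papers, start, end):
    """Weighted co-authorship edges for papers published in [start, end]."""
    edges = defaultdict(int)  # (a, b) with a < b -> number of joint papers
    for p in papers:
        if start <= p["year"] <= end:
            for a, b in combinations(sorted(p["authors"]), 2):
                edges[(a, b)] += 1
    return edges

def degrees(edges):
    """Number of distinct co-authors per scientist (a simple indicator)."""
    deg = defaultdict(int)
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return deg

edges = snapshot(papers, 1985, 1990)
deg = degrees(edges)
star = max(deg, key=deg.get)  # best-connected scientist in this window
```

Comparing such snapshots across successive windows is one way to observe how a star scientist's internal/external network emerges over time.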

The study yields several findings. (1) Star scientists have strong external networks in the R&D process, which may help them accumulate knowledge; this is consistent with the oral-interview evidence. (2) Star scientists are connected with distinguished foreign academic researchers, which also underlines the role of knowledge accumulation. (3) However, if the essential knowledge was established earlier in time, it is hard to trace with bibliographic data and network graph analysis, as in compactin's cholesterol-assay modification process and pravastatin's microbial production system. (4) If the scientific discovery by a university scientist and the invention process by the star scientist in the firm are connected, or occur at the same time, the knowledge flow can be traced through the first-tier co-authorship data of the firm's star scientist.

These findings carry several implications. From a science and technology policy perspective, governmental financial and human-relational support for basic science should be continued as a scientific source of innovation, and there should be mechanisms connecting firms' entrepreneurial capability with academic research activities at universities and institutions. In this sense, the star scientist should act as a "gatekeeper" who imports external knowledge from academia and stimulates the firm's internal absorptive capability. To this end, the management team should (1) give the star scientist the authority to pursue flexible research activities and/or (2) direct his or her research with explicit and tangible research strategies. In fact, some path-breaking drugs developed in Japan, including the statins, are based on researchers' informal research activities, known as "Yami-Kenkyu".

As a contribution to network theory, we should develop methods for detecting "un-traceable" knowledge flows that do not appear in co-authorship and/or citation data. A complementary method is oral interviews with star scientists to identify the essential scientific knowledge, but this is an ad-hoc option. An alternative is econometric text-data analysis that extracts essential scientific knowledge from the star scientist's key scientific papers and/or patents. This is the limitation of this study, and my next study will seek a flexible method to determine knowledge flows between science and innovation automatically.

09:00
The Effect of Holding a Research Chair on Scientists’ Productivity
SPEAKER: unknown

ABSTRACT. Scientists' academic performance has been extensively discussed, and many of its determinants are known as potential drivers of publishing papers in peer-reviewed journals. Age, gender, private and public funding, institutional setting, field and context are among the most important determinants. In addition, a scientist's networking capability can help explain the number of journal papers. Most studies on the effects of networks rely on co-authorship as a proxy for scientific collaboration. In this paper, however, we focus on the effect of holding a 'research chair' as a possible determinant of scientific publication. On the one hand, a chair may liberate its holder from the constant quest for research funds, or provide time to construct a more effective network, which may propel future knowledge production. On the other hand, greater scientific productivity may simply be the effect of a scientist's past performance, reflecting an intrinsic ability to conduct research and/or to mobilise an extensive networking capacity effectively. Considering holding a chair as a measure of prestige, we aim to elucidate the effect of being a 'chair-holder' on scientific productivity by testing the following hypothesis: holding a chair increases a scientist's performance measured in terms of number of publications. In order to validate our hypothesis, we built a data set integrating data on funding and journal publications for Quebec scientists. For publications, Thomson Reuters Web of Science provides information on scientific articles (date of publication, journal name, authors and their affiliations). For funding, we use a database of Quebec university researchers (Système d'information sur la recherche universitaire, or SIRU) gathered and combined by the Ministry of Education, Leisure and Sports.
This database lists grant and contract information, including yearly amount, source and type, for all Quebec university scientists over the period 2000-2010. Depending on chair characteristics, the networking and prestige effect of holding a research chair may be confounded with the effect of funding. The novelty of this paper is to use a matching technique to understand whether holding a research chair contributes to better scientific performance. This method compares two sets of regressions conducted on different data sets: one with all observations and another with only the observations of the matched scientists. A chair and a non-chair scientist are deemed matched when they have the closest propensity scores in terms of age, research field, and amount of funding. The results show that holding a research chair is a significant determinant of scientific productivity in the complete data set, but its significance is limited (only for the Canada Research Chair) when only matched scientists are kept in the data set. In other words, for two similar scientists in terms of gender, research funding, and research field, holding a chair has a significant and positive effect on scientific productivity only if the chair is a Canada Research Chair. A number of factors can help explain this finding. The first is that the Canada Research Chair is intended as a mark of research prestige in Canada. According to its mandate, the Canada Research Chair program aims to attract and retain some of the most accomplished and promising minds in the world, and it is awarded to scientists from all disciplines, including engineering and the natural sciences, health sciences, humanities, and social sciences. It is more prestigious than other research chairs, and its holders are expected to be more capable of expanding their academic networks.
Other scientists may also be more willing to conduct collaborative research with Canada Research Chair holders. The second explanation is that industrial chairs are appointed by firms to promote research and its application, probably with major benefits to the firms themselves, and as such serve an entirely different purpose. In other words, this type of chair is not designed primarily for the sake of scientific publication. Chairs appointed by research councils may have quite similar characteristics.
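The matching step can be illustrated with a toy sketch: each chair holder is paired with the non-chair scientist whose propensity score (estimated elsewhere from covariates such as age, field and funding) is closest. The names and scores below are invented, and greedy one-to-one matching is only one of several possible matching schemes, not necessarily the authors' exact procedure.

```python
# Invented propensity scores; in the study these are estimated from
# covariates such as age, research field, and amount of funding.
chairs = {"A": 0.71, "B": 0.42}
non_chairs = {"C": 0.70, "D": 0.45, "E": 0.10}

def greedy_match(treated, pool):
    """Pair each treated unit with the closest-score control,
    without replacement (a simple stand-in for the matching step)."""
    pool = dict(pool)  # copy so controls are used at most once
    pairs = {}
    for name, score in sorted(treated.items()):
        best = min(pool, key=lambda c: abs(pool[c] - score))
        pairs[name] = best
        del pool[best]
    return pairs

matched = greedy_match(chairs, non_chairs)
# The outcome regressions are then re-run on the matched sample only.
```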

09:15
Environments encouraging radical discoveries – the Nobel Laureates and their career paths
SPEAKER: unknown

ABSTRACT. The contribution of the very best scientists to economic growth has received increasing interest from both researchers and policy makers (Zucker et al., 2002; Azoulay et al., 2012). Given the importance of these stars, it is important to understand the environments in which they are active during their training and careers, and not least where their breakthrough discoveries take place. According to Mahroum (2000), there exists a mutual relationship between mobility and excellence, since highly talented scientists are attracted to sites with a high reputation for excellence and the presence of other outstanding researchers, while these sites in turn increase their credibility and capabilities by hosting such star scientists.

In this study we further elaborate on the concept of the "star scientist" by studying Nobel Laureates (NLs). First, we explore the different roles institutions play in stimulating the research creativity of future NLs. This is done by investigating where the NLs carried out their undergraduate studies, research training (PhD and postdoc), early career and late-stage career, and then relating these career steps to when the radical discoveries were made. Second, we recognize the importance of proximity to other excellent researchers (Zucker and Darby 2008, Audretsch and Aldridge 2009, Ham and Weinberg 2011) and further explore how NLs relate to each other. In doing so, we examine the interplay between the role of the institution and the individual.

We focus on the career paths of all NLs in Medicine or Physiology between 1962 and 2013 (125 winners). Key publications (KPs) that present for the first time the results of the radical discovery that eventually resulted in the Nobel Prize were identified from the Nobel Committee Prize citations and the Nobel Lectures, and where possible confirmed with the Laureates. Career paths were divided into stages before or during the first breakthrough research. The cumulative citation rate for each year was collected for all NLs who have acted as supervisors to future NLs. We also carried out interviews with 32 Laureates, and finally selected a number of case-study institutions to further understand the environments where the radical discoveries were made.

To set the scene, we calculated the number of individual NLs who worked at each institution at any stage of their career. Our results indicate that Harvard is the institution where most NLs have studied or worked, followed by Cambridge and NIH. If we shift our focus from the absolute number to where most Nobel Prize awarded discoveries have been made, Cambridge comes first, followed by Rockefeller, NIH and Pasteur. Focusing on where the NLs were based when they received the Prize once again changes the ranking of institutions: Harvard comes first, followed by Rockefeller, Cambridge, Pasteur and MIT.

To further understand the role of institutions in stimulating research creativity, we investigate where the NLs received their early research training. We find that different institutions have played different roles during the NLs' careers. First, we cannot identify any institution that has played a significant active role at the undergraduate level. It is when the NLs carry out their research training (PhD and postdoc) that certain institutions play a more important role, the most common being Cambridge, NIH and Harvard.

Regarding the environment in which the radical discoveries were made, we find that the NLs arrived at the institution where the discovery was made a median of 5 years before the first key publication, with the next move taking place a median of 16.5 years later. Thus radical discoveries generally came early in a long stay at an institution, where NLs then established themselves, perhaps to capitalize on the breakthrough or as a function of a normal career path.

Next, we examined at what career stage NLs made their radical discovery. Of the 125 NLs, the majority were early-career researchers. On average, NLs published their first KP 11.16 years after graduating from their PhD or MD.

All of the institutions with a high number of NLs also had notable numbers of interactions between Laureates. Notably, 44% of all NLs had one or more Laureate supervisors during their PhD, MD or postdoctoral training. Of these 56 supervisory relationships, only 13 occurred after the supervisor had received their prize, indicating that it was not the case that the most promising young researchers sought out NL supervisors.

From the interviews we can conclude that in addition to formal supervisory relationships, mentor relationships with other Laureates are common. Several Laureates highlight the importance of being connected to great scientists from early on in the career.

Our presentation will further elaborate on mobility, institutional belonging, and proximity to other Laureates.

08:30-10:00 Session 11D: Bibliometric Techniques
08:30
Analyzing the Strategy and Impact of Funding Organizations with Funding Acknowledgements
SPEAKER: unknown

ABSTRACT. Since 2008 the Web of Science database (WoS) has included funding acknowledgements (FA). With this new kind of information it is now possible to execute large-scale quantitative analyses of the scientific output financed by specific funding organizations (FO). However, it was previously shown that the data quality of the entries is problematic at best. One of the problems is the vast variety of aliases for a single FO in the database: for example, the largest German FO, the German Research Foundation (DFG), has over 10,000 different aliases. The authors have previously developed a fully automated method to unify all these variants with minimal manual labor (Sirtes & Riechert, 2014). On the basis of this method, different German and major European FOs have been portrayed (Sirtes, Riechert, Donner, Aman, & Möller, 2015), characterizing the topical orientation, international collaborations, European co-funding, and the impact of the publications funded by these organizations. One of the more surprising results of this study was the calculated impact of the two major funding organizations in Germany. While the share of journal articles funded by the DFG among the 10% most highly cited publications in the world for the years 2010 and 2011 was 15.5%, this share was as high as 19.3% for publications funded by the Federal Ministry of Education and Research. This large difference was not expected given the high prestige associated with DFG funding and its elaborate peer-review process. Our working hypothesis for explaining this result is the different thematic funding strategies of these FOs. While the DFG is committed to funding research in all its scholarly diversity, the Ministry is primarily focused on promoting promising new developments in science (hot topics) and research areas with potentially high societal impact. This difference could account for the discrepancy in impact.
From a bibliometric perspective, this hypothesis can be operationalized in the following manner. The impact measures employed in the impact analysis used the WoS Subject Categories (SC) classification scheme. However, these SCs are known to be far from homogeneous: as van Eck, Waltman, van Raan, Klautz, & Peul (2013) have shown, different topics inside an SC may have widely diverging mean citation rates. Therefore, if the difference in impact of publications funded by these FOs is due to the thematic distribution of research, the publications should also be associated with research topics of different mean citation rates. Thus, a much finer-grained publication-level clustering method than the WoS SC classification (based on Waltman & van Eck, 2012) is employed for the publications in the 19 SCs with the most publications for both FOs, and the mean citation rates of these clusters are calculated (without the inclusion of the papers in question). If our hypothesis is true, the null hypothesis of a similar distribution of mean citation rates across the fine-grained clusters should be rejected. The short introduction to the unification method for FAs and its limitations, the different possibilities of descriptive FO portrayal, and the discussion of the results of the thematic strategy analysis will exemplify the new and important potential of funding acknowledgements for understanding scientific funding structures.
References: Sirtes, D., & Riechert, M. (2014). A Fully Automated Method for the Unification of Funding Organizations in the Web of Knowledge. In E. Noyons (Ed.), Context counts: pathways to master big and little data. Proceedings of the 19th International Conference on Science and Technology Indicators, Leiden (pp. 594–597). Sirtes, D., Riechert, M., Donner, P., Aman, V., & Möller, T. (2015). Funding Acknowledgements in der Web of Science Datenbank: Neue Methoden und Möglichkeiten der Analyse von Förderorganisationen. Berlin: EFI. Van Eck, N. J., Waltman, L., van Raan, A. F. J., Klautz, R. J. M., & Peul, W. C. (2013). Citation Analysis May Severely Underestimate the Impact of Clinical Research as Compared to Basic Research. PLoS ONE, 8(4), e62395. doi:10.1371/journal.pone.0062395. Waltman, L., & van Eck, N. J. (2012). A new methodology for constructing a publication-level classification system of science. arXiv:1203.0532 [cs]. http://arxiv.org/abs/1203.0532
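One way to test the null hypothesis of similar distributions of cluster mean citation rates is a two-sample Kolmogorov-Smirnov statistic. The sketch below uses invented rates and a hand-rolled statistic; the abstract does not prescribe this particular test, so treat it as an illustrative assumption.

```python
def ks_statistic(x, y):
    """Maximum distance between the two empirical CDFs."""
    xs, ys = sorted(x), sorted(y)

    def ecdf(sample, v):
        return sum(1 for t in sample if t <= v) / len(sample)

    return max(abs(ecdf(xs, v) - ecdf(ys, v)) for v in xs + ys)

# Invented mean citation rates of the fine-grained clusters attached
# to each FO's publications.
dfg_rates = [2.1, 3.0, 3.2, 4.5, 5.0]
bmbf_rates = [4.8, 5.5, 6.1, 7.0, 8.2]  # shifted toward "hot" clusters

d = ks_statistic(dfg_rates, bmbf_rates)  # a large d supports rejection
```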

08:45
Map of Science based on unsupervised learning
SPEAKER: unknown

ABSTRACT. A central challenge for the cartography of scientific knowledge is the creation of valid and accurate coordinates. This submission discusses the choice of the origin of coordinates for making a map of scientific knowledge and, in particular, demonstrates the advantages of coordinates assigned by unsupervised learning over ones generated by human reasoning. Human-assigned metadata, such as the subject-category classification of articles or journals, has been the dominant source of coordinates in science maps, even when cartographers have relied on co-citation information (Börner, 2010). However, classification of scientific knowledge with such metadata is subject to several well-known weaknesses. Pre-existing categories of science provide a finite definition of new knowledge, fitting knowledge that is by definition infinite and new to the world into pre-existing categories and coordinates (Small, 2004). They are best at monitoring the behaviour of known and defined bodies of knowledge, but lend themselves poorly, if at all, to correctly identifying the emergence of truly new epistemic bodies of knowledge. The literature on structuring science focuses on classification and mapping, which should not be understood synonymously (Klavans and Boyack, 2009). Classification of science, the process of separating science into different partitions, is a precondition of the existing mode of scientific dialogue. The need to define research fields and to assign journals and publications to them stems from the need to create an information retrieval system that helps scholars find relevant information. As Glänzel and Schubert (2003) correctly argue, the correct classification of publications into scientific fields is also a necessity for scientometrics.
When we speak of scientometrics, specifically in the sense of a system of measurement, we assume that there exists a standard measure (metric) of bibliographic, scientific and patent information that is readily available and applicable to a practical problem, similar to the way we use metrics for weights or distances. However, this becomes more complex when we focus on research management, as the existing information-retrieval-based classification system lacks the capability to produce consensus measures for scientometric studies (Glänzel and Schubert, 2003). Recently, advances in computing have made available text-mining techniques that offer new approaches to defining coordinates for science maps. Text mining opens new avenues for unsupervised or semi-supervised classification methods, as they classify scientific text based on content, foregoing human-given labels. Indeed, text mining and machine learning methods are promising tools for classifying fields of science (Glenisson et al., 2005). Motivated by the possibilities of unsupervised classification, we analyze science publications with topic modeling as an example of unsupervised classification. Our raw data consists of 144,081 Web of Science records of science publications between 1995 and 2011 with at least one author having an affiliation in Finland. We follow a research design in which we first pre-process the raw data with a Python script, then use an implementation of variational EM for LDA by Blei et al. (2003) to classify the records, with the number of topics set to 60 by qualitative evaluation. Finally, we incorporate the LDA results into the existing metadata and OECD major classifications for analysis and visualization.
We are able to draw out disciplinary areas relevant for Finland in enough detail to point towards meaningful areas of science, such as Topic 44, associated with the terms "exposure", "exposed", "asthma" and "lung", or Topic 38, with "pregnancy", "maternal", "infants" and "neonatal". Merging the classification of each document with the OECD category of each publication, we cross-reference the differences between existing classifications and unsupervised-learning-based classes. In doing so, we were able to highlight in more detail the benefits of unsupervised learning. Unsupervised-learning-based classification of science is adaptable to different levels of abstraction and highlights the cross-disciplinarity of science better than existing classifications. Our results suggest that although the human-assigned approach to classifying science is the dominant source of coordinates in science maps (Börner, 2010), there is clear value in creating automated classifications of science based on author-generated semantic text. While existing classifications are suitable for narrowly focused research outlets and confined research problems, multidisciplinary research driven by societal grand challenges operates outside traditional classification boundaries and poses a challenge for traditional metrics, but not for unsupervised learning methods.
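As a conceptual illustration of unsupervised topic assignment, the sketch below runs a minimal collapsed Gibbs sampler for LDA on toy documents. The paper itself uses the variational EM implementation of Blei et al. (2003) on 144,081 WoS records with 60 topics, so this is a stand-in on invented data, not the authors' pipeline.

```python
import random
from collections import defaultdict

def lda_gibbs(docs, k, iters=200, alpha=0.1, beta=0.01, seed=0):
    """Minimal collapsed Gibbs sampler for LDA on tokenized documents."""
    rng = random.Random(seed)
    V = len({w for d in docs for w in d})         # vocabulary size
    ndk = [[0] * k for _ in docs]                 # doc-topic counts
    nkw = [defaultdict(int) for _ in range(k)]    # topic-word counts
    nk = [0] * k                                  # tokens per topic
    z = []                                        # topic of each token
    for d, doc in enumerate(docs):                # random initialization
        zd = []
        for w in doc:
            t = rng.randrange(k)
            zd.append(t)
            ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1
        z.append(zd)
    for _ in range(iters):                        # resample every token
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]                       # remove current assignment
                ndk[d][t] -= 1; nkw[t][w] -= 1; nk[t] -= 1
                weights = [(ndk[d][j] + alpha) * (nkw[j][w] + beta)
                           / (nk[j] + V * beta) for j in range(k)]
                t = rng.choices(range(k), weights)[0]
                z[d][i] = t
                ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1
    return ndk, nkw

docs = [["asthma", "lung", "exposure", "asthma"],
        ["lung", "exposed", "exposure"],
        ["network", "graph", "node", "network"],
        ["graph", "node", "edge"]]
ndk, nkw = lda_gibbs(docs, k=2)  # each row of ndk: a document's topic mix
```

With clearly separated vocabularies, the health-themed and network-themed documents tend to concentrate on different topics, mirroring how topics such as 44 and 38 emerge from real abstracts.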

09:00
The Characteristics of Global R&D Cooperation of Influenza Virus Vaccine Based on Scientometrics Analysis
SPEAKER: unknown

ABSTRACT. Influenza virus vaccine (IVV) is a promising research domain closely related to global health, acknowledged not only by scientists and technology developers but also by policy-makers. Many countries and public health organizations have initiated research and development (R&D) projects in various forms, such as heavy investment in technical platforms and infrastructure. However, the enhancement of a nation's IVV innovation capability depends not only on domestic R&D investment but also on international R&D collaboration, and every country joining international collaboration can mutually benefit from this cooperation. Understanding the global R&D cooperation network of the IVV field is therefore of crucial importance from the point of view of innovation policy. Unfortunately, to our knowledge, the available research fails to thoroughly capture this perspective. Meanwhile, papers and patents encompass valuable scientific and technological information and collaborative efforts, providing a reliable quantitative basis for studies of technology and industry development. Yet although a large literature analyzes technology and industrial development using scientometric methods, IVV studies using paper and patent bibliometric methods are not fully developed.
To be specific, this paper tries to answer the following questions: (1) what is the structure and the dynamics of the global R&D cooperation network of IVV and its subfields? (2) What positions do countries occupy in the global R&D cooperation network of IVV and its subfields, especially China?
Drawing on the literature and the opinions of China's IVV experts, this paper designed a technology classification system for IVV based on the technical standards of influenza vaccine. IVV can be divided into four major categories: inactivated vaccine, live attenuated vaccine, recombinant vaccine, and synthetic peptide vaccine; these can be further divided into eight subcategories: inactivated virus vaccine, split vaccine, subunit vaccine, live attenuated vaccine, recombinant protein vaccine, recombinant vector vaccine, recombinant DNA vaccine, and synthetic peptide vaccine. This paper studies the global R&D cooperation of the IVV field from the perspective of paper and patent analysis. Since international collaboration papers are mainly published in international journals, the paper data for this study were retrieved from the Web of Science. Similarly, international patent activity tends to be filed with the European Patent Office (EPO), whose database covers patent applications submitted to around 90 patent offices worldwide. An EPO-granted patent has a higher degree of internationalization and can indicate a higher-quality invention; thus, the patent data for this study were retrieved from the EPO database. Moreover, a network model can better represent the reality of global R&D cooperation and provides useful tools, such as measures of nodes' position and power in a network. We define the global R&D cooperation network with nodes V, the countries, and arcs A, the bilateral relationships that exist whenever a paper/patent belongs to at least two countries. Each node is weighted by the total number of papers/patents researched/developed in joint collaboration by that country. By applying network analysis, we graphically delineate the characteristics and evolution of the global R&D cooperation network of the IVV field and analytically explore the countries' positions and the relationships among them in the network.
The results show that IVV's total papers and patents have continuously increased over the last decade, and the papers and patents resulting from global R&D cooperation also show steady growth. International scientific collaboration is broadly distributed over many countries, while the mutual connections among members of the international technological collaboration network are few. China has achieved spectacular growth in both papers and patents in the IVV field. However, as a developing country, China still shows gaps relative to the developed countries. It is all the more necessary for China to develop various forms of international collaboration, making full use of information, technology, capital and equipment from abroad to upgrade its own R&D capability and narrow the gaps. Specific suggestions are as follows: (1) Optimize the layout: carry out more research in the SPV subfield. (2) Move from quantitative to qualitative growth: strengthen international cooperation in key subfields. (3) Treat enterprises as the principal actors and improve the internationalization of technological innovation. (4) Government should build bridges to develop worldwide Industry-University-Institute-Government cooperation.
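The country-level cooperation network described above (countries as nodes, arcs wherever a paper/patent belongs to at least two countries, nodes weighted by joint output) can be sketched with plain data structures. This is a hypothetical toy example; the country lists stand in for real author/assignee affiliations.

```python
# Sketch of the country cooperation network: nodes, node weights, arcs.
import itertools
from collections import Counter

records = [  # toy country lists, one per paper/patent
    ["CN", "US"],
    ["CN", "US", "JP"],
    ["DE", "FR"],
    ["CN", "JP"],
]

node_weight = Counter()  # documents produced in joint collaboration
edge_weight = Counter()  # bilateral co-occurrence counts (the arcs A)

for countries in records:
    cs = sorted(set(countries))
    if len(cs) < 2:
        continue  # purely domestic documents carry no cooperation ties
    node_weight.update(cs)
    edge_weight.update(itertools.combinations(cs, 2))

# Degree (number of distinct partners) as a simple position measure.
degree = Counter()
for a, b in edge_weight:
    degree[a] += 1
    degree[b] += 1
```

From `edge_weight` and `node_weight` one can compute the positional measures (e.g., centrality) and draw the network with any standard graph library.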

09:15
The global technology map: tracking patterns of related and unrelated technological invention by multinational firms.
SPEAKER: unknown

ABSTRACT. Technological invention is key to ensuring economic growth and addressing societal challenges. In this era of globalization, understanding invention increasingly requires understanding the invention patterns of multinational corporations. Much of the analysis of corporate technological invention has however focused on the study of aggregate output of technological invention rather than the underlying patterns of technological development. As technological change is a cumulative and path-dependent process, these patterns do provide valuable information to policy makers and managers. The path-dependence of technological development is captured by the concept of related variety (Saviotti and Frenken, 2008). Innovation through related diversification (related variety) occurs when new innovations have a short technological distance to other pieces of the knowledge base, whereas longer distances between the innovation and the existing knowledge base of a firm characterize unrelated diversification.

While the importance of measuring technological distance has been recognized in innovation studies, the concept has been difficult to capture empirically (Bar and Leiponen, 2012). Most studies focus on the technological distance between firms (e.g., Benner and Waldfogel, 2008; Nooteboom et al., 2007; Gilsing et al., 2008) rather than on the underlying technological distance between different pieces of the knowledge base. This latter focus enables us to not only identify past patterns of technological development but also to identify fruitful building blocks and directions for future technological development. This is the main idea of our recent development of the global map of technology (Schoen et al., 2012). In analogy to recently developed global maps of science (Leydesdorff and Rafols, 2009) and economy (Hidalgo et al., 2007), the technology map captures the relatedness or distance between pieces of technological knowledge based on the co-occurrence of technological classifications on patents. As our classification of technologies we use an extended version of the WIPO classification of technological fields, unfolding the 35 classes to 389. The more often a code is assigned to patent documents within one area together with codes from another area, the stronger the relationship between those codes and the shorter the (technological) distance between the technological areas to which these codes belong. The global technology map thus provides a “bottom up” measure of the technological distance between different technological fields.
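The co-occurrence logic above (codes that appear together on patent documents more often are technologically closer) can be sketched as follows. This is a hypothetical toy example: the codes and documents are invented, the mapping to the 389 WIPO-based classes is assumed rather than reproduced, and cosine similarity of co-occurrence profiles is one common normalization, not necessarily the one used by Schoen et al.

```python
# Sketch: relatedness between technology codes from patent co-occurrence.
import itertools
import math
from collections import Counter

patents = [  # toy documents, each a set of technology codes
    {"A", "B"},
    {"A", "B", "C"},
    {"C", "D"},
    {"A", "C"},
]

# Count how often each pair of codes appears on the same document.
cooc = Counter()
for doc in patents:
    cooc.update(itertools.combinations(sorted(doc), 2))

codes = sorted(set().union(*patents))

def profile(c):
    """Co-occurrence profile of code c over all codes (self set to 0)."""
    return [cooc.get(tuple(sorted((c, o))), 0) if o != c else 0
            for o in codes]

def relatedness(a, b):
    """Cosine similarity of two codes' co-occurrence profiles."""
    pa, pb = profile(a), profile(b)
    num = sum(x * y for x, y in zip(pa, pb))
    den = math.sqrt(sum(x * x for x in pa)) * math.sqrt(sum(x * x for x in pb))
    return num / den if den else 0.0
```

High relatedness corresponds to short technological distance; a firm's diversification can then be scored by the distance between its new patents and its existing portfolio.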

We use patents as our measure of technological development. While patents are considered an intermediate indicator, data availability and the possibility of also capturing emerging technological fields explain their widespread use in the study of technological change (Archibugi and Pianta 1994; OECD 2009). In order to overcome some of the difficulties associated with the use of patents as an indicator of technological change, we use a subset of all patents for the construction of the global technology map. More specifically, this research uses the Corporate Invention Board (CIB) dataset (Alkemade et al., in press). The CIB combines patent data from the PATSTAT database with financial data from the ORBIS database for the 2289 companies with the largest R&D investments. The industrial corporations included in the CIB account for 80% of the world's total private R&D. Of the 2289 MNCs, 730 have their corporate headquarters in Asia, 1002 in Europe and 538 in North America.

In this paper we use the global technology map as the base map for interpreting the inventive activity of the CIB firms. More specifically, we first project (or overlay) the portfolio of each firm on the technology map and calculate measures of portfolio breadth (longest path) and specialization (average path length). Second, we analyze the extent to which firms diversify into related technology fields and whether these patterns of invention differ among technological fields, sectors and home countries of the multinational corporations. Our results show important differences in inventive strategies.

08:30-10:00 Session 11E: Innovation Policy and Governance
08:30
Is it time for innovation or stricter standardization? China’s air purifiers and the eco-innovation framework

ABSTRACT. China’s air pollution has reached a critical state, with high levels of heavy coal smoke and air pollutants such as ozone and particulate matter (Clean Air Alliance of China, 2013; van Donkelaar et al., 2012). The most recent official report (Ministry of Environmental Protection [MEP], 2015) shows that during 2014 only 8 of the 74 cities monitored by China’s Ministry of Environmental Protection met air quality standards. To tackle this situation, the country has slowly introduced a new type of control and regulation of air pollution (Chen et al., 2011), which considers public attention to particulate matter (PM2.5) and scientific evidence of health effects (Chen et al., 2012). Public awareness of air pollution has also produced another phenomenon: massive sales of air purifiers for household use. Major retailers have increased their sales and prices since 2011, as shown in several studies and reports (ZOL 2013, Daxue 2013). In 2013 alone, sales grew 80-100% year-on-year compared to 2012 (Daxue 2013). Not only have sales increased, but consumers have also learned more about the types of air purifiers, filters and tests that might guarantee better device performance (Duggan 2014). Therefore, new pressure has arisen on manufacturers and retailers to catch up with foreign firms and provide transparent information to consumers (Hong 2013). In a broader context, in 2012 China introduced a comprehensive plan for air pollution prevention and control in key regions (Clean Air Alliance of China, 2013), through the 12th Five-Year Plan period (2011-2015), setting the nation’s main policies for the term. This mechanism has strengthened the standards regulating subjects such as innovation, ambient concentration targets, emission reduction targets, city air quality plan requirements and key control areas (47 cities). As some authors have shown (Lin and Elder 2013, Mao et al. 2014), these major developments have been accompanied by constant strengthening of regional and local plans, and also by a degree of resistance to implementing urgent measures (e.g. industrial restructuring, regional management system). The central government has identified the massive sales of air purifiers (prices, quality and tests) as another urgent area to regulate. Today’s existing standards and testing certificates do not follow the state of the art of air purifiers (Cha, 2014; Hong, 2013), and were set before 2011 (GB/T18801-2008, GB 21551.3-2010). This paper has two purposes: first, it presents the historical evolution of the consumption of air purifiers for household use from 2011 to 2014; second, it discusses how the broader context of a new national air pollution policy has produced urgency to standardize air purifier innovation, production and testing. The main sources for the paper are governmental documents (policies and standards) and media reports (including Chinese house-appliance specialized journals). A case study is presented, based on information about the standards and tests of the main online retail companies and their shops on taobao.com and jd.com. I argue that the central government faces a policy dilemma between controlling the market for air purifiers (prices and patents) and introducing updated standards for air purifiers and tests to prove their quality. Some deliberate governmental inaction has been partially due to the pressure of national interests to allow national companies to catch up with foreign brands’ knowledge and create better conditions to compete with them. However, this has undermined trust in those same companies, which still use out-of-date tests or simply disregard the regulations.
Through this case, I raise a theoretical discussion of the recent literature on eco-innovation (OECD, 2011; Popp, 2006), focused on how appropriate this framework would be for solving China’s standardization dilemma and what limitations it might face. I stress the positive role of the standards-setting process when carried out in an open way and with the participation of different stakeholders (Vollebergh & van der Werf, 2014).

08:45
Cultural Correlates of National Innovative Capacity: A Cross-National Study of Non-Institutional Dimensions of Innovation
SPEAKER: unknown

ABSTRACT. Although it is conventional wisdom that innovation requires free minds, diversity, or creativity, all of which are closely associated with political and organizational decentralization, it is in fact the more politically centralized countries of East Asia that successfully capitalized on innovation to catapult their economies onto the growth trajectory. Scholars have thus wondered whether this is an exception rather than a rule. Are more centralized countries innovative? Existing empirical research has produced mixed results. This study introduces a new perspective on this issue. Rather than the degree of centralization found in formal institutions, we focus on non-institutional or informal dimensions of centralization, particularly those associated with culture. Using two cross-national datasets capturing national culture (Hofstede and GLOBE), we explore how different dimensions of national culture are linked to national innovative capacity as proxied by patents. Our preliminary findings from the analysis of 34 OECD member states, based on patent data extracted from the Patent Cooperation Treaty (PCT) database, suggest that non-institutional dimensions of centralization account for more of the variation in national rates of patents per capita than the more formal aspects of centralization measured by traditional political datasets such as POLCON. While cultural aspects have been examined in technology management at the individual and firm levels, this study fills a gap in the existing literature by exploring their relationship at the national level. More research is clearly needed to explore the roles of non-institutional features facilitating or hampering innovation.

09:00
Tensions of STI policy in Mexico: analytical models, institutional evolution, national capabilities and governance
SPEAKER: unknown

ABSTRACT. From a systemic/evolutionary approach, it is argued that science, technology and innovation (STI) policy has to focus on the national system of innovation (NSI), the generation and absorption of knowledge as nonlinear dynamic models, and systemic failures. For this approach, knowledge, accumulated capabilities, and time are important; institutions mediate between agents; and there is a growing concern for the regional level and the governance of the NSI (Metcalfe, 1995; Teubal, 2002; Edler, Kuhlmann and Smits, 2003; Woolthuis, Lankhuizen and Gilsing, 2005; Smits, Kuhlmann and Teubal, 2010). In some ways, the models of STI policy associated with international organizations (mainstream models) have drawn on this perspective, or adopted a particular interpretation of it. These models have been successfully applied to the so-called newly industrialized countries (e.g., Korea, Singapore, Taiwan), which have displayed good performance in their economic indicators as well as in those related to their domestic national STI capabilities. They have also been extended to the BRICS countries with varying degrees of success (Cassiolato and Vitorino, 2009).

The main features of this model are: a policy focus that stresses STI infrastructure, human capital formation, joint development between academia and businesses, start-ups based on knowledge, small and medium-size enterprises (SMEs) and innovative clusters, a policy mix centered on demand-side measures as well as building capabilities and fostering linkages, and a policy governance based on multi-level coordination and participation. Business innovation is at the center of these policies (OECD, 2010a). These models have been offered as recipes for the developing world, and for some of the emerging economies within it. While many developing countries have improved the performance of various STI indicators (the contribution of businesses to research and development expenditure, the number of researchers per economically active population, and to a lesser extent other indicators such as patents), success has been limited when compared to the indicators observed in OECD countries, and even more so considering that structural problems of these economies persist (levels of inequality, poverty and social exclusion). The OECD (2010a) argues that the objective of the Innovation Strategy this organization promotes is to support a process of policy development, recognizing that ‘one size does not fit all’. However, recommendations are strongly tied to successful experiences. Several authors claim that these models do not adapt to the prevailing conditions in developing countries (Arocena and Sutz, 2002; Intarakumnerd and Chaminade, 2007; Dutrénit and Ramos, 2012; Dutrénit, 2012). In other words, the analytical framework used for STI policy design was conceived based on countries with different initial conditions - the central economies. In addition, these countries have a different trajectory of institutional building. All these contribute to feeding tensions that militate against the building of a sustainable NSI.

A central issue in understanding the trajectories and the chances of success or failure of policies emanating from both the mainstream models and variants that aim to adapt to developing economies is to analytically conceive the role of the institutional framework and governance at national, sectoral and regional levels.

Drawing on a systemic/evolutionary approach and being aware of the difficulties to interpret recommendations bearing in mind that ‘one size does not fit all’, the aim of this document is to discuss the experience of STI policymaking in Mexico, considering the interaction between the trajectory of institutional building and the process of construction of both the government and governance of the NSI.

This presentation describes the trajectory of institutional building in the arena of STI policy in Mexico and summarizes the main arguments coming from the mainstream innovation policy framework; discusses the main features of the institutional framework and governance in Mexico, including the evolution of STI legislation and the expression of the stakeholders; discusses the rules of the game of the innovation processes and their impact on system governance; and finally discusses the tensions that hamper the functioning of the system in a self-regulated manner.

08:30-10:00 Session 11F: Policy Decision Making
08:30
Of Mice and Reagents: Standardization, Variation, and Quality of Care
SPEAKER: unknown

ABSTRACT. This paper draws on the history of genetically engineered laboratory mice to explore the interplay between scientific research, regulation, and standardization. We link the degree of agency attributed to these animals by research scientists and judges to the quality of care that human patients receive. We then outline an ethical and biological rationale for balancing the current reliance on standardized laboratory animals with increased emphasis on variation.

To begin, we discuss a growing tendency among biomedical researchers to refer to genetically modified laboratory mice as reagents, chemical compounds that are highly valued for their reliability and malleability. For example, The Jackson Laboratory, which breeds and manages more than 5,000 strains of mice for research on human health, compares these strains to reagents in marketing presentations to emphasize that their highly standardized mice will produce consistent results. A similar propensity is evident in some recent biology publications, which describe living research animals as if they were devoid of agency—in other words, devoid of the capacity to exert power, to affect their surroundings, or to act in ways that do not conform with expected parameters.

We trace the regulatory origins of mouse-reagent comparisons to the landmark U.S. Supreme Court case Diamond v. Chakrabarty, 447 U.S. 303 (1980), which allowed the patenting of genetically engineered single-celled organisms. In this case, the court affirmed an earlier ruling that modified bacteria should be seen as “much more akin to inanimate chemical compositions such as reactants, reagents and catalysts than . . . to horses and honeybees or raspberries and roses” (Kevles, 1994, p. 120). The judge who issued this initial ruling suggested that it would be “far-fetched” to expect that patenting bacteria could lead to the patenting of other species. Yet eleven years after his ruling was issued, it lent precedent to two subsequent U.S. Supreme Court rulings that extended patentability beyond bacteria to “non-naturally occurring, nonhuman multicellular organisms” such as genetically engineered mice. The same judge also argued that “the fact that microorganisms, as distinguished from chemical compounds, are alive is a distinction without legal significance.”

We draw on empirical evidence from biological research, as well as scholarship in communication ethics, to demonstrate that this distinction is significant. Legally and morally, the labels attached to genetically engineered organisms can affect their treatment within and beyond the laboratory. Denying the agency of these living mammals undermines the ethic of care that underlies animal welfare policies and reduces biology, the study of life, to the study of non-living things.

From a scientific perspective, the practice of conceptualizing mice as reagents may stem from a broader trend to develop highly standardized animals for biomedical research. On one hand, reducing genetic variation can increase the reliability of experimental results and help fulfill regulatory mandates to minimize the number of animals used in laboratories. At the same time, in some cases standardization can limit researchers’ ability to model the range of human response to disease. This is especially relevant when investigating treatments for women, who may be less likely to benefit from data generated by standard male mouse models. To address these ethical and practical limitations, we propose that scientists, regulators, and funders attribute greater agency to genetically modified mice. Specifically, we recommend examining genetic variation in both males and females, studying mouse interactions in more natural conditions, incorporating a more diverse range of animals into laboratory research, and publishing results that indicate where mouse biology diverged from human expectations.

Works Cited

Kevles, D. J. (1994). Ananda Chakrabarty Wins a Patent: Biotechnology, law, and Society, 1972-1980. Historical Studies in the Physical and Biological Sciences, 25(1), 111–135. doi:10.2307/27757736

08:45
Do Policymakers think about Disruptive Innovations? The Case of Congress and Autonomous Vehicles
SPEAKER: unknown

ABSTRACT. Disruptive innovations are distinguished from incremental innovations in that they substantially change the foundations upon which existing technologies are based. That is, disruptive technologies are fundamentally different from existing (mainstream) technologies, causing an upheaval in the existing market structure and among dominant firms (Christensen, 1997). Disruptive technologies can also cause significant disruptions in the regulatory and legal environment in which they operate. This paper addresses the question of how Congress looks at emerging technologies, particularly disruptive emerging technologies. Congress fulfills important functions in the innovation process by setting the overall political and policy framework for innovation, allocating resources for both basic and applied research, supporting new business development, and ensuring that public values are protected as new technologies emerge. Congress must make important decisions about the social implications of research and development and about how best science and technology can be used to address or resolve policy issues (Morgan and Peha, 2003). Though Congressional members are not expected to have the required technical expertise themselves, they are nonetheless expected to leverage technical experts in their oversight role. Congressional oversight and the formulation of good policies can help to mitigate the disruptive impact of this technology. Conversely, failure to comprehend and address the implications of the technology will result in reactive policies, with policymakers and public officials trying to grapple with an existing (rather than emerging) technology. There will be little (if any) opportunity to guide technological development in consideration of public values. Autonomous (self-driving) vehicles are an example of an emerging disruptive technology.
Autonomous vehicles are really mobile sensing units, relying on electronic sensors and internet connectivity to interface with the driver, other vehicles, and the transportation infrastructure. These vehicles are expected to improve vehicle fuel efficiency, performance, and safety. Drivers are expected to be much more passive, with relatively little direct engagement in the operation of the vehicle. However, the current legal and regulatory structures are grounded in the assumption of driver engagement and responsibility. The decoupling of driver engagement and vehicle operation has many implications. For instance, do responsibility and liability rest with the driver or the manufacturer if there is an accident? What if the sensors or software fail? What if the driver overrides the technology? This paper uses text data analytics on a 5.5-billion-word document collection (corpus) from the U.S. Congress from 1981 to 2014 to investigate how Congress discusses a particular disruptive innovation: autonomous or self-driving vehicles. Text data analytics have become popular in recent years, particularly with the emergence of Big Data analytics. Text data analytics actually comprise a number of different techniques, including discourse analysis, content analysis, natural language processing, grounded theory, and computational and corpus linguistics. Text data analytics using corpus and computational linguistics allows empirical investigations to include the content and context of text documents. This methodology relies on an understanding of language structure and usage, while using computer assistance to process and analyze large document collections. It is a mixed method, combining quantitative and qualitative analysis. Our preliminary results show that autonomous vehicles have not yet been incorporated in a meaningful way into the policy discussions of Congress; indeed, discussions of autonomous vehicles are extremely rare.
Thus, Congress appears to be allowing the technology to develop without thoughtful guidance about the disruptions that the development and implementation of autonomous vehicles will have on both transportation regulations and liability. References Christensen, C.M. (1997), The Innovator's Dilemma: The Revolutionary Book that Will Change the Way You Do Business, Boston, MA: Harvard Business School Press. Morgan, M.G., and J.M. Peha (2003), "Analysis, Governance, and the Need for Better Institutional Arrangements," in M.G. Morgan and J.M. Peha (Eds.), Science and Technology advice for Congress, Washington DC: Resources for the Future. pp. 3-22.
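One elementary corpus-linguistic step of the kind the paper describes is counting mentions of a target phrase per year across a document collection. The sketch below is a hypothetical toy illustration; the records stand in for the Congressional corpus and the phrase pattern is an assumption for demonstration.

```python
# Sketch: phrase frequency by year over a (toy) document collection.
import re
from collections import Counter

corpus = [  # toy (year, text) records standing in for Congressional documents
    (2012, "Hearing on highway safety and fuel efficiency standards."),
    (2013, "Testimony discussed the autonomous vehicle and its sensors."),
    (2014, "Members debated autonomous vehicles and driver liability."),
    (2014, "Further remarks on transportation infrastructure funding."),
]

# Match "autonomous vehicle" / "autonomous vehicles", case-insensitively.
phrase = re.compile(r"\bautonomous vehicles?\b", re.IGNORECASE)

mentions_by_year = Counter()
for year, text in corpus:
    mentions_by_year[year] += len(phrase.findall(text))
```

In a real corpus-linguistic analysis this raw count would be normalized by the yearly corpus size and combined with context windows (concordances) for qualitative reading.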

09:00
Credibility and Use of Scientific and Technical Information in Environmental Committee Reports of the National Research Council
SPEAKER: unknown

ABSTRACT. ABSTRACT

How important is the use of scientific and technical information (STI) in research policy making? Some believe that STI should be the pre-eminent resource for research policy making, while others do not consider STI a more credible resource for decision making than expressed political values, the perceived self-interest of individuals and groups, or experiential knowledge. This debate is particularly pronounced in the environmental science domain, where it has been argued that too much precedence is given to scientific scholarly studies and not enough consideration to societal concerns (Sarewitz 2004, Sarewitz and Pielke 2007). The focus of this study is the use of STI in National Research Council (NRC) reports in the policy area “natural resource and the environment.” The NRC performs research work for the production of reports on science and technology issues within the National Academies, usually, although not exclusively, for Congress. Despite its long history and important contributions to science and public policy in the United States, surprisingly little research attention has been given to the NRC beyond anecdotes from NRC staff about policy processes and organizations or system reviews of science and technology organizations (Boffey 1975, Ellefson 2000, Shapiro and Guston 2007). By STI, we mean open scientific and technical literature appearing in peer-reviewed academic journals or proceedings, which is a somewhat narrower definition of STI than can be found in other studies (McClure 1988, Walker and Hurt 1990), albeit with the benefit of being readily operational.

Our method involves analysis of the characteristics of 589 NRC reports published from 2005 to 2012, of which the largest share (157, or 27%) is in the environmental area. We exclude workshop studies and narrow or very particular studies (such as those for the Transportation Bureau or in the Health and Safety area). For each of these studies, we collect information about the study (e.g., size of the report, report policy area), about the committee chair and members (e.g., affiliation with academia, business, government), and about these individuals’ publication history (e.g., whether they had any scholarly publications prior to the report’s publication). Our particular interest is the extent to which NRC reports in the environmental area include STI in their cited references or footnotes, compared with other NRC policy areas. In turn, we examine whether these reports are then conveyed to Congress in briefings, testimony, or references to the report in Congressional documents. We find that NRC environmental reports have the highest share of references that are STI (mean = 43%, median = 42%) of any of the policy areas (mean = 26%, median = 20% for all other policy areas). However, in the environmental domain, NRC reports with STI are less likely to be referenced in Congressional documents or in briefings and testimony than those without STI. These results lend quantitative support to existing studies suggesting that “sciencing-up” is not sufficient for environmental policy making.
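The core measurement above (the share of a report's cited references that are STI, summarized by mean and median per policy area) can be sketched as follows. This is a hypothetical toy example; the report records and counts are invented for illustration, not drawn from the study's data.

```python
# Sketch: mean/median STI reference share per policy area.
import statistics

# toy records: (policy_area, n_sti_references, n_total_references)
reports = [
    ("environment", 45, 100),
    ("environment", 40, 100),
    ("energy", 20, 100),
    ("agriculture", 25, 100),
]

# Group each report's STI share by policy area.
shares = {}
for area, sti, total in reports:
    shares.setdefault(area, []).append(sti / total)

# Summarize with mean and median, as in the comparison described above.
summary = {
    area: (statistics.mean(vals), statistics.median(vals))
    for area, vals in shares.items()
}
```

Comparing `summary` across areas reproduces the kind of contrast reported (e.g., environmental reports having the highest mean and median STI shares).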

REFERENCES

Boffey, P. (1975). The Brain Bank of America: An Inquiry into the Politics of Science. New York: McGraw Hill.

Ellefson, P. V. (2000). “Integrating Science and Policy Development: Case of the National Research Council and US National Policy Focused on Non-federal Forests.” Forest Policy and Economics 1 (1): 81–94.

McClure, C. (1988). The Federal technical report literature: Research needs and issues. Government Information Quarterly 5(1), 27-44.

Sarewitz, D. (2004). How science makes environmental controversies worse. Environmental Science & Policy, 7(5), 385-403.

Sarewitz, D., & Pielke Jr, R. A. (2007). The neglected heart of science policy: reconciling supply of and demand for science. Environmental Science & Policy, 10(1), 5-16.

Shapiro, S., & Guston, D. (2007). Procedural Control of the Bureaucracy, Peer Review, and Epistemic Drift. Journal of Public Administration Research and Theory, 17(4), 535-551.

Walker R. D., Hurt C. D., (1990). Scientific and Technical Literature, Chicago: American Library Association.

ACKNOWLEDGEMENT

US National Science Foundation, Science of Science and Innovation Policy, Award #1262251.

09:15
The shifting mission and associated tensions of NASA in the creation of a Low-Earth Orbit economy: of public agencies and the creation of markets.
SPEAKER: unknown

ABSTRACT. On 21 November 2013, President Barack Obama signed the National Space Transportation Policy, which called for a greater role for the private sector in Earth-to-Low-Earth-Orbit (LEO) transportation systems, as well as new combinations of public-private activities in space transportation infrastructure and services.

Across the board we see an increasingly diverse mix of private actors becoming involved in LEO: orbital and sub-orbital transport systems (SpaceX, Orbital, Virgin Galactic, XCOR, etc.); exploitation of LEO platforms such as the International Space Station (ISS) for research, technology demonstration and as a launch platform (projects via CASIS and NanoRacks); companies like Planet Labs attracting large investment; and the development of technologies for the next generation of human-inhabited in-orbit platforms (e.g. the Bigelow Expandable Activity Module).

With the announcement of the extension of ISS operations until 2024, and looking at other national projections, the crew of the ISS will be joined by the Chinese Tiangong-2 space lab to be launched in 2016, the Chinese space station Tiangong-3 scheduled for 2022, India's two-person vehicle to be tested in LEO in the coming years, and other planned activities.

With these developments, Low-Earth Orbit is set to become more diverse, with more players and a variety of public and private partnerships doing research and technology development, providing services, and creating and appropriating value in a diverse mix of ways. With increasing pressure on governmental funding, NASA has chosen to explore ways in which it can shift some of the costs to the private sector (particularly in LEO) whilst its Exploration Programme focuses on the Moon, Mars and beyond. NASA innovation policy now looks to shaping the LEO economy.

Innovation policy has generally been informed by market failure theory, but mission-oriented investments such as those of NASA (Mowery, 2010) require a different framework: one that explicitly takes market creation/shaping into account and aligns with the national innovation policy mix. This is explicit in other U.S. mission-oriented agencies such as the NIH (Sampat 2012) and DARPA (Mowery 2012).

Market failure theory assumes a market already exists, so that all that is needed is 'de-risking' and incentivising through tax incentives, or investment in narrow areas defined as 'public goods'. In reality, the developments that led to Silicon Valley required public investment along the entire innovation chain (Mazzucato 2013, Weiss 2014), not only in the classic public-good area (basic research). They required substantial basic, applied and early-stage financing of companies. It is important to note that these substantial investments were, and are, driven by missions, from going to the Moon to tackling today's problems such as climate change or the war on cancer. This requires creating a market rather than just fixing an existing one. Do frameworks for innovation policy embody different notions of public good/value/purpose? How does this work for NASA, a mission-oriented agency which over the past 50 years has been set up for technology demonstration and science, and which is now attempting to include, in a variety of ways and to different extents, market creation?

This paper builds on a study conducted by the authors between December 2014 and March 2015 for NASA, whose evolving and shifting stance on including or excluding commercialisation in its mission since the early days of the Reagan administration has been brought to the fore once more by President Obama's announcement and by the interest of Congress and NASA in creating a LEO public-private ecosystem.

The study explores current and potential strategies and approaches to handling market creation in LEO, and the extent to which it should be pursued. In particular, this case study examines the utilisation of the International Space Station as a stepping stone to a LEO public-private ecosystem. The case describes (a) the mix of activities and actors in the current LEO ecosystem, (b) what sort of economic wealth is being created, (c) who is shaping the direction of the evolving LEO ecosystem and (d) what infrastructures are important for success (with a view to the roles public and private actors could play).

The paper also looks to the broader question of mission-oriented public agencies and innovation policy. This particular case of NASA shows a mixture of approaches in handling shifts in mission, which speaks to a broader question of the role of public agencies in creating, stimulating and directing markets.

Contribution: We bring to the conference a fresh case concerning a key U.S. mission-oriented agency. It describes the shifts and the emergence of a LEO public-private ecosystem as an attempt at market creation, and draws out lessons for the space research and innovation system and, more broadly, for industry and innovation policy.

10:30-12:00 Session 13A: Inclusive Innovation #1
10:30
Science, Technology and Innovation (STI) for inclusive development: the analysis of ongoing policy experiments in Latin America.

ABSTRACT. The understanding that science, technology and innovation (STI) policy should explicitly integrate the goal of social inclusion into the well-established competitiveness rationale has gained ground. The scope of STI policy must be expanded: it is not only about improving competitiveness, nor about claiming a role in enhancing the overall wellbeing of society through trickle-down effects. STI policy for development requires an explicit move towards the expansion of social inclusion. Recent policy discourse follows this new trend but remains uncertain about how to develop new policy tools that both stimulate innovation and expand social inclusion, what these policy tools are or could be, and what evaluation instruments could indicate whether those higher-level goals are being accomplished. The move behind this policy regime transformation concerns not only the "what" of the new agenda, but also the "how": how to define the new agenda, how to stimulate innovation in a socially inclusive way, how to evaluate these transformative processes, and how to bring about this new policy arena, rationale and conceptualization. Furthermore, the "how" does not end there, because it also implies a deep transformation in the way problems and solutions are defined and designed, and in the heuristics developed to reach alternative solutions. It also requires an epistemic shift from disciplinary to interdisciplinary approaches: from the prevailing economic-competitiveness vision of STI policy towards the interdisciplinary and systemic rationales of a socially inclusive STI policy approach.

In spite of recent declining trends in levels of inequality in various countries, Latin America remains the most unequal region in the world (CEPAL 2014). Explicitly connecting STI policy to social inclusion goals is necessary for strengthening capabilities and learning societies. This paper analyzes ongoing policy experiments aimed at integrating social inclusion and STI policy in Bolivia, Colombia, Costa Rica, Panama, and Uruguay. The analysis combines the broader macro and meso levels of policy regime change with micro-level study of the specific dynamics around these policy experiments.

REFERENCES

CEPAL (2014). Panorama Social de America Latina. Santiago de Chile: CEPAL.

10:45
Technology policy for social inclusion: agendas, cultures and governance constraints
SPEAKER: unknown

ABSTRACT. After the big socio-economic crisis of 2001, Argentina's subsequent governments attempted to redirect public policy strategies towards vulnerable social groups, focusing their action on income redistribution and improving the population's quality of life. In 2014, the project "Access to goods: water for development" (DAPED, for its Spanish acronym) was approved as the main project oriented towards social inclusion financed by the Ministry of Science, Technology and Productive Innovation. The project was implemented after five years of negotiations between different agencies and public institutions: two R&D institutes, the Instituto de Estudios sobre la Ciencia y Tecnología (Institute of Science and Technology Studies - IESCT, for its Spanish acronym) of the Universidad Nacional de Quilmes and the Instituto Nacional de Tecnología Agropecuaria (National Institute of Agricultural Technology - INTA, for its Spanish acronym); two Argentinean government agencies, the Ministry of Social Development (MDS, for its Spanish acronym) and the Ministry of Science, Technology and Productive Innovation (MINCYT, for its Spanish acronym); and a regional funding agency, the Inter-American Development Bank (BID, for its Spanish acronym). The process was initiated by a proposal generated by the Area of Social Studies of Technology and Innovation of the IESCT, in response to a specific demand from the MINCYT. The Ministry's central and explicit objective was to implement a new method for technology management (design, implementation, evaluation) to solve social and environmental problems, based on the conceptual approach of Social Technological Systems. The notion of Social Technological Systems problematizes the one-dimensional view of R&D institutions and government agencies, which is then confirmed and stabilized in linear and mono-disciplinary interventions to solve development problems.
Instead, Social Technological Systems propose an alternative approach to the design, development, implementation and management of technology oriented to solving social and environmental problems systemically, generating social and economic dynamics for inclusion and sustainable development. The issue is then not simply employment, access to water or energy, food production or health problems; rather, the approach makes it possible to analyze (and modify) problems of techno-productive sustainability, the fundamental right of access to goods and democratic design; ultimately, local development issues. Therefore, the proposed approach must be systemic. Even though the above-mentioned political actors supported and promoted the project within the highest tiers of government agencies, the proposal suffered design changes along the way. In this trajectory, several modifications and adjustments had to be made in the process of articulation and coordination between different organizational cultures, heterogeneous knowledge and ways of learning of public agencies, as well as to meet the criteria of the Inter-American Development Bank (the international financer). This study is based on an analytical framework that triangulates conceptualizations from the Social Studies of Technology, Policy Analysis and the Economics of Technological Change. This approach helps explain the processes of social construction of the working and utility of technologies ("successes" or "failures"), the construction of problem-solver relations, and the generation and stabilization of socio-technical styles, organizational cultures and policy agendas. The main goal of this paper is to analyze the conception and implementation of the DAPED project between 2009 and 2013.
To this end, several operations are deployed:

• The work explores the strategies and policies of each public institution (IESCT and INTA), the articulation of institutional capacities, and the learning dynamics of different and heterogeneous work teams such as public science and technology institutes and government agencies.

• The analysis draws on the experience of the research team of the Area of Social Studies of Technology and Innovation of the IESCT in positioning the DAPED project on the political agenda, its assumptions and organizational practices, and the limitations and scope of inter-agency coordination. In other words, it is an attempt to convert a concrete experience into an input for the design of policies on technologies for social inclusion.

• Finally, the article concludes with lessons learned about public policy design and intervention in the development of technologies for social inclusion.

11:00
Inclusive innovation and multi-regime dynamics: The case of mobile money in Kenya
SPEAKER: Elsie Onsongo

ABSTRACT. Mobile payment systems have seen tremendous growth and penetration in various developing countries. One exemplar is Kenya's M-Pesa, developed to foster financial inclusion. Scholars have studied M-Pesa and Kenya's "mobile money revolution" from a variety of perspectives: economic impacts, household patterns of use, regulatory issues, and technical and infrastructural aspects. However, these elements have so far been investigated piecemeal. A comprehensive approach that traces the development of M-Pesa while analysing related societal processes and event sequences, and highlighting the totality of actors involved in the mobile money revolution, has yet to be used. In this paper, I adopt such an approach: the multi-level perspective on sociotechnical transitions (MLP) (Geels, 2004). Following Yin's (2009) case study approach, I make analytical generalisations about inclusive innovations in developing-country contexts.

Specifically, I explore the following questions: what is the nature of the social setting in which inclusive innovations are developed, deployed and adopted? What are the characteristics of the institutions in these settings? What does ‘inclusion’ mean from the sociotechnical perspective, and how does social inclusion manifest? What transformations are brought about by the embedding of inclusive innovations in these sociotechnical contexts?

Using insights from the sociotechnical literature and institutional economics, I characterise the settings in which inclusive innovations are deployed as comprising a formal regime and an informal regime. According to Geels (2006), a sociotechnical regime consists of a network of actors, institutions that guide their interactions, and material and technical elements. Each regime exhibits one or more specific institutional logics that shape the actions and perspectives of the actors within it (Fuenfschilling and Truffer, 2014). A regime is therefore delineated by identifying the ideal-type field logics prevailing in a given system and the configuration of actors around each field logic. In addition, a boundary between two regimes is said to exist when couplings between constituting elements are denser and stronger within a specific regime than outside it (Konrad et al., 2008).

Applying this conceptual perspective to financial services in developing countries, it is evident that the social setting is structured into domains in which either formality or informality prevails as the field logic. In my case study, the formal regime is represented by the network of incumbent commercial banks, users running personal bank accounts, and special interest groups such as professional bodies, bank lobby groups and policy makers. These actors' interactions are enabled or constrained by codified rules, and electronic technologies are used pervasively. The informal regime, on the other hand, consists of actors such as informal financial groups and grassroots organisations, like rotating savings and credit associations (ROSCAs) and welfare and clan groups (WCGs), as well as households in low-income and rural areas. Actors in the informal regime manage their finances through informal practices such as pooling, 'gift economics', 'mattress banking', table banking, and cash-based systems. I show that, prior to the deployment of M-Pesa, actors in these two regimes rarely interacted or shared the same financial management practices.

I then analyse the development and deployment of M-Pesa as an inclusive innovation in this multi-regime context. According to IDRC (2011), inclusive innovations are developed to foster the inclusion of 'the poor' or the excluded in the formal economy, or what is considered the 'mainstream of development'. I argue, however, that rather than simply including the excluded in the formal regime, inclusive innovations trigger the convergence of the formal and informal regimes and, in the process, the creation of a single hybrid regime consisting of a new architecture of actors, institutions and technologies. Social inclusion is achieved when actors previously located in different sociotechnical regimes begin to interact. In addition, the previously disparate institutional logics converge as shared rules are created and negotiated, and technologies are mutually developed and shared. The M-Pesa case illustrates how the development of the innovation involved actors in both the formal and the informal regime. The case also illustrates that, upon M-Pesa's deployment, reconfiguration processes occurred in both the formal and informal regimes. The emerging 'mobile money ecosystem' (Kendall et al., 2011) is an exemplar of a hybrid sociotechnical regime that integrates elements of both the formal and the informal regime.

This paper applies a new analytical approach—the MLP and institutional theory—to generate insights on the nature of contexts in which inclusive innovations are deployed, and the nature of the transformation triggered by such deployment.

11:15
Inclusive development dynamics: a case study on STI management bridging scale, sustainability and user participation in biotechnology in Argentina
SPEAKER: unknown

ABSTRACT. OBJECTIVE: This paper aims to analyze how Science, Technology and Innovation (STI) management and policy strategies are deployed towards achieving a high-scale, sustainable, knowledge-intensive and locally grounded project, through the experience of an Argentinean biotechnology-based nutritional supplement delivered in schools to address child malnutrition. THE PROBLEM: An emerging group of scholars argues that research and development (R&D) and innovation in knowledge-intensive technologies can play an important role in constructing solutions to the problems of poverty and lack of access to basic goods (clean water, nutrition, sanitary services, healthcare, energy, etc.). At the policy level, international agencies have designed specific programs on what has been called "innovation for inclusive development" (IID). Meanwhile, at the national scale, in the last five years Latin American countries like Argentina have implemented several policies to support capacity building in STI as a key element in promoting local development dynamics. In this political agenda, biotechnology is considered a strategic area, given an over 30-year trajectory of growing capacities in public R&D units and in private and public-private firms. Nevertheless, it is hard to establish a correlation between these efforts and STI investment, on the one hand, and the generation of solutions to the country's major social problems, on the other. Despite the official discourses of the last decade, knowledge-intensive projects geared towards social inclusion still occupy only a marginal place. SCENARIO: An analysis of current public policies shows that STI programs and resource mobilization go either to basic research or remain oriented more towards windows of opportunity and economic competitiveness than towards social development.
Funding instruments, international paper-based evaluation systems and the characteristics of local scientific culture constitute incentives that hinder researchers from engaging in agendas based on local issues. Meanwhile, an empirical survey conducted within the present research project, covering over 40 R&D experiences that explicitly attempt to develop biotechnological solutions to social problems, shows that these experiences remain minimal, scarcely visible and small in scale, have fewer resources than mainstream agendas, and do not articulate with local social and productive development agendas. At the same time, a review of IID case studies that pursued the implementation of high-scale knowledge-intensive technologies shows a widespread common framing of "inclusion" as "access to goods" and of "the poor" as "consumers". This vision subordinates local capacities to coping through adaptive strategies in informal settings and precludes empowerment processes that may include broader participation in technology building and decision making. KEY QUESTIONS: In this scenario, STI management and policy making in knowledge-intensive technologies for inclusive development thus present several challenges: How to gain scale and, at the same time, empower different groups of actors? How to develop knowledge-intensive technologies to solve social problems while promoting user participation in technology development processes? How can social, sanitary and productive policies be articulated with R&D agendas oriented towards social inclusion? ANSWERS FROM THE CASE STUDY: This work explores these key issues through the case of "Yogurito Escolar", a probiotic yoghurt designed to prevent respiratory and gastrointestinal diseases, developed by a public R&D institute in Tucumán province, Argentina. Developed in coordination with provincial and national organizations, and manufactured by a small local firm, the yoghurt became a central feature of a public food-assistance social program.
Moreover, while addressing nutritional and health deficiencies by delivering the "Yogurito" to children in public primary schools, the program articulated a strategy for local development through the upgrading of impoverished small and medium dairy producers. The initiative has been running for over six years within the provincial territory and has achieved a process of scaling up, distributing the probiotic yoghurt to over 200 thousand children. It has also engaged in new projects in order to achieve its sustainability. The present proposal examines, within the case of "Yogurito Escolar", the STI management and policy strategies that attempted to bridge scaling-up and sustainability strategies (to stimulate far-reaching inclusive development dynamics) with local empowerment strategies (to assure local adequacy and foster local capacity building). Through the trajectory of the "Yogurito", the paper examines the learning and innovation strategies, in terms of technological design and institutional arrangements, that allowed its scaling-up, the articulation of scientific and locally grounded capacities, user participation and the unfolding of a regional development policy scheme.

10:30-12:00 Session 13B: Patent Policy
10:30
Evolution of creative imitation under the TRIPS compliant patent regime: Development of biosimilar capabilities in the Indian pharmaceutical sector
SPEAKER: Dinar Kale

ABSTRACT. The impact of the TRIPS agreement on the growth and technological capabilities of the Indian pharmaceutical industry has emerged as a critical issue in the quest for affordable healthcare for poor populations all over the world. Post-1990, the Indian pharmaceutical industry emerged as a cheap supplier of generic drugs all over the world. Indian firms were instrumental in creating affordable healthcare in India as well as in other developing countries. Evidence suggests that Indian firms responded to these challenges by targeting small-molecule generic markets in advanced countries and exploiting custom research outsourcing opportunities. However, the decline in traditional generic markets, acquisition challenges posed by MNCs, regulatory hurdles imposed by the FDA and failures in managing new drug development have created significant challenges of growth and survival for the leading Indian firms. In this context, the emergence of the biosimilar segment in the global generics market has created a new set of opportunities, albeit with its own set of challenges. Biosimilars (also known as biogenerics, follow-on biologics and subsequent-entry biologics) are generic versions of biologics - a therapeutic drug category comprising large, complex molecules. Even if the formulation and production process is known, a biosimilar, as the generic version of a biologic, cannot be identical to the reference biologic. Government agencies around the world are grappling with the issue of defining appropriate frameworks to regulate the entry of biosimilars into the market. There is a significant difference between the position of the US and that of regulatory agencies in the rest of the world (including the EU), adding to the complexity of the problem.

Some Indian firms have made a gradual transition towards developing biosimilar capabilities, and using case studies of four Indian firms this paper explores the underlying learning processes driving this creative evolution in the Indian pharmaceutical industry. Primary data was collected through interviews with R&D scientists and key managers from these four Indian firms using a semi-structured questionnaire. The data was triangulated with the help of secondary data sources and interviews with industry experts, and analysed using an analytical framework based on the dynamic capabilities approach.

This research highlights that entry into the biosimilar sector represents the next phase of creative imitation for Indian pharmaceutical firms under the TRIPS regime. It demonstrates clear differences between the technology acquisition and market entry strategies adopted by firms to move from duplicative imitation to creative imitation, revealing firm-level experimentation with strategies and the influence of path dependencies in shaping them. The paper clearly shows the role of regulation in shaping the evolution of technological trajectories in developing-country firms.

In the past, the challenge facing firms in the Indian pharmaceutical industry was to make a successful transition from the era of the licence raj to an era of global competition. This they did, and much scholarship in recent years has focussed on what the industry did to achieve this feat. Our paper adds to this growing literature by tracking the next phase in the evolution of creative imitation strategies adopted by Indian pharmaceutical firms as a response to disruptive regulatory change.

10:45
International transfer of technology and intellectual property rights: the case of the Brazilian system of innovation

ABSTRACT. Given the new institutional context of intellectual property rights, we contribute to the discussion of their role in technology transfer through an exploratory data analysis. We start with a brief presentation of the major institutional changes from the 1990s onwards in order to characterize the institutional framework, showing that TRIPS had different implications in different innovation systems and that there is no consensus about its impact on the attraction of FDI and on the international transfer of technology. We then present aggregated data on patent applications, and on both expenditures and receipts for the use of intellectual property, in countries with different innovation systems. We show that most new technological knowledge is produced in mature innovation systems; these countries pay more than the laggard ones for the use of intellectual property, but they also receive much more, yielding a positive balance in these transactions. Finally, we present the recent Brazilian experience with intellectual property and show that the Brazilian patent office (INPI) grants more patents to foreigners than to Brazilians in high-knowledge-intensity areas considered 'leading to the future'. This finding has implications: non-resident patents do not reflect national inventive activities, and they obviously do not directly affect the Brazilian innovation system in the way some expect.

11:00
Intellectual Property Rights and Innovation: MNCs in Pharmaceutical Industry in India after TRIPS

ABSTRACT. The history of India's pharmaceutical industry and the roles played by multinational corporations (MNCs) and Indian companies are well known - the industry has developed since the 1970s, after India abolished product patents in pharmaceuticals, essentially through the efforts of indigenous companies. The MNCs were relegated to the background.

But the environment and situation have changed since then. From 1 January 2005, drug product patent protection has been re-introduced in India to comply with the TRIPS agreement of the World Trade Organization.

The principal economic rationale for granting patents is that they will stimulate investment in research for innovation. This is the expected positive effect. But patent rights, which exclude others from producing and marketing the product, inhibit competition and hence lead to high prices and reduced access. This is the negative effect. Reminiscent of the period before the 1970s, the MNCs have started marketing new imported patented drugs at exorbitant prices, particularly for life-threatening diseases such as cancer. In this paper we focus on innovation, where the impact is supposed to be positive.

That product patent protection may provide incentives for R&D for innovation is acknowledged in the literature. But if the benefits of the technological progress that is supposed to follow from patent protection accrue in developed countries, not in developing countries, then developing countries do not gain (though they bear the high price of patented products). Has the behaviour of the MNCs changed? Are they contributing to technological progress in the country so as to justify product patent protection?

Among the ways the MNCs can contribute are: enhancing their R&D efforts; using the patent system properly, for genuine inventions and not for preventing generic entry; and working the patents obtained locally.

The following are the main conclusions of the study:

1. TRIPS and the re-introduction of product patent protection in pharmaceuticals have not induced the MNCs to enhance their R&D activities. In fact, one observes a deterioration in recent years. In the early 1990s, before TRIPS came into effect, these MNCs spent on R&D only about 1% of sales. Since then, rather than rising, R&D expenditure as a percentage of sales has actually declined, to about 0.3% in 2012-13. In absolute terms, too, R&D expenditure has recently started falling.

2. MNCs have obtained product patents legally in a large number of cases. But a number of their patent applications have been either rejected or revoked after being granted. Some of the patents denied recently include: imatinib mesylate (brand name Glivec, Novartis); gefitinib (Iressa, AstraZeneca); peginterferon alfa-2a (Pegasys, Roche); formoterol and mometasone aerosol suspension (Merck); and bimatoprost and timolol (Ganfort, Allergan). The main grounds on which these patents were denied are that the claimed inventions are "obvious" (Section 2(1)(j and ja)) and/or that they do not show enhanced efficacy (Section 3(d)). Denial of patents on these grounds demonstrates that the MNCs have not restricted their claims to genuine inventions. They have tried to take advantage of the patent system and to obtain patents even when they are not eligible for them under Indian law. Several patent cases are discussed to argue that MNCs have been aggressively asserting their patent rights not to secure the genuine patents to which they are entitled but to prevent generic competition.

3. Patents are granted not only in the expectation that they will stimulate R&D for innovation, but also in the expectation that the disclosure of inventions in patent applications and the working of patents will lead to diffusion of technology and facilitate further progress. Under Section 146 of the Patents Act, 1970, patentees are required to furnish to the Controller "the extent to which the patented invention has been worked on a commercial scale in India". Of the 1,115 patented products for which information was available for 16 MNCs, only 140 (12.6%) were commercially worked, i.e., marketed in India. Of those 140, information on whether the product was manufactured in India was available for only 92, and only 4 of these were manufactured in India, including one that involves merely packaging bulk imports. The remaining 88 patented products were imported and marketed in India. If patented drugs are imported rather than manufactured locally, the country gains nothing technologically while paying the higher price of monopoly drugs, and this seriously calls into question the propriety of having product patents in such cases.

11:15
Individual versus Institutional Ownership of University-discovered Inventions
SPEAKER: unknown

ABSTRACT. Intellectual property (IP) policies are among the most powerful instruments shaping the incentives that drive the discovery and commercialization of knowledge. For U.S. academic institutions the Bayh-Dole Act of 1980 is perhaps the most influential and far-reaching of these IP policies. The legislation facilitated private institutional ownership of inventions discovered by researchers who were supported by federal funds. Many observers credit the Bayh-Dole Act with spurring university patenting and licensing that, in turn, stimulated innovation and entrepreneurship (The Economist 2002; OECD 2003; Stevens 2004). Based on this perceived success, the Bayh-Dole Act has become a model of university IP policy that is being debated and emulated in many countries around the world including Germany, Denmark, Japan, China, and others (OECD 2003; Mowery and Sampat 2005; So et al. 2008).

The key component of the Bayh-Dole model is granting the university, not the inventor, ownership rights to patentable inventions discovered using public research funds (Crespi et al. 2006; Geuna and Nesta 2006; Kenney and Patton 2009). However, the incentive effects on academic inventors of university versus individual ownership are not well understood. In a theoretical contribution, Hellmann (2007) found that university ownership is efficient when inventors must search for a commercial partner as long as the cost of search is higher for inventors than for the university. Using survey and case study evidence, Litan et al. (2007) and Kenney and Patton (2009) argued that conflicting objectives and excessive bureaucracy make institutional ownership ineffective and suggested an individual ownership system may be superior. Due to a paucity of evidence, however, the U.S. National Research Council recently concluded that “arguments for superiority of an inventor-driven system of technology transfer are largely conjectural” (NRC 2010).

Our analysis uses the framework of Pakes and Griliches (1984) and a quasi-experimental research design to provide the first systematic evidence on how ownership of intellectual property rights impacts patenting of university-discovered inventions. We examine a fundamental change in German patent law from individual to institutional ownership. Prior to 2002, university professors and researchers had exclusive intellectual property rights to their inventions. This “Professor’s Privilege” allowed university researchers to decide whether or not to patent and how to commercialize their discoveries, even if the underlying research was supported by public funds. After 2002, universities were granted the intellectual property rights to all inventions made by their employees and this shifted the decision to patent from the researchers to the universities. The policy goal was to increase patenting of university-invented technologies which is often used as a surrogate indicator of successful university technology transfer.

By changing the agent who makes the patenting decision, the abolishment of Professor’s Privilege caused a “regime shift” that substituted institutional benefit and cost schedules for those of the individual inventors. The net effect on the volume of patenting depends primarily on the relative costs between the regimes. To identify how the regime shift affected patenting, we exploit the researcher-level exogeneity of the 2002 abolishment of Professor’s Privilege along with the institutional structure of the German research system in which universities and other public research organizations (PROs) coexist. PRO researchers were not affected by the ownership change and serve as a control group. We use a difference-in-difference methodology and control for the arrival of new patentable discoveries using publications and peer-to-peer matching.
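The difference-in-difference logic described above can be illustrated with a minimal sketch. The group means below are invented for illustration only (they are not the paper's estimates); university researchers play the "treated" role and PRO researchers the control.

```python
# Minimal difference-in-differences sketch (illustrative numbers only):
# university researchers are "treated" by the 2002 abolition of the
# Professor's Privilege; PRO researchers are unaffected and serve as control.

def did_estimate(y):
    """y[group][period] = mean patents per researcher-year.
    group: 'uni' (treated) or 'pro' (control); period: 'pre' or 'post'."""
    treated_change = y["uni"]["post"] - y["uni"]["pre"]
    control_change = y["pro"]["post"] - y["pro"]["pre"]
    # Subtracting the control group's change removes common time trends
    return treated_change - control_change

# Hypothetical mean patent counts per researcher-year
means = {
    "uni": {"pre": 0.40, "post": 0.30},  # university inventors
    "pro": {"pre": 0.35, "post": 0.37},  # PRO inventors (control)
}
effect = did_estimate(means)
print(round(effect, 2))  # -0.12: fewer university patents after 2002
```

The full analysis additionally controls for the arrival of new discoveries (via publications and matching); this sketch shows only the core two-by-two comparison.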

Our analysis shows that fewer university inventions were patented following the 2002 regime shift. For a given discovery, the schedule of benefits to institutional owners, who are the post-change patent decision makers, is lower because the university became an additional party in the negotiations over the split of expected revenues. This partly explains why fewer inventions qualified for patent protection following the regime shift. However, the effect on expected revenues can be offset if institutional costs (broadly conceived) are sufficiently lower than those faced by individual researchers (Hellmann 2007). Our results show that institutional patenting costs were lower for the subset of university inventors who did not have relationships with industry partners prior to the policy change. For those individuals, patenting increased. But, the data also show that most German patenting professors had prior industry relationships. Post-change institutional costs were not low enough to offset the revenue effect for this group.

10:30-12:00 Session 13C: Emerging Technology #2
10:30
EI2 – Emergence Indicator -- Second Order
SPEAKER: unknown

ABSTRACT. We are developing a family of emergence indicators. Our base proposition is to take a set of research publication or patent abstract records – either topical (e.g., covering graphene), or universal for a given data source (e.g., all EPO patents over a 15-year period). From this dataset we identify a set of key terms and refine those. We define a base period; then track activity over successive periods. Using a custom emergent terms script in VantagePoint software, we tag terms that meet our emergence criteria.

Historically, we used a binary approach. Given a base set, with basic term clean-up and consolidation, we would look for the presence of new terms or phrases by tracking items by year. If an item was "new" and did not appear in the base set, it was of interest to us. We refined the process further by establishing frequency cutoffs: a new item must appear X times in the monitoring period to qualify as emergent. This technique was blunt, but we could easily run the process on hundreds of thousands of items.
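The binary screen can be sketched in plain Python (the actual implementation is a VantagePoint script; the terms, counts, and the `min_freq` cutoff here are illustrative assumptions):

```python
from collections import Counter

def new_terms(base_terms, monitoring_term_counts, min_freq=3):
    """Flag terms absent from the base period that appear at least
    min_freq times during the monitoring period."""
    base = set(base_terms)
    return {term for term, n in monitoring_term_counts.items()
            if term not in base and n >= min_freq}

# Hypothetical base-period vocabulary and monitoring-period counts
base = {"carbon nanotube", "thin film"}
monitored = Counter({"graphene oxide": 7, "thin film": 12, "spin qubit": 2})
print(new_terms(base, monitored))  # {'graphene oxide'}
```

"thin film" is excluded because it already appears in the base set, and "spin qubit" because it falls below the frequency cutoff.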

The current work extends the earlier process by requiring not just the presence of a term at sufficient frequency, but also that the growth in the term's usage fit a growth model. We have found that logistic curves (Roper et al., 2011) generally represent the R&D document behavior of many emergent technology spaces well [but not all; witness the extended near-exponential growth of semiconductor capabilities, as modeled by Moore's Law]. These S-shaped curves suit Rogers' (1962) early characterization of innovation processes.

We expect that individual terms defining an emergent technology will exhibit similar behavior. Within our new model, we therefore track the growth over time of each candidate term. If a term's trajectory fits our growth model, the term is flagged as emergent by the script; if not, it is not flagged, even if the term is new to the document set. Although computationally more expensive, the process can still be run effectively on large document sets, and it significantly reduces the number of terms flagged as emergent compared with our earlier process.
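A minimal sketch of the growth-model test, assuming a crude grid-search logistic fit in place of whatever fitting routine the actual VantagePoint script uses (the yearly counts and the R² acceptance threshold are invented for illustration):

```python
import math

def logistic(t, K, r, t0):
    """Logistic growth curve: cumulative term usage saturating at K."""
    return K / (1.0 + math.exp(-r * (t - t0)))

def fits_logistic(counts, threshold=0.9):
    """Crude grid-search fit of a term's cumulative yearly counts to a
    logistic curve; the term is flagged emergent when the best fit
    explains at least `threshold` of the variance (R^2)."""
    cum, total = [], 0
    for c in counts:
        total += c
        cum.append(total)
    n = len(cum)
    mean = sum(cum) / n
    ss_tot = sum((y - mean) ** 2 for y in cum) or 1.0
    best_r2 = -1.0
    for K in (total, 1.2 * total, 1.5 * total):   # saturation levels
        for r in (0.5, 1.0, 1.5, 2.0):            # growth rates
            for t0 in range(n):                   # inflection years
                ss_res = sum((cum[t] - logistic(t, K, r, t0)) ** 2
                             for t in range(n))
                best_r2 = max(best_r2, 1.0 - ss_res / ss_tot)
    return best_r2 >= threshold

# Hypothetical yearly counts for two candidate terms
emergent = [1, 2, 5, 11, 14, 15]   # S-shaped cumulative growth
sporadic = [9, 0, 0, 0, 0, 9]      # new but erratic usage
print(fits_logistic(emergent), fits_logistic(sporadic))  # True False
```

Under this scheme, a new term that clears the frequency cutoff is still rejected when its usage trajectory does not resemble logistic growth.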

However, our past experience using the appearance of new terms suggests that single terms or phrases are problematic as a direct emergence indicator. Variability in terminology, particularly as a new technology is beginning to coalesce, means that a single term or phrase is an unreliable indicator. Terms can also appear quite quickly in the literature, leaving little data to curve-fit. Rather than keying on this direct emergence measure, we see notable potential in using trend patterns to devise a "second order" indicator. That is, we identify countries, organizations, research teams, and/or individual researchers whose R&D activities stand out based on the prevalence of those emergent terms in their work. We suggest this usage is robust and offers high potential intelligence value for R&D management and policy.

In the presentation we will offer case examples, comparing the emergence indicator behavior.

References

Rogers, Everett (1962). Diffusion of Innovations. Glencoe: Free Press.

Roper, A.T., Cunningham, S.W., Porter, A.L., Mason, T.W., Rossini, F.A., and Banks, J. (2011). Forecasting and Management of Technology, 2nd edition. New York: John Wiley.

[NOTE: Paper intended for Rotolo's session on Defining and operationalizing emerging technology.]

10:45
The Internet of Things and the Economics of Innovation

ABSTRACT. Many different definitions exist for the umbrella term Internet of Things (IoT), but there seems to be a consensus that it will use a wide variety of technologies and will include smart objects and machines that do not require direct human interaction. The IoT would be a global smart infrastructure connecting machines, organizations, homes, vehicles, processes, and people to intelligent networks. This paper focuses on this bigger picture of the IoT, analyzes its present situation at the macro level, and proposes that in order to fully deploy a global IoT, nations will need government mission-oriented interventions that go beyond the typical market-failure approach and the "picking the winner" argument. Governments should invest to create markets, ensure proper collaboration within the field, establish trust with the population, and guarantee returns on their investment for future innovation needs. In the end, these actions will empower firms and national economies and ensure proper development of the IoT. We propose that public interventions towards the IoT will energize the entire private market rather than only the usual digital magnates. There is a need to foster a strong national innovative IoT infrastructure to welcome future technological advancements such as autonomous vehicles, biochips, smart robots, and, last but not least, artificial intelligence (AI). This paper also argues that governments and firms should approach the challenges of the present technological revolution as a "test", or "experience", for developing dynamic interventions to control and regulate the opportunities and real potential threats posed by not-so-distant artificial intelligence.

In this paper, I explore the theme of the Internet of Things within the conceptual framework of the economics of innovation. I argue that the IoT faces an imminent and complex situation that already extends beyond the problems of standardization, security and privacy. As of 2015, more than 4,400 businesses involved in more than 50 IoT-related consortia, associations or alliances are trying to lead and influence the development of the IoT, with various goals and objectives. Even though this convergence towards private-sector collaboration represents increasing international collaboration, it is a significant source of concern when we realize that the public has been excluded from the global discussion on the Internet of Things. These concerns range from public acceptance and the pursuit of economic growth, to clear solutions for sustainable development, to the potentially disastrous effects of discrepancies between nations and people, with their social and environmental consequences. The paper argues that such a broad technological revolution will have profound impacts on societal development, which can be disastrous economically and socially if the bigger picture of the IoT is not well understood. I argue that governments are the missing link: they need to be proactive and become agents of development and market creators. Their role goes beyond the typical arguments that governments need only address market failures or stop picking winners.

The paper establishes the validity of this hypothesis by: (1) exploring broad definitions of the IoT, its challenges and opportunities; (2) analyzing the dynamics at play and the hype in the international development of the IoT; and (3) comparing national competitive advantages in innovation and in the field of IoT in Canada, China, the USA, France, the United Kingdom, Sweden, and the European Union as a whole. It concludes with the formulation of various lines of approach for governments to cope with the development of the Internet of Things.

11:00
A framework for building a search term strategy in an emerging technology: The case of synthetic biology
SPEAKER: unknown

ABSTRACT. The concept of "emerging technology" has been associated with radical novelty, steep upward growth trajectories, and extensive societal and economic impact (Rotolo et al. 2015). A central feature is change; yet regardless of whether this change is evolutionary or disruptive, there is often ambiguity and vagueness in defining the technology. This makes it difficult for those who wish to develop operational characterizations, apply measurement approaches, or undertake foresight activities. Several factors contribute to this uncertainty. The body of knowledge and set of techniques underpinning an emerging technology inherently evolve, accompanied by change in scientific and technological terms (Arora et al. 2013). Yet scientists active in an emerging field may say that what they are doing today is no different from what they did a year ago (Khushf 2004). At some point in time, often associated with the rise of government R&D funding (Shapira & Wang 2010), an emerging technology becomes visible and scientists identify with it, which potentially makes it easier at that moment to assess what research and innovation is occurring. Particular challenges arise in earlier phases, when field development is more fragmented (and government funding is less overt). This is the stage when it is most useful to identify potential emerging technologies and their possible trajectories, including from a bibliometric perspective, but also when it is most difficult to do so.

Not surprisingly, this phase gives rise to a diversity of measurement approaches. Keyword-based strategies, particularly in early periods of emergence, can use simple variations of the most prominent "name" of the technology. But such simplistic approaches may be too narrow, leaving out key subgroups in different sub-disciplines that have the potential to make major contributions to the emerging technology. More sophisticated, broad-based approaches that extend beyond a handful of keywords may conversely be too inclusive, capturing the work of scientists who are uncomfortable with being associated with, and indeed with directly using, the prominent name of the technology. This juxtaposition reflects the classic trade-off between recall and precision. Other approaches based on identifying leading scientists and those who cite their work may be tried (Zitt & Bassecoulard 2006). But given the changing set of scientists typically associated with an emerging technology, especially in new interdisciplinary domains that attract new combinations of researchers, this approach also has its weaknesses. Some have pointed to the prominent role of instruments in demarcating the emergence of a technology (Rafols & Meyer 2010); however, this method requires good information about the instruments used in the emerging technology and, indeed, about the extent to which the emerging technology is centered on such instruments. Events such as workshops that bring participants together to define the emerging technology can also be used to fix a starting point, as can the introduction of key government funding programs, but it is not uncommon to see the subsequent work end when the funding does (Chase et al. 2012).

How does one navigate these challenges in defining and measuring an emerging technology, at least in a bibliometric sense, in its early stages of emergence? The aim of this paper is to examine two approaches to identify the emergence of a technology based on a case study of synthetic biology. The current emerging domain of synthetic biology draws, in part, on a legacy that extends back to the human genome project of the early 1990s, giving it an evolutionary quality. Indeed, epistemic distinctions between synthetic biology and other subfields such as systems biology have been debated (O'Malley et al. 2007; Calvert 2008).

The paper builds a framework that compares two early bibliometric search strategies (Oldham et al. 2012; van Doren et al. 2013), and then suggests a way forward that offers a more promising balance of precision and recall to bibliometrically define and measure synthetic biology. The paper uses noise ratios to individually assess the ability of the prior search strategies to distinguish synthetic biology, and supplements these terms through the application of a natural language processing-based keyword network.
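The precision/recall/noise comparison of search strategies can be sketched as follows (the record IDs and the two strategies' result sets are hypothetical stand-ins, not drawn from the paper's benchmark corpus):

```python
def precision_recall(retrieved, relevant):
    """Compare a search strategy's retrieved record IDs against a
    gold-standard set of relevant records."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    noise = 1.0 - precision  # share of retrieved records that are off-topic
    return precision, recall, noise

# Hypothetical benchmark: records 1-8 are genuine synthetic-biology papers
gold = range(1, 9)
narrow = [1, 2, 3]                     # exact-phrase strategy
broad = [1, 2, 3, 4, 5, 6, 9, 10, 11]  # expanded keyword list

for name, strategy in [("narrow", narrow), ("broad", broad)]:
    p, r, n = precision_recall(strategy, gold)
    print(name, round(p, 2), round(r, 2), round(n, 2))
# narrow 1.0 0.38 0.0
# broad 0.67 0.75 0.33
```

The narrow strategy is precise but misses most relevant records; the broad strategy recovers more of them at the cost of a higher noise ratio, which is the trade-off the paper's framework is designed to balance.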

The results indicate that, although synthetic biology is in a relatively early stage of emergence, a set of keywords that define it can be identified and compared, and that there is an optimal set that is positioned between simplistic approaches and more extensive strategies. Although each emerging technology requires customization (and iteration) in the design of identification strategies, we draw out principles of design, operationalization, and validation from this study that will offer insights to efforts to refine search strategies in other emerging domains.

10:30-12:00 Session 13D: Science Careers and Gender
10:30
The Academic Advantage: Gender disparities in patenting
SPEAKER: unknown

ABSTRACT. Innovation is critical to economic development and depends upon the full participation of the scientific workforce. Yet, the field of “innovation studies” demonstrates that there are many disparities in the exploitation of human capacity for innovation. Two well-noted areas are the dearth of academic and female innovators. The response to this lack of innovation in the academic sector has been to stress academic entrepreneurship, which encompasses the varied ways in which faculty at educational institutions engage in innovative and high risk activities which have the potential for financial rewards for the individual or the institution with which they are affiliated. This is most typically operationalized as commercialization of science activities such as patenting, which was heavily promoted following the enactment of the Bayh-Dole Act in 1980 in the United States. Historical studies have shown that the rate of female patenting from 1637 to the mid-20th century failed to exceed 2% of total patenting. Contemporary studies suggest that women may continue to be underrepresented; however, studies on rates of female patenting are largely monodisciplinary, localized, and lack explicit connections to the types of settings where the patenting is conducted.

This study provides a comprehensive analysis of 4.6 million utility patents issued between 1976 and 2013 by the United States Patent and Trademark Office (USPTO), taking into account the type of assignee affiliations as well as inventor gender. This includes 10.8 million inventors and 4.2 million assignees.

Women contributed less than 8% of all inventorships for the entire period (1976-2013) and 10.8% in the most recent year (2013), an increase from 2.7% in 1976. Women are the minority in every technological area; however, significant differences can be seen by setting, with patents owned by universities demonstrating the highest mean female participation (at 11%, compared to less than 8% in firms). This difference has been increasing over time: in 1976, women accounted for about 2-3% of both industry and university patents, but the gap between the two sectors became more apparent in the early 1990s. By 2013, women inventorships accounted for 18% of university-owned patents, compared to 10% for firms and 12% for individuals. Higher proportions of academic female inventorships are apparent across all macro technological areas, with the exception of "Other fields", where patents owned by individuals have a higher share of female inventorships, mainly due to technological areas such as furniture, games, and other consumer goods.

The gender gap in terms of scientific impact has been well documented. However, little is known about the technological impact gap, i.e., how often women's patents are cited in other patents compared with men's. This impact is consistently lower for women's patents, irrespective of the type of assignee. However, the gap is narrower for patents owned by firms and larger for patents owned by the university sector, which suggests that the narrower gap in female participation in academic patents does not translate into a smaller technological impact gap. It has been suggested that, when female inventors are involved, patents tend to have higher diversity in terms of the number of International Patent Classification (IPC) codes assigned. This holds true in our data: female inventors were associated with more IPC codes, irrespective of the type of assignee, suggesting higher interdisciplinarity in female patenting.

The results demonstrate that women's patenting remains lower than would be predicted given their representation in science, technology, engineering, and mathematics fields and professions and their authorship of scientific papers. Furthermore, our study suggests that academic environments may be more conducive to female patenting than corporate or government organizations. The higher degree of female patenting in academe may be due to the less hierarchical organization of academic institutions, which has been shown to be important for building social networks—critical in fostering commercialization activity. The lack of a strong social network has been repeatedly cited as a main reason for the suppressed commercialization activities of women. One way in which universities have responded to this is the creation of Technology Transfer Offices (TTOs), which were established to meet the demands of the Bayh-Dole Act in promoting the commercial exploitation of inventions that result from government-funded research. Strong TTOs have been suggested as another approach to fostering academic entrepreneurship through organizational support and by facilitating the construction of collegial networks. However, the degree to which these offices provide advantages for underrepresented innovators, and take into account the differential needs of some of these populations, is as yet unknown.

10:45
A close look at the work-family conflict in science: Evidence from a scientist survey
SPEAKER: unknown

ABSTRACT. This paper aims to investigate in depth the impacts of family on scientific productivity. The current literature focuses largely on the family-work tension of female scientists or on gender differences (Fox, 2001, 2005, 2008; Fox & Mohapatra, 2007). Notably, male scientists' family lives and their impact on scientific performance are lacking in the literature on gender and science. Male scientists are fighting in the academic pipeline as well. As a recent study revealed, female and male faculty in science are about equally dissatisfied with their lives (17% of female faculty vs. 16% of male faculty) (Ecklund & Lincoln, 2011). In addition, both male and female scientists reported that family somewhat interferes with their work and vice versa (Fox, Fonseca, & Bao, 2011), suggesting that work-family conflict is more than a woman's problem.

As scientific work becomes more competitive, collaborative, and large-scale, scientists are expected to produce constantly. Although scientists in academic jobs have the advantage of flexibility in setting their own schedules and some control over their research scope, studies show that scientists are stressed and tend to work long hours (Jacobs & Winslow, 2004; Wang et al., 2012). The pressure of surviving in academia is high.

Empirical studies have found that work-family conflict among scientists and engineers, operationalized as work-to-family interference and family-to-work interference, is significant for both men and women. In particular, marriage and young children have a strong negative impact on men's family-to-work interference, especially for those working in private universities (Fox et al., 2011). Building on these results, we ask: if family-work interference exists, how does it affect scientific work? Put differently, this paper proposes to explain the mechanisms by which work-family conflict affects scientific productivity.

Thus, the main research question asked in this paper is "Do marital status and number of children affect collaboration and scientific productivity?" Testable hypotheses are listed below. Hypothesis 1: Family has a negative impact on collaboration and scientific productivity. Hypothesis 2: The negative impact of family on collaboration and scientific productivity is larger for untenured scholars than for tenured scholars. Hypothesis 3: The negative effect of family on collaboration and scientific productivity is smaller for scholars at privileged research universities than for those at less-privileged research universities.

Testing these hypotheses requires information from a large sample of scientists spanning fields and institution types. This paper employs survey analysis based on data from a 2010 survey of scientists in the US (collected by the PI and Professor John Walsh). The survey sample consisted of 2,327 valid responses (a 26% response rate).

Despite the disproportionate distribution of male and female scientists, our preliminary results show that male scientists have more co-authors than female scientists. Male scientists at lower ranks (assistant or associate professor) produce more publications from the projects they are involved in than female scientists at the same rank. Science is still a highly male-dominated world (Ceci & Williams, 2011), which is also true in our empirical data, and work-family conflict is likely to interfere with men's scientific lives as well. With this unique dataset of scientists across the US, the findings of this paper can provide insight into the extent to which the conflict interferes with the process and outputs of scientific work.

11:00
Women and STEM entrepreneurship

ABSTRACT. Academic literature and the popular press often lament the dearth of women in STEM fields, particularly in startups. Recent surveys of Google, Yahoo and Facebook employees show that only 15-17% of all tech workers are women. Estimates of women entrepreneurs in high technology fields range from 1-2% in the UK and US (GEDI Gender Index) to 15% in Europe (ECDGEI 2008; Marlow & McAdams 2012). As entrepreneurship is one of the drivers of a country's economic health and technological advancement, and as more and more women graduate with STEM-related degrees, women technology entrepreneurs remain a potentially overlooked resource. This is of particular importance as initiatives around the world seek to support women in STEM careers. A better understanding of the backgrounds, career paths, and start-up trends of successful women who start STEM firms is needed.

Work regarding women entrepreneurs has spanned a wide range of topics, from entrepreneurial aspirations to firm performance (see Jennings and Brush 2013). Research regarding women entrepreneurs in technology fields has primarily looked at the likelihood of starting an academic spin-off. In this setting, fewer women are active in related careers and the social and professional norms are less supportive of women's entrepreneurship (Rosa & Dawson, 2006). However, little work has examined the relative performance of technology firms founded by women entrepreneurs. Furthermore, the findings about the success of female-led technology firms are inconsistent, showing both underperformance and equal performance, depending on the measure of performance. Thus, we do not have a rigorous empirical understanding of the women who do found successful technology companies.

This study examines three questions: 1) Do the founding teams of STEM firms have similar characteristics and trends seen in other fields with regard to gender? 2) Does the gender of STEM firm founders influence the firm’s performance? 3) If so, what are the trends in career trajectory or background that can explain a) women’s participation in founding high technology start-ups and b) differences in firm performance?

Methods: The study uses a database of all nanotechnology firms started before 2008. Nanotechnology firms provide a uniquely appropriate setting for this study since these firms are involved in each facet of STEM. Over 10 industries are represented in the sample. Data are obtained from industry lists, directories, press releases, publications, and web sites related to nanotechnology. Each firm was analyzed to determine whether it was a single-business venture founded to develop, produce, and sell nanotechnology products on the merchant market. Specifically, included firms must have more than 50 percent of their activities, such as products, R&D, or sales, derived from or related to nanotechnology. Nanotechnology-related activities were identified from firms' product, patent, and technology data.

Here, descriptive statistics show the trends in high-technology entrepreneurship. Event history analyses are also used to examine the relationship between gender and firm outcomes.

Findings: Preliminary analyses are being conducted on the data. Table 1 summarizes some of the initial descriptive statistics (omitted here). About 14% of nanotechnology firms started before 2008 had women entrepreneurs on the founding team. Of these, 12% were founded by solo women entrepreneurs, or 2% of the sample. Women tended to found firms with men (86%), but not with other women: only 1.5% of all firms were started with more than one woman on the founding team. In other words, over 98% of these firms had men on the founding team. Additionally, 43% of all firms were started by solo male entrepreneurs, almost half of the firms founded by men.

The closure rate for firms founded by women was much lower than for other firms: 24% compared to 37%. Additional data are being collected regarding other outcomes such as acquisitions, bankruptcies, and fire sales. In terms of backgrounds, preliminary analyses show that over 80% of the women founders previously worked at universities or high technology firms, and 11% were serial entrepreneurs.

Conclusion Although the findings are preliminary, they hold powerful insights into women entrepreneurs in STEM fields. Findings show that while only a minority of nanotechnology firms had a woman on the founding team, almost all had men. In line with previous research, 88% of the firms founded by women had more than one founder, compared to 51% of firms with only male founders, indicating that women are more likely to collaborate during start-up. However, women do not collaborate with other women. Firms started by women were less likely to close than other firms. Further analyses will provide additional insight.

11:15
Influence and Inclusion: Sex homophily in academic STEM networks

ABSTRACT. At universities, department climate is fundamental for faculty productivity and satisfaction. Relationships with colleagues in particular may provide faculty members with the necessary support to thrive in their professional careers (Bilimoria et al., 2006). Inclusion within the department community translates into access to an important network of support for dealing with professional and work-life related issues. Moreover, recognition by colleagues can result in power and influence within the department and in the field, thus fostering self-esteem and motivation.

In science, technology, engineering and mathematics (STEM) disciplines, the sex-related fragmentation of departments has often led to marginalization and isolation of women faculty members. Women scientists face the challenge of integrating into male-dominated work environments and gaining access to networks of power, support, and resources. Several policies at the national and university level have aimed to directly alter this “chilly climate” within STEM departments, leveraging integration among faculty members and equal opportunities for women scientists. Current trends indicate that the number of women scientists at U.S. universities is steadily increasing (NSF, 2013) and research shows that women are increasingly integrating into faculty networks (Ceci et al., 2014; Feeney & Bernal, 2010).

However, there is little empirical evidence as to whether this increased presence of women in STEM departments and STEM networks has resulted in a more inclusive climate for faculty members. While researchers have focused on sex as a significant variable to explain the climate of marginalization and isolation, few studies have investigated what diverse networks mean for organizational outcomes and scientists’ success.  

Networks are the mechanisms through which scientists are able to access the symbolic, information, and material resources that are necessary to gain influence within STEM departments. Networks also provide an invaluable source of support and advice to scientists (for instance, through friendship ties) that may enhance a scientist’s sense of inclusion (Coleman, 2010). Because the sex composition of a group (e.g. predominance of same-sex or cross-sex relationships) strongly influences psychological and social perceptions among co-workers, sex-homophily and heterophily are expected to affect men and women scientists differently (Callister, 2006; Ibarra, 1992). This research investigates the following questions: are sex-heterophily ties advantageous for women and men scientists? How do scientists perceive sex-heterophily ties? How is the composition of networks related to the perceptions of influence and inclusion for both men and women scientists?

 To test our hypotheses, we use data from the “Women in Science and Engineering II: Breaking Through The Reputational Ceiling: Professional Networks As A Determinant Of Advancement, Mobility, And Career Outcomes For Women And Minorities In Stem” (NETWISE II) survey, an NSF-funded study on professional networks for women and minorities in STEM, conducted in 2011 (CO-PIs: Julia Melkers, Eric Welch, Monica Gaughan). The data consist of ego-centric networks of 9,925 U.S. scientists at higher education institutions in four fields: biology, biochemistry, engineering, and mathematics. The results of this research will inform the literatures on sex diversity in STEM networks and university and federal policies aimed at integrating women in STEM fields. 

10:30-12:00 Session 13E: Mapping and Measuring Science
10:30
Scientometrics for Portfolio Assessment at the U.S. Department of Agriculture’s National Institute of Food and Agriculture
SPEAKER: unknown

ABSTRACT. The National Institute of Food and Agriculture (NIFA) is the primary extramural science funding arm of the U.S. Department of Agriculture (USDA). NIFA comprises four institutes that address: 1) plant and animal production systems; 2) human nutrition and food safety; 3) natural resources and environment; and 4) youth, families, and community. The Institute of Food Production and Sustainability (IFPS) considers all aspects of agricultural production.

NIFA’s portfolio assessments aim to evaluate programs within each Institute. The process involves assessing: the state of agricultural science, the Institute’s science programs, funding mechanisms, stakeholder input, priority setting, staff and resources, and overall performance. Previous assessments involved ad hoc accounting of activities and performance. A key recommendation of previous external reviews was that NIFA needed to develop direct, measurable research outputs, productivity tracking, additional effort toward assessment, and methods of impact assessment. The approaches presented in this paper represent our attempt to advance NIFA’s portfolio assessment process. The objectives of our portfolio analysis were to: 1) characterize the portfolio of IFPS science for the past 5 years, 2) document selected outputs of funded projects, and 3) define benchmarks and indicators to assess future progress.

Modeling IFPS-supported science projects. To ascertain the current structure of IFPS science, we modeled the text of 4,719 projects by visualizing the data derived with topic modeling (VOSviewer; Pushgraph, Chalklabs). Topic modeling allowed the data (i.e., project descriptions) to define programs and created visualizations of meaningful categories from text, independent of keywords or preconceived categorical designations. We developed two-dimensional visual outputs, or topic maps, in which terms were clustered based on their overall topic- and word-based similarity to one another. Terms were located such that the distance between two terms gave an indication of the relatedness of the terms.
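The word-based term similarity underlying such maps can be sketched in pure Python; this is a minimal illustration of co-occurrence-based association, not the actual VOSviewer/Pushgraph pipeline, and the project descriptions below are invented stand-ins.

```python
from itertools import combinations
from collections import Counter
import math

# Toy project descriptions standing in for the 4,719 IFPS project texts
projects = [
    "animal disease genetics vaccine",
    "animal genetics reproduction breeding",
    "food safety pathogen detection",
    "animal disease pathogen food safety",
]

docs = [set(p.split()) for p in projects]
freq = Counter(t for d in docs for t in d)   # term document frequencies
cooc = Counter()                             # term-pair co-occurrence counts
for d in docs:
    for a, b in combinations(sorted(d), 2):
        cooc[(a, b)] += 1

def similarity(a, b):
    """Cosine-style association strength between two terms."""
    pair = tuple(sorted((a, b)))
    return cooc.get(pair, 0) / math.sqrt(freq[a] * freq[b])

# Terms with high similarity would be placed close together on the topic map
```

On these toy data, "animal" and "genetics" score far higher than "genetics" and "safety", so they would cluster together, mirroring how the real maps recover programmatic structure from text alone.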

Descriptions derived from topic modeling validated preconceived programmatic categorizations, but also elucidated other important relationships. For example, animal disease, genetics, and reproduction correlated with the animal science unit’s programmatic structure, but we were also able to see how animal science contributed directly to food safety programs in another Institute. Also, the loose connection between animal health and genomics represented a potential gap and opportunity to integrate those areas for greater impact. Using visualizations along with the knowledge and experience of our program staff, we identified scientific opportunities and areas for future research.

Analyzing stakeholder comments. Unlike staff simply reading stakeholders’ comments, topic modeling provided a synthetic, comprehensive, and objective analysis of stakeholder input that not only indicated major stakeholder interest but also how those interests were related to one another. This analysis allowed us to look at input in an organic way, as it was provided in the stakeholders’ own words, rather than by assigning comments to a particular pre-designated program, division, or institute.

Assessing integration of IFPS science. Network analysis (Sci2, Indiana University) provided a means to analyze IFPS-supported projects and their relationships as defined by our science classification system, called Knowledge Areas (KAs). We created a network map of the KAs of currently supported projects to determine their relationships to one another and to identify the most relevant KAs, i.e., those used most often.
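This network step can be sketched in pure Python (in place of Sci2); the KA codes and project taggings below are invented for illustration.

```python
from itertools import combinations
from collections import Counter

# Each project is tagged with one or more Knowledge Areas (codes invented here)
project_kas = [
    {"KA301", "KA302"},
    {"KA301", "KA305"},
    {"KA301", "KA302", "KA307"},
    {"KA305"},
]

usage = Counter()   # how often each KA is used across the portfolio
edges = Counter()   # weighted co-occurrence edges between KAs
for kas in project_kas:
    usage.update(kas)
    for a, b in combinations(sorted(kas), 2):
        edges[(a, b)] += 1

# Most relevant KAs = those used most often; heavy edges = tightly linked KAs
most_relevant = usage.most_common(2)
```

Edge weights then show which KAs are routinely combined within projects, while node frequencies rank the KAs most central to the portfolio.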

Correlating IFPS scientific expertise and supported science fields. The areas of science supported by IFPS were presented on a map of science (UCSD Map of Science, Sci2) and contrasted with IFPS staff expertise on the same map. Direct comparison of funded science with staff expertise elucidated areas of staff expertise shortage and directly influenced hiring decisions.

Examining funding mechanisms. Extramural research is supported primarily through two statutorily defined mechanisms. Capacity funds are administered through a formula process. Competitive funding is awarded based on peer review. Capacity funds have been viewed as providing underlying support that enabled Land Grant Universities to pursue advanced research efforts supported by competitive awards. Preliminary results of our analysis using a knowledge discovery tool (Pushgraph, Chalklabs) suggested that capacity-funded projects have played an important role connecting the “science space” between what appeared to be unconnected competitively funded projects.

In summary, scientometrics are now playing an important role in program management at NIFA. Applications span the realm of science programs, staff, stakeholders, and funding mechanisms.

10:45
Mapping Engineering & Development Excellence in the UK
SPEAKER: unknown

ABSTRACT. The Research Excellence Framework 2014 (REF2014) is a UK government exercise for assessing research quality in university departments. In REF2014, 154 universities made 1,911 submissions to expert panels across 36 subject areas in the sciences, social sciences and humanities. The UK holds research exercises such as REF2014 every 6-8 years. REF2014 was different in that for the first time the impact of research on users, not just other researchers, was assessed through case studies that were scored on their ‘reach and significance’. As development studies scholars who study science and technology, and who have backgrounds either as engineers or as having worked in engineering teams, we have for some time been interested in the role of engineering in development, and specifically the way this has been evolving in recent years. REF2014, in which we also participated in our own subject area, Anthropology & Development Studies, seemed an ideal opportunity to map the status of engineering & development across the UK research spectrum, and to flesh out the conditions for producing excellent research and practice in this field. In REF2014, 6,975 impact case studies were submitted. We reviewed each of these case studies to assess 1) whether it had a developing country focus, either in terms of its content or research partners, 2) whether it could broadly fit, or relate to, five branches of engineering (systems, mechanical, electrical, civil and chemical). From this initial review we identified 83 cases (N.B. this does not include panels 7-13 and 15; the number will likely rise substantially). We then reviewed these cases in detail looking at 1) What kinds of engineering & development are being done, 2) Who is engaged in engineering & development, 3) What are their networks, 4) Who are their collaborators, 5) What is the direction of technical flow – South to North or North to South, and 6) What makes excellent engineering & development?
Our initial findings are that 1) there is an emergent systems, interdisciplinary and stakeholder character to UK engineering & development, 2) this is being supported by research funding mechanisms (such as RCUK impact requirements and specific programmes supportive of developing country research, including BBSRC, EPSRC and ESRC/DfID, RELU and ESPA). 3) The areas of highest concentration of engineering & development research, aside from in engineering groups, are in biology, agriculture, clinical medicine, architecture, planning & the built environment, and geography & environmental studies. 4) There is almost no engineering & development research occurring across large areas of the humanities & social sciences, including in some areas where these would normally be expected, such as Art & Design, and Media, Library & Information Management. An initial mapping of engineering & development excellence in the UK points to the increasing fluidity of disciplinary boundaries and concomitant ambiguity consistent with researchers working across areas of specialisation to address the growing systemic challenges of development and the environment in North and South.

11:00
Testing technology-industry concordance schema using linked micro-level data on patents and firms
SPEAKER: unknown

ABSTRACT. Empirical economic analysis of technological change often turns to patent data for measuring inventive activity as an input into economic production. In linking patent and economic data, researchers often face a tradeoff between coverage and detail: country-level data often provide broad coverage but are not suitable for learning about the industry-level dynamics of knowledge spillovers, a level at which the tension between cooperative and competitive R&D efforts is likely to be most salient. Meanwhile, firm-level ‘micro-data’ provide rich detail on how firm structure relates to inventive activity as revealed through patent databases. But such micro-data often suffer from limited coverage in time and space and thus threaten selection bias when employed for large-scale policy analysis. A number of attempts have therefore been made at ‘meso-scale’ linking of patent and economic data. These efforts usually take the form of ‘technology-industry concordances,’ where official industry classifications in economic data are matched to technology classifications in patent data. This facilitates using patent statistics aggregated to the industry level and then associating them with specific industrial sectors. While a number of such concordances have been developed, we are not aware of any of them having been tested and validated with micro-level data. This paper will provide such a test, using a large, global sample of firm-patent data.

One of the most well-known technology-industry concordances is the Yale Technology Concordance (YTC) (Kortum & Putnam 1997). The YTC system probabilistically matches technologies to industries using a sample of Canadian patents issued between 1978 and 1993, which patent examiners manually classified to industry (using the Canadian Industrial Classification or cSIC). However, a significant drawback of the YTC is the fact that it is based on a sample of patents limited in time and space, and hence is unlikely to effectively represent the dynamically evolving technological landscapes of contemporary industries (much has changed in many industries since 1993). To address these drawbacks, Lybbert and Zolas in a 2014 paper in Research Policy develop an Algorithmic Links with Probabilities (ALP) approach to matching patent technologies and economic industries (using the SITC system). Their ALP approach first uses text-mining techniques applied to patent abstracts and industry descriptions to generate indicators for which industries a particular patent (family) may be associated with. They then use Bayes’ rule to compute weights representing the likelihood that a patent belonging to a given technology (based on IPC code) also belongs to a given industry (based on their text-mining analysis). They are also able to generate the converse of these weights: the probability that a patent belonging to a given industry also belongs to a given technology. Lybbert and Zolas conclude in their paper that the ALP technique produces matches which are qualitatively similar to the YTC in distribution, with the ALP matching more closely approaching the YTC as more patents are processed in the text-mining exercise. However, Lybbert and Zolas also find that their ALP technique is somewhat more prone to producing Type I errors, i.e. suggesting that a patent belongs to a particular industry when it in fact does not. 
Lybbert and Zolas therefore propose a number of ad hoc procedures for reducing the frequency of Type I errors in the ALP technique.
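The Bayes-rule step of the ALP approach can be sketched as relative-frequency conditionals over the text-mining output; the technology codes, industries, and counts below are invented, and the real procedure operates over millions of patent abstracts.

```python
from collections import Counter

# Invented text-mining output: (IPC technology class, matched industry) per patent
matches = [
    ("C12N", "pharma"), ("C12N", "pharma"), ("C12N", "food"),
    ("A01H", "agriculture"), ("A01H", "food"), ("A01H", "agriculture"),
]

joint = Counter(matches)
tech_totals = Counter(t for t, _ in matches)
ind_totals = Counter(i for _, i in matches)

def w_industry_given_tech(tech, industry):
    """ALP-style weight: P(industry | technology) as a relative frequency."""
    return joint[(tech, industry)] / tech_totals[tech]

def w_tech_given_industry(tech, industry):
    """The reverse conditional: P(technology | industry)."""
    return joint[(tech, industry)] / ind_totals[industry]
```

Aggregating such weights over a patent portfolio yields the probabilistic technology-industry concordance tables that the ALP procedure publishes.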

We are able to test the performance of the ALP technique using independent data linking patents to industries (i.e. without text-mining). We use patent-firm matches from the OECD’s Microdatalab, which combines PATSTAT with the Orbis firm-level database. The resulting database contains millions of firms around the world matched to applicants or inventors in PATSTAT and covering a period from 2000 to 2010. In particular, the Orbis database contains firm-level industry classifications (using the NACE system, for which there is a straightforward correspondence with the other systems, including SITC). We are thus able to classify a patent as belonging to a given industry if one of the patent’s applicants is a firm belonging to that industry.

We therefore test the ALP technique using binary probability regression models (e.g. probit or logit) in which the dependent variable is the aforementioned indicator for whether a patent belongs to a given industry, and the explanatory variables are the ALP weights for whether that patent belongs to that industry (which are obtained by looking up the appropriate weights based on the patent’s technology classifications). (Lybbert and Zolas provide public data with the ALP weights generated from their procedure.) In our initial testing we are focusing on biotechnologies (as classified using a family of IPC codes designated by the OECD as falling under this heading) and agricultural technologies.
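The proposed validation can be sketched as a one-feature logit fitted by gradient ascent; this pure-Python sketch stands in for a standard probit/logit package, and the data-generating process below is entirely synthetic.

```python
import math, random

random.seed(0)

# Synthetic data: y = 1 if a patent's applicant firm is in the industry,
# x = the ALP weight for that patent-industry pair (invented generating process)
data = []
for _ in range(500):
    x = random.random()
    y = 1 if random.random() < 1 / (1 + math.exp(-(3.0 * x - 1.5))) else 0
    data.append((x, y))

# Fit logit P(y=1|x) = sigmoid(b0 + b1*x) by gradient ascent on the log-likelihood
b0, b1, lr = 0.0, 0.0, 0.5
for _ in range(1000):
    g0 = g1 = 0.0
    for x, y in data:
        p = 1 / (1 + math.exp(-(b0 + b1 * x)))
        g0 += y - p          # score w.r.t. the intercept
        g1 += (y - p) * x    # score w.r.t. the ALP-weight coefficient
    b0 += lr * g0 / len(data)
    b1 += lr * g1 / len(data)
```

A clearly positive estimate of b1 would indicate that higher ALP weights do predict observed industry membership, which is the substance of the proposed test.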

10:30-12:00 Session 13F: Labs and Systems
10:30
Innovation catch-up: role of state-owned enterprises as national agents for innovation development (case of Russia)
SPEAKER: Ivan Danilin

ABSTRACT. It is widely considered that durable, endogenous innovation development is possible and effective only within a well-developed, highly structured National Innovation System (NIS) framework that supports creativity, inter-actor communication, entrepreneurship, and a large and diversified population of actors. However, in developing economies with a fragmented NIS, where important institutions are in transition or even absent, innovation development appears to be one of the hardest problems to solve. Unlike industrial catch-up development, with its clear goals and well-established instruments (from mobilization of resources and appropriate economic, policy, and management solutions up to long-term planning), innovation development, with all its ambiguities and intangible (i.e. qualitative, numerically unidentifiable) goals, poses a great challenge for government and other policy institutions, as well as for key stakeholders in the business sector, academia, and social institutions. One of the most important issues here is the choice of agents for the economy's innovation transformation, since the government on its own cannot support all the needed activities, whereas new, innovative actors who could drive the change cannot always emerge in a sub-optimal institutional environment and are thus not a reliable resource for decision-makers. In this situation, policy bodies in developing economies assign much of the responsibility for planning, preparing, and executing innovation (and sometimes institutional) policies and strategies to state-owned enterprises (SOEs) or their closest analogues from business groups affiliated with the government. That is, at least in some cases, SOEs evolve into national agents for innovation development.
Several factors make SOEs suited to this role: the huge amounts of resources available to them and/or their simplified access to national resources; their experience (where applicable) in industrial-era catch-up development; their relative controllability by, and accountability to, national authorities, together with the lower transaction costs of their dialogue with public bodies; and, in many cases, the possibility of longer-term planning and lesser dependence on short-to-medium-term market cycles. Finally, we know empirically that various public bodies have proved at least useful in creating a pro-innovation agenda, NIS institutions, and breakthrough innovations. There are, however, also tremendous restrictions on using SOEs as innovation agents, ranging from their relatively low business capability to their mostly hierarchical, top-down decision-making, which contradicts the traditionally acknowledged (Chesbrough, Christensen, etc.) decentralized, bottom-up, and "democratic" nature of innovation development and culture. Given these contradictions, the use of SOEs as innovation agents, and the problems associated with it, should be acknowledged as an important issue for both theoretical and empirical studies of innovation policy and strategy in emerging economies. In the proposed paper, the author analyzes the theoretical issues outlined above and takes recent Russian policies (2010-2015) for national innovation development as a case. The Russian case is especially illustrative because of the public sector's large share of the economy (up to 50% of GDP), the highly differentiated industrial representation of SOEs (from high- to low-tech), and the special federal emphasis on SOEs as innovation agents in the so-called "enforcement for innovation" strategy.
The study presents results of an ongoing research project and is also based on empirical data derived from interviews with several Russian energy-sector SOEs (chosen for their relative openness and the non-defense nature of their businesses). Methodologically, the study applies the principal-agent model and a system-of-systems approach to the research problem. Preliminary results reveal several groups of factors affecting the effectiveness and process of reconstituting SOEs as agents of innovation development. The first concerns monitoring and control, given the principal's complex costs and the vaguely defined goals and targets of national innovation development. A special issue here is information asymmetry on both the principal's and the agent's sides (also due to goals and intermediate results ("gates") that are not always well quantified). Another is the problem of governmental requirements and policy instruments, considering the government's wider socio-economic obligations and the specifics of its dialogue with SOEs as (in Russia's case) key economic actors.

10:45
How did the industrial research laboratory emerge and how has it changed? Revisiting explanations from business history, economic sociology, and research policy.
SPEAKER: David Pithan

ABSTRACT. The current discourse on “open innovation” argues that the inhouse R&D laboratory has been increasingly displaced in the 1980s and 1990s by innovation networks that draw upon distributed competences and resources, and suggests that these flexible research consortia are an adequate answer to the globalized character of business firms in the late 20th and early 21st century (Chesbrough et al. 2006). However, there is no convincing historical evidence about the circumstances under which the “open innovation” model came into being. This paper argues that before we can make informed policy recommendations on the future of industrial R&D, we need more historical and sociological insights about its history, in particular how the industrial R&D laboratory became an established and widely copied model for organizing research for industrial and economic growth. For this purpose, the paper revisits literature from business history, economic sociology and research policy. First, the paper shows that the late 19th century saw the establishment of the industrial laboratory as a new, distinct organizational unit of the business corporation, separated from manufacturing and other divisions. General Electric, Du Pont, AT&T and Kodak are generally considered the forerunners, the first to establish laboratories that were later celebrated widely. Many firms followed suit, which led to the industrial laboratory becoming an accepted and expected part of the company after World War I.

Second, the paper discusses institutional and societal contexts, particularly in the United States, that led to the adoption of R&D laboratories as a mainstream organizational model, including the patent system and changing antitrust legislation in the political-legal context; federal land-grants leading to considerable expansion of higher education, a strongly growing supply of scientifically-trained graduates and an increasing professionalization of academic disciplines in the scientific-educational context; as well as entrepreneurs and prevalent styles of corporate leadership paradigms in the business context.

The picture emerging from this literature review is one of organizational survival: in a changing legislative environment, companies needed strategies for continued growth and sought to consolidate their markets while fending off competitors. One key strategy was setting up facilities for systematic scientific inquiry that are today considered the prototypes of industrial R&D laboratories. In Chandler's (1959, 1977) framework, establishing in-house research capacities is a structural response to the strategy of integration utilized to stabilize and solidify the growing giants. In addition, Fligstein (1990) points to the importance of changing management paradigms that evolve in the intersecting context of politics and business, shaping the company and approaches to business growth and consolidation in the process.

The paper then discusses, third, shortcomings of current explanations about the emergence of the industrial R&D laboratory in the early 20th century and its subsequent transformation in the late 20th century. Informed by recent advances in institutional theory (Barley & Tolbert 1997; Czarniawska & Joerges 1996), the paper develops a theoretical model that explains both the emergence and institutional transformation of industrial research. The paper shows that successful diffusion of the industrial laboratory in the early and mid 20th century was not only made possible by inventors, scientist-entrepreneurs, and a supportive institutional context, but also by a powerful discourse that presented the laboratory as role model for progress in industry. In this regard, it seems most important to separate contemporary actors' and witnesses' accounts and writings on the topic from ex post explanations, particularly in economic sociology and research policy. Methodically separating contemporary discourse and today’s explanations makes it possible to observe how ideas about industrial research organization traveled and how these ideas were transformed on their journeys over time.

Barley, S.; Tolbert, P. (1997): Institutionalization and Structuration: Studying the Links between Action and Institution. In: Organization Studies 18 (1), pp. 93–117.
Chandler Jr., A. (1959): The Beginnings of "Big Business" in American Industry. In: Business History Review 33 (1), pp. 1–31.
Chandler Jr., A. (1977): The Visible Hand: The Managerial Revolution in American Business. Cambridge/London: Harvard University Press.
Chesbrough, H.; Vanhaverbeke, W.; West, J. (eds.) (2006): Open Innovation: Researching a New Paradigm. Oxford: Oxford University Press.
Czarniawska, B.; Joerges, B. (1996): Travels of Ideas. In: Barbara Czarniawska and Guje Sevón (eds.): Translating Organizational Change. Berlin/New York: de Gruyter, pp. 13–48.
Fligstein, N. (1990): The Transformation of Corporate Control. Cambridge/London: Harvard University Press.

11:00
Technology Transfer good practices in a developing country: The case of the Technology Development Unit (UDT) in Chile
SPEAKER: unknown

ABSTRACT. Successful cases of university-industry linkages and technology transfer are not common in Latin America, and in Chile in particular. The case presented in this paper is an exception, focusing on the Technology Development Unit (“Unidad de Desarrollo Tecnológico” or UDT) affiliated with the University of Concepción in Chile.

Chile’s economy has been hailed as an example of success in Latin America: a country working its way out of underdevelopment to join the ranks of developed nations. It has grown at a significant pace, bringing new wealth and welfare to its population. However, this success has been built on export-oriented natural resource-based industries and cost-minimizing business models. In more recent times, the Chilean economy has faced lower productivity rates that have resulted in slower growth. Chilean leaders now believe that the next phase of development must be based on a much greater role of science and technology applied in innovative industries.

In order to enter this phase, authorities have reformed the Science, Technology and Innovation (STI) national system by means of a wide range of new public STI programs and the redefinition of the mission of STI public agencies. In that vein, new Research and Development (R&D) organizations have been promoted and funded aiming to increase scientific and economic productivity. However, Chile still has some way to go to reach these goals. Even though scientific productivity has increased, technology transfer remains weak, with limited impact, despite the government’s efforts to promote technology-based new ventures.

The question arises as to what has made UDT an exception in this context. In this paper, we present a case study of this center to capture the components of its strategy and implementation that might explain its technology transfer capabilities in a system that generally does not produce such organizations.

The mission of UDT is to contribute to Chile’s economic development through scientific knowledge and technological innovation in the field of forest biorefineries and become an internationally recognized scientific, technological, and innovation center in areas related to commercially valuable products derived from biomass. Since its creation in 1996, UDT has worked with more than 265 companies. Its portfolio includes 270 publicly funded projects, and it has applied for 31 patents either in Chile or abroad. It has 118 researchers on staff who work in five topical areas: biomaterials, bioenergy, chemical products, environment, and technology management.

The center is located in the Bío Bío Region in Chile. This part of the country has generally been known for its dependence on natural resources and such export-oriented industries as forestry, fishing, agriculture and coal. It is Chile's second most economically important region, but its economy is not as strong as it was a couple of decades ago.

The barriers and risks UDT faced were significant. First, despite their collaboration, industry and the university did not wholly trust one another. Industry was dubious of the university’s capacity to provide technological solutions, and researchers were not open to transferring their knowledge to local firms. Second, a shortage of professionals trained in management of technology (MOT), with competencies in writing new R&D proposals and dealing with industry’s technological demands, hindered the success of R&D initiatives. Third, the basic research orientation of the R&D public system had pushed the national government to work on a substantial reform that resulted in a redefinition of the mission and goals of science and technology (S&T) public agencies. A set of new publicly funded R&D programs with this orientation added uncertainty to UDT’s position. These factors are common across the continent and are often cited as the reasons for the lack of fruitful university-industry linkages.

However, from the case study, we conclude that several factors supported UDT’s success. The first was the organization’s high-quality leadership. The executive director took over the unit after gaining professional experience in Germany, where he learned the management of applied R&D units, interacting with industry, and technology transfer practices. Second, the organization trained its staff in MOT. Researchers who were regularly in proposal preparation mode to fund UDT’s operation ended up becoming experts in approaching industry and delivering high-quality projects. Third, the university provided needed organizational support. As a part of the university, UDT regularly collaborated with its highly productive researchers, giving the unit a competitive position within the national R&D system. Fourth, in the early 1990s, the national government undertook a reform of the R&D public system that resulted in several new programs in line with UDT’s university–industry philosophy.

11:15
China’s State Key Labs: Building Blocks of China’s Innovation Ecosystem
SPEAKER: unknown

ABSTRACT. China’s state key laboratories (SKL) program emerged thirty years ago, in 1984, as one of the first science and technology initiatives of China’s period of economic reform and opening up. What began as a small effort to build innovation capacity in China has grown into a vast network of labs and research centers spanning scores of high-tech sectors and new and emerging fields. Today, the program includes 289 national-level laboratories in 44 cities and many more institutions. The research performed at these labs forms the core of China’s basic research enterprise and feeds into China’s efforts to build a world-class science and technology capability. However, little research has explored this crucial part of China’s national innovation system (NIS). This paper first outlines the nature of China’s SKLs by mapping their growth, research diversity, and organizational structure. This is done using a unique dataset compiled to include all of China’s SKLs and their associated characteristics. Next, the paper seeks to understand the influence and role of the SKLs within China’s NIS. This is done first by mapping the labs’ relationships to other components of China’s NIS to observe connections, collaborations, and lines of bureaucratic responsibility. Policy statements concerning the SKLs, particularly concerning their role in state S&T plans, are also explored.

13:30-15:00 Session 15A: Science and the Public
13:30
From the Cosmos to the Legislative Chambers – how scientists can inform state policy
SPEAKER: unknown

ABSTRACT. This presentation relates to the overall theme of the conference by discussing how information is used for societal benefit, and highlights a unique, interdisciplinary approach to integrating science and policy at the state level. It also highlights a growing career opportunity for scientists who are exploring careers beyond academia. Science and technology are inextricably interwoven into policy decisions that affect the fabric of society. Whether the issue is complex, such as economic growth, public health, and environmental sustainability, or simple, such as the choice of technology by which we register our votes in an election, policy decisions often have important scientific and technical components. Federal policymakers have a variety of sources of objective, credible and relevant scientific and technical advice, such as the National Academies and AAAS. State decision makers, Governors and legislatures alike, have few if any analogous organizations. The history of efforts to provide S&T advice to State policymakers is largely one of ad hoc and temporary efforts. Yet States are increasingly playing the role of policy laboratories in, e.g., education, health care, technology innovation, energy and environmental policy. In turn, having scientists and engineers with real-world knowledge of the issues policy makers face is equally important. This presentation will explore existing and new efforts to bridge the cultural divide between State policymakers and the scientific and technical communities, focusing on the effort in California. The State of California has one of the largest economies in the world, driven largely by science, technology, and innovation. Yet among elected and appointed officials in its State government, including their staff, there are woefully few with deep backgrounds in science and technology. 
In 2009 the Legislature passed Assembly Bill 573 (Assemblymember Anthony Portantino, D-Pasadena), which created what has become a highly regarded program, the California Council on Science and Technology (CCST) Science and Technology (S&T) Policy Fellowship, which integrates PhD scientists and engineers into the legislative policymaking process. The intent is for lawmakers to have access to impartial experts in science who serve as trusted members of legislative teams to help inform legislation that will successfully support the state’s long-term policy goals and benefit its residents. CCST carefully selects ten PhD-level scientists to serve in a year-long Fellowship in the California Legislature, where they become integral to the legislative teams that address complex issues, bringing their superb research, analytical and problem-solving skills (the “scientific method”) to science and non-science issues alike. They truly become part of the legislative process and, in the five years the program has been underway, they have had a major impact on policy in California. The program is designed to enable Fellows to work hands-on with policymakers in addressing complex scientific issues as well as to assume all the other responsibilities of full-time legislative staffers. The Legislature benefits from the expertise of a trained PhD-level scientist who brings significant analytical, problem-solving, research, and communication skills applied through the lens of the scientific method. The Fellows gain an invaluable, hands-on learning experience at the intersection of science, technology, and policy. To support the Fellows’ transition from academia to the policy world, CCST offers an intensive, month-long training program as well as professional development throughout the year. 
After five years, more than half of the nearly 50 Fellows have chosen policy careers in state and federal government and nonprofit organizations, and others have taken their understanding of policymaking to other professional settings, including academia, industry and federal laboratories. CCST leadership and program staff have learned a great deal during the first five years of the program and have continually adjusted management approaches and program design. Lessons learned include: the program must have support from policy leadership; linking science with policy is new to most funders; Fellow selection is key; training Fellows is critical; and outcome measurement is challenging. It is clear that the Fellowship program has succeeded in bringing strong science and technology advice to the policymaking process, and, as evidenced by the number of former Fellows who have transitioned from scientific research into careers in government and non-profit organizations, it has provided lasting opportunities to blend science and technology with public policy.

13:45
Do it quickly and not worse – a study on the crowdsourcing governance structure for quality control
SPEAKER: Yixin Dai

ABSTRACT. Crowdsourcing emerged as an innovative business model in the early 21st century. By gathering large numbers of online volunteers to fulfill work tasks, crowdsourcing, for the first time in history, brings anonymous crowds into the production process. It has been widely adopted in application areas including information gathering (iSpotnature.org), art design (iStockPhoto.com), innovation (InnoCentive.com), human computing (Amazon Mechanical Turk), and translation (Yeeyan.org).

The open and voluntary setting of crowdsourcing seems to exclude the hierarchical controls that enable efficient and effective implementation in traditional closed-gate innovation, in exchange for a large volume of online contributions. Without these tools, how best to ensure the quality of volunteer production becomes an essential question of crowdsourcing management, at both the theoretical and the practical level.

The existing literature confirms the connection between participation motivation and production quality, indicating that self-motivated participation implies high-quality output. It is worth noting that this argument presupposes a large supply of freelancers and few time constraints. When these two assumptions are challenged, as the crowdsourcing model is applied to time-sensitive production areas (e.g., translation, product design, online diagnosis), a well-designed governance structure is needed to ensure crowdsourcing quantity as well as quality. In this regard, this paper fills the gap by answering three questions: Is a hierarchical structure necessary in crowdsourcing governance? How does this governance structure affect crowdsourcing quality? And how does it affect participation motivation?

To enable a persuasive comparison, this paper observes the largest Chinese translation community and crowdsourcing tasks with and without a quality-control governance structure. In particular, this research differentiates the influence of ex-ante and ex-post control. Three types of crowdsourcing tasks are observed: tasks with ex-ante control (book translation tasks with requirements on translation time and quality); tasks with ex-post control (competitive translation tasks); and tasks without control (daily translation tasks openly published on the website). Within all three types of tasks, data are collected from three relevant groups: managers (company employees), translators (freelancers), and reviewers (both company employees and freelancers).

Through 11 in-depth interviews and over 200 surveys, this research studies 8 different projects conducted on Yeeyan.org in 2013 and finds: (1) Crowdsourcing quality is better in a governance structure with hierarchical control. A two-layered community-based structure is adopted in this case to strike a balance between community flexibility and feedback control. This structure is echoed by other successful crowdsourcing practices in which community leaders and network hubs become successful project leaders who ensure quality control. (2) Different quality control methods function differently. Similar to the governance structure of the open source community, ex-ante control ensures that qualified volunteers work together, following a well-designed schedule and steps, to finish the task as a team. It influences translator selection and project leader selection within the community. Ex-post control functions like a public voting and feedback system, in which learning needs and spiritual kudos are met to encourage further and better participation. Specifically, it can generate intimacy among participants, which inspires higher-quality collaboration in the long run. The two control methods can be adopted separately or jointly.

14:00
Participation dynamics in crowd-based knowledge production: The scope and sustainability of interest-based motivation
SPEAKER: unknown

ABSTRACT. Crowd-based production of scientific knowledge is attracting growing attention from scholars and policy makers. One key premise is that participants who have an intrinsic “interest” in a topic or activity are willing to expend effort at lower pay than in traditional employment relationships. However, it is not clear how strong and sustainable interest is as a source of motivation in crowd-based knowledge production. We draw on research in psychology to discuss important static and dynamic features of interest and derive a number of research questions regarding interest-based effort in crowd-based projects. Among others, we consider the specific versus general nature of interest, highlight the potential role of matching between projects and individuals, and distinguish the intensity of interest at a point in time from the development and sustainability of interest over time. We then examine users’ participation patterns within and across 7 different crowd science projects that are hosted on a shared platform, Zooniverse. The data set includes information on the daily activities of over 100,000 volunteers, resulting in over 32 million person-day observations. A first set of analyses examines the scope of interest-based motivation. These analyses build on prior research suggesting that interest should be conceptualized as the relationship between a person and a particular object (e.g., task, project, topic), rather than as a general trait of the person or a general characteristic of the object. Consistent with the notion that interest is quite specific and that many project-person pairs fail to result in a match, we find that most members of the installed base of users on the platform do not sign up for multiple projects, and most of those who try out a project do not return. Even those individuals who participate in multiple projects appear more likely to choose projects in the same scientific field rather than in different fields. 
Thus, our results suggest that interest-based motivation tends to be quite specific. At the same time, some individuals appear to have an interest that generalizes across topics and fields. Interestingly, controlling for the general time trend, contributors who start with one project and subsequently enter new ones increase their overall level of effort on the platform, although we also observe some crowding-out of effort in the first project. Building on the notion that a given person’s interest in a particular object can develop and change over time, a second set of analyses examined the sustainability of interest. This dynamic analysis shows that interest declines rapidly, with a large majority of the participants who returned to a project (and thus were likely an initial match) dropping out within a few weeks. However, we also observe some contributors whose activity increases over time, especially when we analyze activity at the level of the platform rather than individual projects, thus taking into account switching into additional projects. Individual-level heterogeneity in both initial levels of participation and in the dynamics over time translates into a highly skewed distribution of contributions, with a small share of contributors driving most of the output of projects. Overall, it appears that interest can be a powerful motivator of individuals’ contributions to crowd-based knowledge production, as evidenced by thousands of hours of effort invested in the projects we studied. However, both the scope and the sustainability of this interest appear to be rather limited for the large majority of contributors, with many participating only in a single project and only for a few days. At the same time, some individuals show a strong and more enduring interest to participate both within and across projects, and these contributors are ultimately responsible for much of what crowd science projects are able to accomplish. 
We discuss implications for crowd science organizers as well as policy makers. In addition, we consider how insights from the setting of crowd science may inform our understanding of ongoing changes in the area of “traditional” science, including increasing team size, increasing openness, and a growing role of internet-enabled collaboration.

14:15
PARTICIPATORY TECHNOLOGY ASSESSMENT: PUBLIC VALUES AND RATIONALES ON NASA’S ASTEROID INITIATIVE
SPEAKER: unknown

ABSTRACT. Public support and interest are needed to design an ambitious human spaceflight program. “Taking the public along for the ride” is crucial for the future of space exploration, as emphasized by the Space Studies Board workshop (Smith, 2011). However, it is sometimes difficult to understand what society values, and it is even more difficult to consider the public prior to actually developing a mission. Participatory technology assessment (PTA) is a methodology aimed at understanding public perspectives and values to inform government decision-making (Sclove, 2010). PTA can play a particular role in informing technical decision-making in the early stages of preliminary design. In partnership with NASA, the Expert and Citizen Assessment of Science and Technology (ECAST) network conducted a PTA-based forum on NASA’s Asteroid Initiative.

Two citizen forums were organized by ECAST in Phoenix, Arizona and Boston, Massachusetts in November 2014, with a total of 183 citizens, selected by ECAST to minimize self-selection biases such as enrolling too many space enthusiasts. Citizens were briefed during ECAST informational sessions on NASA’s asteroid initiative and Mars mission, and experts answered the public’s questions via text responses. The goal of the forum was to assess citizens’ values and their preferences regarding NASA’s future mission and technology choices. The participants had structured discussions enabled by a facilitator, with NASA personnel not allowed to interfere with the discussion. This paper analyzes the citizen discussions and deliberations on the two following topics:
• Asteroid Detection: Are citizens satisfied with existing asteroid detection approaches? Whom do they see as best capable of leading detection efforts?
• Planetary Defense: After learning about asteroid mitigation approaches, participants considered a series of scenarios in which asteroids have various likelihoods of hitting the Earth. What levels of risk do people find unacceptable? How ready do they want planetary defense capabilities to be in case of an imminent threat?
This paper will analyze the results of the forum deliberations, including a discussion of the values and perceptions the public holds about asteroids. The analysis will include thematic coding of results, timeline analysis of how the public reasoned about different levels of threat, and an assessment of how opportunity costs were perceived. We conclude with insights on how future PTA should be implemented in terms of scope, topic and participants.

Citations:

Smith, M., Sharing the Adventure with the Public: The Value and Excitement of “Grand Questions” of Space Science and Exploration, Summary of a Workshop, National Research Council of the National Academies, 2011

Sclove, R., Reinventing Technology Assessment: A 21st Century Model, Woodrow Wilson International Center for Scholars: Science and Technology Innovation Program, Washington, DC, 2010. Available at: http://www.loka.org/documents/reinventingtechnologyassessment1.pdf

13:30-15:00 Session 15B: Career Age and Stage in Science
13:30
Emerging Scholars: An Analysis of Strategic S&E Departments in Research Funding Attainment
SPEAKER: unknown

ABSTRACT. In 2012 alone, over $63 billion was invested in US university research and development (R&D) in the fields of science and engineering (S&E). As engines for basic research, universities train the next generation of the S&E workforce. We focus on this population of emerging scholars during their graduate training and examine the effect of academic department conditions on promoting success among early-stage, emerging scholars. We define success by drawing upon data from one of the most prestigious research programs designed for emerging scholars in the US, the National Science Foundation’s (NSF) Graduate Research Fellowship Program (GRFP). This program has a demonstrated history of supporting promising graduate students in NSF-supported S&E disciplines. Applicants are subject to the standard NSF proposal review process, and award recipients receive a competitive three-year fellowship to conduct their own research. As a signal of quality, both GRFP recipients and honorable mentions are publicized. We draw upon this data not only to identify promising emerging scholars, but also to exploit the variation between recipients and honorable mentions.

In this paper we focus on 51 S&E fields from 212 US academic institutions. Our data include the NSF GRFP list of award recipients and honorable mentions and data presented as part of the National Research Council’s (NRC) Data-Based Assessment of Research-Doctorate Programs in the United States. The most recent survey (2005-2006), published in 2010, includes data on a series of measures regarding faculty composition and productivity, characteristics of graduate students, and characteristics of the department. We pay particular attention to a series of peer- and leadership-based network effects and a series of organizational diversity and support effects.

The first component of the analysis draws upon the entire sample of NRC S&E departments to estimate the differential effect on having any GRFP success, defined as a unique university department having at least one student with an award or honorable mention, between 2005 and 2008. Of the 3,084 departments in the NRC sample, 965 had some GRFP activity during this time while 2,119 did not. This binary model is estimated as a logit, probit, and linear probability model, each with and without fixed effects. The second step of the analysis narrows the sample to departments whose graduate students had any GRFP success from 2005-2008. We examine how departmental factors vary on the margin between departments with GRFP activity that contained only students who received honorable mentions (38%) and those with at least one student who received an award (62%). We estimate this model as a logit with fixed effects. Finally, we examine only those departments with at least one award-winning student to investigate what factors are associated with higher counts of awards during this time. In this sample the mean number of GRFP awards is roughly three, with a maximum of 43. We test both a Poisson and a negative binomial distribution. For each step of the analysis we also stratify our sample by department type into four groups: engineering, life sciences, math and physical sciences, and social and behavioral sciences.

For the first component of the analysis, program size and the average number of publications per faculty member have the largest positive effects on any GRFP success: being in a larger program by quartile is associated with approximately a seven-percentage-point increase in the probability of GRFP success, and a one-unit increase in the faculty publication average is associated with an increase of eight percentage points, on average. Having student workspace is also a powerful indicator and is associated with an increase of roughly five percentage points in the probability that a department contains a student who receives a GRFP award or honorable mention. When stratified by field, the social and behavioral sciences and engineering programs drive the benefit of larger program size. Meanwhile, the effect of average faculty publications is strongest for the life sciences and the social and behavioral sciences. The presence of student workspace is a significant indicator only in engineering and math and physical science fields. 

When examining the subset of departments containing at least some students with demonstrated success, in reference to the second component of the analysis, program size and average faculty publications are again the driving indicators of success. However, offering prizes or awards to students for teaching or research and having regular graduate program meetings are negatively associated with students in the department obtaining GRFP awards. Lastly, when examining what factors influence the count of awards within a program, median time to degree and student proposal support emerge as additional positive indicators, and prizes, negatively associated in the previous model, here become a positive indicator.

Overall, program size and average faculty publications are consistent and positive indicators of GRFP success. These analyses examine what components of graduate training impact receipt of a prestigious dissertation award, which proxies for research promise. Understanding the educational environment is not only key to promoting better training of our graduate workforce, but also offers insights into the potential causes leading to early-stage innovative success.
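The binary first-stage model described above can be sketched concretely. The code below is only an illustration: the data are synthetic, and the two covariates (a program-size quartile and an average-publications measure), the coefficient values, and the function name are hypothetical stand-ins, not the NRC/GRFP sample or the authors' estimation code. It fits a logit by Newton-Raphson and reports an average marginal effect, in the percentage-point spirit of the reported results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic departments: intercept, size quartile (1-4), avg. faculty publications.
n = 5000
X = np.column_stack([np.ones(n),
                     rng.integers(1, 5, n).astype(float),
                     rng.normal(3.0, 1.0, n)])
beta_true = np.array([-3.0, 0.4, 0.5])          # hypothetical coefficients
p_true = 1.0 / (1.0 + np.exp(-X @ beta_true))
y = rng.binomial(1, p_true)                      # 1 = "any GRFP success"

def fit_logit(X, y, iters=25):
    """Fit a logit by Newton-Raphson (iteratively reweighted least squares)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)                        # observation weights
        H = X.T @ (X * W[:, None])               # observed information matrix
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    return beta

beta_hat = fit_logit(X, y)

# Average marginal effect of moving up one size quartile on P(success),
# comparable to a percentage-point change after multiplying by 100.
p_hat = 1.0 / (1.0 + np.exp(-X @ beta_hat))
ame_size = np.mean(p_hat * (1.0 - p_hat)) * beta_hat[1]
```

With a large synthetic sample the fitted coefficients sit close to the values that generated the data, which is the sanity check one would run before interpreting marginal effects.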

13:45
The effects of scientific age and overseas experience on research productivity: An analysis of Distinguished Young Scholars in China
SPEAKER: Gangbo Wang

ABSTRACT. The Distinguished Young Scholars (DYS) programme is a funding instrument of the National Natural Science Foundation of China, which provides career development support and funds to scholars below the age of 45. It has become one of the most important funding sources for young scientists in China. The study examines a sample of China-based research scientists working in chemistry who have received the Distinguished Young Scholars award. Employing publication data from the Web of Science database and CV data from websites, this paper studies the differences in the research performance of these scholars before and after receiving the DYS fund. It also examines how factors including overseas experience, academic age and job mobility influence the research performance of these scholars. The analysis provides some interesting findings. First, after receiving the DYS fund, scholars’ research productivity becomes more outstanding. Second, the scientific output of these scholars peaks at ages 38-40, and the research productivity of scholars with overseas experience is higher than that of scholars without. Finally, the paper derives policy implications.

14:00
Age and the Trying Out of New Ideas
SPEAKER: unknown

ABSTRACT. Older scientists are often seen as less open to new ideas than younger scientists. We put this assertion to an empirical test. Using a measure of new ideas derived from the text of nearly all biomedical scientific articles published since 1946, we compare the tendency of younger and older researchers to try out new ideas in their work. We find that papers published in biomedicine by younger researchers are more likely to build on new ideas. Collaboration with a more experienced researcher matters as well. Papers with a young first author and a more experienced last author are more likely to try out newer ideas than papers published by other team configurations. Given the crucial role that the trying out of new ideas plays in the advancement of science, our results buttress the importance of funding scientific work by young researchers.

Full paper available at: www.nber.org/papers/w20920

13:30-15:00 Session 15C: International Collaboration
13:30
Evolutionary convergent patterns of international scientific collaboration
SPEAKER: unknown

ABSTRACT. International scientific collaboration has received much attention from scholars, since it is a main feature of scientific communities across different research fields. Research collaboration can take place at different levels: individual researchers, research teams/labs, departments, universities, sectors and nations. In the economics of science, it is crucial to analyze the collaborative patterns of scientific fields in order to understand their vital characteristics and evolutionary dynamics. Frame and Carpenter (1979) analyzed, using 1973 data, the international collaborative patterns of several scientific fields. Starting from this pioneering work, the purpose of this paper is to investigate the international co-authorship of research institutions using new data (the 1997-2012 period) and to compare the results with earlier studies, in order to detect the main characteristics (regularities) of the basic structure and evolutionary dynamics of different scientific fields over time. This study focuses on institutional collaborations in scientific fields based on article counts from the set of journals covered by the Science Citation Index and Social Sciences Citation Index in the National Science Foundation (2014) data set. Published articles in all scientific fields are classified by co-authorship attribute (total articles with domestic institutions only; total articles with international institutions). These internationally co-authored papers across scientific fields are analyzed for a sample of forty countries that accounted for 97% of worldwide total output in the studied period. This study also considers a sub-set of 11 Western countries in order to provide results comparable with the study by Frame and Carpenter (1979). The methodology computes, for each scientific field i, the total intensity of international co-authorship (ICP_it) over the period 1997-2012. 
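The abstract names the intensity measure ICP_it but does not give its formula. A plausible reconstruction from the article counts it describes, with all symbols being assumptions rather than the authors' notation, is:

```latex
% Intensity of international co-authorship for field i in year t, where
% A^{int}_{it} counts internationally co-authored articles and
% A^{dom}_{it} counts articles with domestic institutions only:
ICP_{it} = \frac{A^{int}_{it}}{A^{dom}_{it} + A^{int}_{it}} \times 100
% One common choice for the standardization applied in key years t:
z_{it} = \frac{ICP_{it} - \overline{ICP}_{t}}{s_{t}}
```

Here \overline{ICP}_t and s_t would be the mean and standard deviation of ICP across fields in year t, which puts fields with very different baseline collaboration rates on a comparable scale.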
The results of this study are compared with the earlier studies by Frame and Carpenter (1979) and Luukkonen et al. (1992). To place them in a comparable framework, all values of ICP_it per research field i in selected key years t are standardized. This study provides insights into the main characteristics of the evolutionary process of research fields through international research collaborations. In particular, the empirical analysis supports two vital findings:

a) despite the unparalleled growth in the intensity of international collaboration across scientific disciplines, the general structure of the evolutionary pathways of research fields has so far remained unchanged; b) there is a convergent process of theoretical and applied research through new pathways of international research collaboration. This convergent process can be driven by two main simultaneous forces: • high growth rates of international research collaboration in some research fields such as medicine, geosciences, psychology, and biological sciences (mainly applied sciences); • low growth rates of international collaboration in other, mainly theoretical, research fields such as mathematics, chemistry and physics.

Potential determinants of this evolution of scientific patterns, based on the different growth rates of international research collaboration in applied and theoretical sciences, may be the increasing interdisciplinarity of current research fields and the very strong impact of emerging disciplines (e.g. nanoscience, nanotechnology, biotechnology, cognitive science, computational biology, biomolecular physics, bioengineering). This ongoing interdisciplinarity of both emerging and traditional scientific fields, associated with technological advances, tends to induce a convergence between applied and theoretical research fields in their intensities of international research collaboration. The results of this study show that the evolution of applied and theoretical sciences is marked by a convergence of the intensities of international research collaboration across research fields, which may be due to a strategic shift to take advantage of the important opportunities that interdisciplinary approaches offer for solving the increasingly complex problems facing modern societies and economies.

13:45
How to Measure the Level of S&T International Cooperation
SPEAKER: Jinwon Kang

ABSTRACT. As the importance of globalization increases, measuring the level of S&T globalization is useful for establishing science, technology and innovation policy. There are many indicators of the level of globalization, such as R&D funding from abroad, the number of triadic patent families, international co-operation in science, and the technology balance of payments. But few indexes integrate these diverse indicators related to S&T globalization. The objective of this research is to propose a globalization index showing the extent of S&T globalization at the national level, and to suggest a way to measure the extent of S&T globalization with respect to the national innovation system.

In this paper, the scope of globalization is restricted to S&T international cooperation, covering resources, networking, performance and related infrastructure. Accordingly, globalization indicators can be classified into the mobility of R&D resources (capital and human resources), R&D cooperation networking, R&D cooperation performance and R&D cooperation conditions.

The indicator framework gives insight into finding the optimal level of globalization for national innovation systems. The main indicators are selected with regard to comprehensiveness, utilization and international comparability. In the case of capital mobility, this is narrowed down to R&D over FDI. The mobility of human resources covers inbound and outbound student mobility (per 1,000 inhabitants) and foreign recipients of US S&E doctorates by country. The number of doctorate holders is used as a supplement for worker mobility, owing to the absence of data. Networking is measured by the results of joint activities, in terms of the ratio of co-patenting according to inventors' country of residence and the ratio of international collaboration on S&E articles, by selected country. The performance component is classified into papers, patents and technology trade. Cooperation conditions consist of attractiveness and the share of foreign enterprises. Living conditions and the research environment should be considered in the future, as the measurability of related data improves. Before combining the indicators into an index, the relation between globalization and the NIS is investigated to facilitate finding the optimal level of globalization; there has been some discussion of the relation between globalization and the national innovation system. Normalizing and weighting the indicators then yields the globalization index. The normalized values are obtained as ratios over each indicator's maximum value. The normalized values for Korea indicate that its levels of globalization are generally low, and that its indicators of FDI and technology flows are at the worst level in comparison with those of the best-performing countries.

To construct the globalization index, the indicators are combined using weighting factors. In this process, the weighting factors are fitted so as to maximize national innovation capacity. The index can provide information about the optimal level of globalization in terms of the mobility of resources, networking, performance and conditions. This research focused on building up the indicator frame and selecting the main indicators, which were then normalized and combined with weighting factors in view of national innovation capacity. The index gives policy makers insight for establishing STI policy by indicating the proper level of S&T globalization and comparing it with that of other countries.
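The normalization and weighted-combination steps described above can be sketched as follows. This is a minimal illustration only: the indicator names, country values and weights are hypothetical assumptions for demonstration, not data or weights from the study (which fits weights against national innovation capacity).

```python
# Sketch of the normalization and weighting steps described in the abstract.
# All indicator names, values, and weights below are hypothetical examples.

def normalize(values):
    """Normalize each country's value as the ratio over the indicator's maximum."""
    peak = max(values.values())
    return {country: v / peak for country, v in values.items()}

def globalization_index(indicators, weights):
    """Combine normalized indicators into a weighted index per country."""
    normalized = {name: normalize(vals) for name, vals in indicators.items()}
    countries = next(iter(indicators.values())).keys()
    return {
        c: sum(weights[name] * normalized[name][c] for name in indicators)
        for c in countries
    }

# Hypothetical data: two indicators for three countries.
indicators = {
    "rd_share_of_fdi": {"Korea": 1.2, "US": 4.8, "UK": 3.6},
    "student_mobility": {"Korea": 8.0, "US": 16.0, "UK": 20.0},
}
weights = {"rd_share_of_fdi": 0.5, "student_mobility": 0.5}

index = globalization_index(indicators, weights)
```

Under these illustrative numbers, each country's score is a weighted average of its max-normalized indicator values, so the "best" country on every indicator would score 1.0.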

14:00
Spatial dynamics of international collaboration in science
SPEAKER: unknown

ABSTRACT. Scientific collaboration has been widely acknowledged to be efficient in managing time and labour in research labs (Coccia, 2014; Solla Price and Beaver, 1966), to improve research quality (Presser, 1980; Narin et al., 1991; Katz and Hicks, 1997) and to spur breakthroughs in scientific research that support competitiveness (Coccia, 2012). Along with the increase in international scientific collaborations, scholars and policy makers need a better understanding of the structure and evolutionary pattern of the global research network across geo-economic areas.

The high heterogeneity across countries – in terms of size, the scientific capacity of the national innovation system, etc. – generates a variety of patterns of international research collaboration (Melin, 1999; Narin et al., 1991; Ozcan and Islam, 2014). A main issue in the economics of science is to determine how and to what extent countries engage in international research collaborations, in order to understand the behaviour of knowledge flows and to design research policies that improve scientific research production and thereby enhance national competitiveness.

The purpose of this research is to investigate the evolutionary pattern of international research collaborations across countries. Special emphasis is placed on two complementary collaboration typologies: intra- and inter-collaborations. The former refers to research collaborations conducted by countries within their geographical area (e.g. among countries within the European area); the latter refers to research collaborations between countries from different geographical areas (e.g. a European country with an Asian one). Higher intra-collaboration among countries indicates that scientific cooperation is increasingly bounded within certain geographical territories, while higher inter-collaboration signals the fading of geographical limits.

The main research questions of this paper are:
• How does the distribution of international collaborations across countries evolve over time?
• What type of research collaboration (inter- or intra-) plays a more important role in re-shaping the global collaborative scientific network across geo-economic areas?

The data for this study are collected from publications in academic journals covered by the Science Citation Index and Social Sciences Citation Index. In particular, this study draws on a dataset from the National Science Foundation (2014) National Center for Science and Engineering Statistics, based on special tabulations from Thomson Reuters (2013) of the SCI and SSCI. Collaboration data cover two years, 1997 and 2012, and 40 countries, which together produce about 97% of all articles worldwide over 1997-2012. The 40 countries are classified into eight geographical areas: North America, South America, European Union, Other Europe, Middle East, Africa, Asia and Australia/Oceania.

The method of research is based on four main steps. First, to analyze the worldwide distribution of international collaboration, this study uses the Lorenz curve and the Gini coefficient. Second, to map the research connections between countries, both absolute collaboration output (number of articles) and collaboration intensity (similarity) are considered; this study applies the Salton and Jaccard indexes, which are reliable metrics of collaboration intensity. Third, from a dynamic perspective, this study applies network analysis to explore the structure of international collaborations and its changes from 1997 to 2012, focusing in particular on intra- and inter-scientific ties across countries within the global research network. Fourth, the spatial pattern, in particular the correlation between collaboration and spatial distance, is further examined with the Mantel test and Mantel correlograms.
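The measures named in the first two steps can be sketched as follows. This is a hedged illustration, not the authors' code: the paper counts are hypothetical, and the formulas are the standard definitions (Salton's cosine measure, the Jaccard index, and the Gini coefficient over an ordered distribution).

```python
import math

def salton_index(c_ij, c_i, c_j):
    """Salton (cosine) index: co-authored papers over the geometric
    mean of the two countries' paper counts."""
    return c_ij / math.sqrt(c_i * c_j)

def jaccard_index(c_ij, c_i, c_j):
    """Jaccard index: co-authored papers over the union of the two
    countries' paper counts."""
    return c_ij / (c_i + c_j - c_ij)

def gini(values):
    """Gini coefficient of a distribution of positive counts
    (0 = perfect equality, (n-1)/n = maximal concentration)."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    # Standard formula based on the ordered cumulative distribution.
    return (2 * sum((i + 1) * x for i, x in enumerate(xs)) / (n * total)
            - (n + 1) / n)

# Hypothetical counts: 50 co-authored papers between two countries
# publishing 400 and 900 papers respectively.
s = salton_index(50, 400, 900)   # 50 / 600
j = jaccard_index(50, 400, 900)  # 50 / 1250
```

In a study like this one, the Gini coefficient would be computed over each country's share of internationally co-authored papers, and the Salton/Jaccard intensities over every country pair to weight the network's edges.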

The main lessons of this research can be synthesized as follows. First, the distribution among the 40 countries under study is increasingly balanced; nevertheless, it is worth noting that the distribution of total publications is more divergent than that of internationally co-authored papers. Second, in the evolution of international research collaborations, results show a significant difference between intra- and inter-collaborations: in all geographical areas except the European Union, intra-collaboration interrelationships exhibited a steady-state pattern, whereas scientific inter-collaborations in the global research network have risen dramatically. Third, from a dynamic point of view, the comparison of the 1997 and 2012 research networks shows that inter-collaborations (between countries belonging to different geographical areas) grew significantly in the later years, whereas the connection strength between major intra-collaborative partners stayed mostly unchanged. Finally, our results show that the change in international collaboration structure is not driven by proximity; the correlation between changes in collaboration intensity and proximity, if any, exists only in certain distance classes.


14:15
Bilateral and multilateral coauthorship and citation impact: international collaboration in the Fourth Age of Research
SPEAKER: unknown

ABSTRACT. Contrary to many assumptions, international research collaboration at both the national and institutional level mostly involves only one or two partners, usually among other research-intensive economies and organisations. For the USA and UK, we show that highly multinational research is growing but remains very scarce (<1% of total output). The 'citation bonus' that international collaboration contributes is specific, limited and needs to be interpreted with some care. This analysis does not contradict prior work on the emerging Fourth Age of internationally collaborative research, but it adds nuance to the interpretation of the ways international partners do or do not add citation impact to collaborative work, and of the limits to such additionality. Gains look different for the same country when seen in two-country and multi-country collaborations. While benefit does increase with partner number, it plateaus at seven co-authoring countries. This has implications for policy and management. We also suggest that highly multinational papers are best excluded from routine citation analysis, which they distort.

The four most prolific research-publishing countries are the US, China, the UK and Germany. Since these four published 5.4 million of the 10.7 million papers globally indexed in Thomson Reuters Web of Science during 2002-2011 (i.e. around half the recent global total), it is reasonable to assume that their collaborative publication activity reflects, or even dominates, the overall global network. We analysed the relative numbers of purely bilateral, trilateral and quadrilateral papers for this leading group. Note that the UK is evidently more internationally collaborative than the US. But, as has been observed, collaboration between Los Angeles and Cambridge, Massachusetts involves a greater geographical distance (around 3,000 miles) than between Cambridge, England and Istanbul (less than 2,000 miles).
These data do not inform us directly about any cultural predisposition to collaborate. The summary matrix of total and bilateral collaboration between the five countries confirms that the USA is the most frequent international partner for the others. Beyond the G7, the US partners increasingly often with China, with which its 2011 annual total of joint papers was five times higher than in 2002. In 2010, for the first time, the US published more papers with a China co-author than with a UK co-author. Bilateral collaboration is rising, as studies elsewhere demonstrate. The deconstructed 2002-2011 data for trilateral and quadrilateral collaborative papers provide a balancing illustration, however, reflecting the diverse underlying spread of trilateral papers and the rarity of papers with four or more countries. For example, almost 14% of UK papers have a US co-author and 7% have a German co-author, but less than 2% have co-authors from both. Trilateral papers with either of these and China make up less than 0.5%. Quadrilateral papers with these, the largest research economies, are 0.2% of UK output. UK bilateral papers with the US have a clear citation gain over those with France and Germany, but UK-China papers remain at or around world average citation impact (1.0). However, if there is any citation gain from year to year with single partners (which might be inferred for France and Germany), it is functionally small: the average impact of UK-Germany and UK-France papers differs little from the overall UK average (around 1.45, against a world average of 1.0). It is among the relatively small number of papers with many partners that a pattern emerges which may explain both the effect of multinational co-authorship and its limitations. It becomes evident that for each additional partner country up to six (i.e. a total of seven collaborating countries) there is a gain in average normalised citation impact, until the average reaches 4.0.
Papers with 8-20 co-authoring countries show no further gain. UK papers that have 20 or more collaborating countries have, if treated as a single group, a higher average citation impact but there is no statistical relationship between impact and authorship: the pattern is chaotic. Some papers with 20-30 partner countries have citation impact in excess of 20 times world average while some with 40 collaborating countries have far less. This analysis has convinced us that papers with more than 20 collaborating countries should be regarded as a different category or mode of publication. They are very few in number, quite concentrated in disciplinary diversity but sometimes have an exceptional citation impact. Their scarcity makes them difficult to analyse further since they are the outliers in any dataset but if they are part of a sample then they may have a strongly skewing effect on any indicators based on average citation counts or average citation impact.

13:30-15:00 Session 15D: Evaluation of Broader Impacts
13:30
Altmetrics: Tools for Measuring Impact or Enabling Serendipity?

ABSTRACT. This paper addresses the issue of developing metrics for assessing the broader societal impacts of research. Key questions include whether quantitative metrics or peer review are better tools for measuring broader impacts; whether scholarly impact is distinct from broader societal impact, and whether each needs its own assessment tool; whether altmetrics are currently designed to measure broader societal impacts; and whether altmetrics might be designed to encourage serendipity.

The paper begins with a consideration of the approach to applied ethics known as ‘Principlism’. Principlism was developed by Beauchamp and Childress specifically in order to provide a framework for decision making in a bioethical context. Importantly, perhaps, Principlism was not developed as an overarching ethical theory; it was designed with a particular use in mind. Beauchamp and Childress designated four principles for bioethics:

1. Autonomy
2. Non-maleficence
3. Beneficence
4. Justice

The principles were meant to provide a framework, based in common morality, to allow us to justify more specific policies and rules for biomedical human-subjects research.

Principlism was subject to two main, yet roughly opposite, criticisms: adherents of Impartial Rule Theory argued that the principles were insufficient to provide enough guidance to decision making; adherents of Casuistry argued that beginning with principles was akin to imposing a prejudgment on the specific case. The difference between the criticisms of the Casuists and the Impartial Rule Theorists really boils down to how determinate judgment ought to be. For the Impartial Rule Theorists, we need to specify principles to the level of rules, at least, in order to guide decision making. For the Casuists, that would overdetermine our judgment of cases, which really ought to rest on reflective judgment (rule-seeking rather than rule-following). Put differently, the Impartial Rule Theorists claim that principles underdetermine our judgments and the Casuists claim that principles overdetermine our judgments.

There are two main 'mechanisms' for establishing impact: 1) peer review and 2) quantitative metrics. Critics of peer review argue that it underdetermines our judgments of impact: it is too subjective, biased, and not well-suited to judge impact (no experts, hence no peers). Critics of metrics argue that they overdetermine our judgments of impact: they are too 'objective', lack nuance, and are not well-suited to judge impact (they sneak in judgments, which are black-boxed and made available for use by those who lack judgment). This polarizes the debate about how to judge impact and increases our focus on the means, rather than the ends, of impact. That is a mistake. Some, e.g., Pielke and Byerly (1998), argue that peer review should be used to judge scholarly impact and metrics should be used to judge broader impacts. Some in the altmetrics community have run with this idea (though they might favor bibliometrics for judging scholarly impact and ditching peer review altogether).

The peer review vs. metrics and scholarly vs. broader impacts groups tend to see two totally different kinds of impact: scholarly and broader. Another view is that, although there may be a difference in audience, scholarly works can have broader impacts; and ‘broader impacts’ activities, such as education and outreach (broadly construed), can inform research.

To resolve these disputes, I argue that we should focus on the purpose of judging impact. This is where serendipity comes in. If one sees peer review as a guardian of academic autonomy from society (since accountability is determined relative to our peers), one might be tempted to add a demand for societal metrics (as do Pielke and Byerly) to balance out autonomy with accountability to society. This model works if we see two different types of impact. But what if we see one sort of impact? I think that the idea of serendipity (finding something useful when looking for something else) might allow us to design tools to produce impact – high quality research that also has an impact on audiences beyond our disciplinary peers.

References

Pielke, R., & Byerly, R. (1998). Beyond basic and applied. Physics Today, 51(2), 42-46.

13:45
Mechanisms of Societal Impact of Publicly Funded Research: Models and Evidence
SPEAKER: Juan Rogers

ABSTRACT. Motivation Public funding for scientific research is now in an “era of accountability.” Government budgets for research that is not directly oriented to specific government missions are either not growing or even shrinking. Competition for scarce resources is fierce and scrutiny of proposed research for its added value beyond the potential contribution to the stock of knowledge is growing in intensity. Funding agencies have institutionalized the criterion of “broader impacts,” that is, benefits of the proposed research of a societal nature beyond advancement of the field and education of future researchers. The increased scrutiny of proposed research for its potential societal benefits is not accompanied by much greater understanding of mechanisms by which research produced in an academic setting might lead to them. The implicit assumptions, in spite of frequent, and often dismissive, protestations to the contrary, are analogous to the maligned “linear model of innovation.” Without an assumption that knowledge produced in research contains all of the capacity to produce said benefits in itself, it is not rational to require that proposals predict them. Having said this, though, it is not productive to reject the expectation that publicly funded research somehow leads to societal benefits. A fundamental pillar of the legitimacy of funding academic research with public funds rests on this expectation.

Analysis This paper will review the literature to find and classify the available models of mechanisms by which research is deemed to produce societal benefits, or “broader impacts,” beyond the scholarly community. The review will include the scholarly literature to determine the state-of-the-art on mechanisms of research impact. This literature is expected to contain several approaches to the phenomena by which research impact occurs. The review will also include key documents in the “grey” literature, such as agency reports, special committee reports and position documents that propose measures to enhance the realization of value from research. Key exponents of this literature are available from the funding agencies and government document repositories. The objectives and standards for these publications are different from scholarly articles. However, their influence is very significant since they often inform directly the rules that govern the submission and review process for research proposals. The main categories by which these approaches and models may be classified will be determined and basic typologies of models will be developed. One such category distinguishes explicit and implicit models. Explicit models are those articulated on the basis of analysis of evidence or offered as a hypothesis on the mechanisms by which research may lead to observable impacts. Implicit models are those found in expressions about impact that rely on assumptions about the means by which such impact might have occurred but are not articulated on the basis of reported evidence. Both sorts of models are important since the former are the result of existing scholarship on the matter and the latter are inherent in widely held beliefs about how impact from research happens. The main assumptions leading to implicit models will be identified and classified. A second criterion will be whether models are derived from systematically gathered and analyzed empirical knowledge or not. 
This is a critical criterion to assess the contents of articles and reports. The discourse on research impacts is vast and diverse but much of it does not rest on systematic research on the phenomena that underlie the assumed mechanisms by which these impacts occur. As a result, it is not clear how much is actually known about the mechanisms and processes of impact. A third criterion has to do with the dependency of the content of the model on a specific field of scientific activity. One would expect the mechanisms by which research leads to societal benefits in health, for example, to be different from computer science. A general working hypothesis for this review is that the arguments brought forth to support the introduction of criteria to assess grant proposals on their potential to produce broader impacts do not reflect the available empirical knowledge of research impact mechanisms. The main areas in which we expect to find this are in recognizing the differences in mechanisms of impact across fields of research and in the degree to which it is reasonable to expect researchers to make predictions on such impacts.

Findings The main outcome of this task will be a list of models, a typology to classify them and an analysis of the phenomena of research impact that the models address as well as the gaps in the phenomena that the array of approaches seems not to cover. It will be accompanied by an assessment of the evidence that is brought to bear on each class of models and their plausibility as actual accounts of research impact mechanisms.

14:00
Philosophical Issues Surrounding ‘Impact’

ABSTRACT. Demands that publicly funded scientific research demonstrate its larger societal relevance have become a political commonplace. In the US, for instance, talk of ‘broader impacts’ at the National Science Foundation appeared in 1997, when the agency changed its criteria for the ex ante review of the approximately 50,000 proposals it receives each year.

The US may have gotten an early start on the development of an accountability culture, but the US now arguably lags in theory of impact assessment. The US has nothing similar to Britain’s 2014 Research Excellence Framework (REF). Of course, on some accounts the REF is no triumph: former NSF Program Director Julia Lane rejects the REF’s reliance on ‘stories’, i.e., case studies, favoring instead employment counts collated by a database called STAR Metrics (Jump 2015).

Nonetheless, an increased focus on impact is common across all nations. But in the pell-mell pursuit of impact we have neglected to do some first order thinking on what precisely we mean by the term. Underlying our accountability culture’s focus on increasing impact is a simple set of assumptions: impact = good, great impact = better. It is time that we stand back and review the concept. For once considered, the pursuit of impact raises as many problems as it seems to solve. In response, this talk offers an epistemology and an ethics of impact.

This critique will be framed in a number of ways. To begin with, it will raise the question of harmful impacts—what are sometimes called grimpacts. A moment's reflection is enough to show the vacuity of the notion that impacts are always beneficial. Take the case of the natural environment: it is clear that in any number of cases (climate change, the loss of biodiversity) humanity is having impacts that are both too numerous and too severe. In the future, progress in the environmental realm will often consist of lessening, eliminating, or even reversing our impact. This raises the possibility of pursuing the goal of what might be called negative impact, where the anticipated impacts of research consist of removing previous impacts.

Moreover, environmental examples like these highlight the implicit monism underlying our talk of impact. Discussions of impact have assumed that the plurality of possible impacts (economic, social, environmental, and cultural; see Donovan's (2008) 'quadruple bottom line') all somehow end up pointing in the same positive direction. There is little discussion of the fact that a 'positive' economic impact may at the very same time be a 'negative' social or environmental impact. For instance, continued economic development often comes at a cost to the environment; the development of driverless cars threatens the livelihood of the 6 million people in the US who drive vehicles for a living; and China's economic 'progress' has come at the cost of incredible air pollution.

There is also a Marxist point to be made concerning the notion of impact. Impact and its close kin 'innovation' embody the assumption, basic to a capitalist system, of the desirability and indeed the necessity of infinite progress and infinite growth. A 'healthy' economy is one that grows at 3% per annum; 2% growth is 'anemic' and a steady-state economy is considered a disaster. Thus the advocates of increased impact ('impactors'?) are at least implicitly cornucopian: like Julian Simon, they must embrace the notion that human creativity is the ultimate resource that can trump whatever material limitations we run into. Similar are proactionaries like Steve Fuller (e.g., Fuller 2011), who advocate striving for maximum impact of technoscientific development, figuring we can clean up any possible mess later.

This argument will also explore the question of what should constitute 'impact' in the humanities. It is possible to argue that the humanities should not be seen as being about 'impact' at all, but rather something more like 'affect'. By affect I mean a shift from the Newtonian biases of 'impact', where something whacks into something else like a car crash, toward more subtle quality-of-life indices such as personal satisfaction and belongingness. The humanities could then be seen as challenging the tacit economism of impact-talk: learning to appreciate a Picasso painting, a Keats poem, or a walk in the woods is about expanding your imagination and sensitivity to life rather than having an impact on something or someone else. This also suggests that it may be worthwhile to explore the 'happiness' literature within sociology (e.g., Lane 2000) on the futility of increased income/economic growth as a possible check on a simplistically economistic way of valuing life.

The goal of this talk will be to move from simple-minded (and destructive) talk of impact to other wider, more subtle, and less presumptive notions such as effect and affect.

13:30-15:00 Session 15E: Responsible Innovation #2
13:30
Principles to frame Responsible Innovation
SPEAKER: Rider Foley

ABSTRACT. Promoters of emerging technologies often promise clean water, reduced emissions, or food security. Despite the promises, evidence suggests that technological innovation over-emphasizes economization. Responsible innovation (RI) is proposed as an integrated set of normative processes to guide technological innovation. However, scholars often allow normative questions to remain unspoken. I argue for explicit normative objectives as a set of targets for RI. A case study with empirical evidence is used to inform this framework.

RI is becoming recognized as a conceptual and practical framing that attempts to overcome the limitations of reactive governance mechanisms. It calls for distributing responsibilities among the private, government, academic, and non-profit stakeholders engaged in shaping emerging technologies. RI conceptualizes a realignment of responsibilities to encourage shared accountability of science with society (Owen et al 2012). Stilgoe et al. (2013) argue that four capacities are requisite for innovation: anticipation, reflexivity, integration, and responsiveness. Despite strong assertions for those normative processes, Stilgoe et al (2013) and Owen et al (2013) sidestep the question 'to what end?' Von Schomberg, on the other hand, calls for a normative target: "acceptability, sustainability, and societal desirability" (2013, 63). Von Schomberg argues that the actors engaged in innovation should be responsible for articulating these targets. This delegation of responsibility for defining the normative objective burdens actors (at best) or affords an opportunity for powerful, self-interested parties to commandeer the process to effect negative outcomes (at worst). In a way, an implicit assumption that good processes always lead to good outcomes prevails. History, alas, offers many examples of just means leading to unjust outcomes.

The task of articulating principles as objectives to guide actors toward a shared goal is often disregarded. This presentation offers principles for RI, organized as five process principles and four principles as objectives. Process principles concern the activities required to accomplish the objectives and synthesize prominent research. Principles as objectives take up sustainability as a globally recognized principle (WCED 1987) and apply it to innovation to answer the question 'to what end?' They are intended to orient governance efforts that aspire to be "responsible." The process principles are:
1) Anticipation: exploring plausible futures that consider coupled systems to enhance reflection and decision-making capacity.
2) Engagement: interacting in a manner that affords mutual benefit to diverse stakeholders to enhance knowledge sharing before and during decision-making.
3) Reflexivity: inquiring into one's actions and the underlying beliefs and assumptions.
4) Adaptive: aligning with responsiveness, this principle draws from the socio-ecological systems literature on adaptive governance.
5) Collaborative: arranging the organizations and individuals involved (or excluded) to define institutions (formal and informal rules) for innovation in decentralized networks.

The objectives are:
1. Socio-ecological viability and integrity: ensuring the stewardship of planetary resources demands that innovation offer a means to enhance resource and energy efficiency and not detrimentally impact socio-ecological systems.
2. Human flourishing: freedom of expression, freedom from oppression, and equality need to guide technological innovation, which must not reinforce social orders that subjugate human beings through oppressive activities.
3. Livelihood opportunities: affording equitable prospects for investment in entrepreneurial ventures and education programs to all persons.
4. Inter- and intra-generational equity: demanding that socio-ecological viability, human flourishing, and livelihood opportunities be considered across scales, i.e. between regions and sectors.

The case study offers an assessment of the degree to which contemporary nanotechnology innovation and governance in Phoenix align with the criteria articulated above. Although the principles were unfamiliar to respondents, the results demonstrate the usefulness of the framework for assessing contemporary governance regimes relative to RI. The results highlight how current governance paradigms are highly attuned to commercial values (i.e. livelihood opportunity) above other normative values. A myopic focus on economic value often entails trade-offs with public values (Sarewitz & Bozeman, 2011), including social equity (Cozzens 2011) and planetary integrity (Rockström et al 2009).

Proponents of RI often avoid normative questions about motivating objectives. I argue that the dance among science, technology, and innovation would benefit from choreography inclusive of normative objectives. Linking processes to normative objectives for RI offers one step toward restoring this alignment.

13:45
An Intervention Research Approach to Responsible Innovation

ABSTRACT. One vision for how scientific and technological advancement might be better linked to societal progress is through the practice of responsible innovation (von Schomberg, 2013). Responsible innovation scholars call for processes to enhance anticipation, reflection, inclusive deliberation, and responsiveness in science and technology research and practice (Owen et al., 2013). The vision for responsible innovation draws from well-theorized bodies of work, including risk governance, anticipatory governance, science policy, and technology assessment, among others.

While the responsible innovation literature has developed several visions of what innovation practices could look like, and while there is a growing body of research delving into the outcomes of current innovation governance arrangements (National Research Council 2014), much less is known about how targeted efforts to alter innovation governance affect the outcomes of science and technology innovation (Jaffe 2006). The launch of the science of science policy initiative in the US, with its calls for more systematic understanding of how targeted policy efforts change the management and outcomes of science and innovation (Jaffe 2006), represents one major effort to address this knowledge gap. Much of the research in this field focuses on describing current and past processes rather than on introducing alternative processes to test hypotheses about different approaches to the governance of science and innovation.

Three lines of research focus on the question of how to shift innovation systems from the status quo: midstream modulation, transition management, and risk governance. Midstream modulation focuses on building the reflexive capacity of laboratory scientists and engineers (Fisher et al., 2006). These activities lay a critical foundation for future reform, but are at present small-scale and not aligned with normative societal objectives. Transition management focuses on long-term, large-scale socio-technical system change (Loorbach, 2010), and combines top-down and bottom-up approaches. Transition management success stories, however, are drawn almost exclusively from smaller, relatively homogeneous nations with centralized control (Lawhon & Murphy, 2012), lending these cases limited generalizability. Risk governance advocates intervention through changes to formal rules and regulations (Kimbrell, 2009). This strong command-and-control emphasis does not fully consider other ways of dealing with the social costs (Coase, 1960) of technology development or the challenges of regulation in a rent-seeking society (Joskow & Rose, 1989). The intervention research approach I propose addresses these gaps by explicitly accounting for the normative and systemic dimensions of innovation system interventions.

Intervention research articulates the need for, and ways to design, implement, adapt, and evaluate programs of intentional change (Fraser 2010). A unique contribution of intervention research is its systematic study of the effects of programs, policies, and practices designed for intended outcomes. Intervention for responsible innovation benefits from synthesis across knowledge from studies of science and technology, innovation, policy, public health, social work, and behavioral sciences.

I propose five foundational elements of an intervention approach: (i) the subject of study; (ii) the mode of study; (iii) the components of inquiry; (iv) theories of change; and (v) implementation, learning, and iteration. In this presentation, I define each element, identify the underlying research supporting it, and explain the use of each tenet and of any further subcomponents.

Next, nine questions are presented to help responsible innovation scholars diagnose gaps in the current state of scientific and technological innovation and formulate an intervention design. Questions one through three relate to gaps between the current state of and vision for responsible innovation. Questions four through seven relate to strategic considerations for intervention design. Questions eight and nine relate to anticipation of potential outcomes and reflection on the alignment between activities and objectives. Finally, a series of decision criteria are offered to help researchers refine a pool of intervention ideas so as to pick one, or a practicable number, of intervention research studies to conduct. The criteria are intended to be pragmatic, including such concerns as access to partners, strength of collaborations, and potential influence of the program.

This presentation will close with a discussion of the challenges associated with intervention research. These challenges include generating studies with sufficient control or counterfactual cases; distinguishing correlation from causation; objections from more descriptive analytical fields of research; and building interventions based on existing theory and evidence.

14:00
Addressing the Community Engagement Gap in Engineering Education: A Short-Course Approach
SPEAKER: unknown

ABSTRACT. Young and early-career scientists and engineers at universities in developing and developed countries are increasingly working with communities on development projects (Benneworth, 2013; Inman & Schuetze, 2010; VanderSteen et al., 2010). Yet despite this well-intentioned interest, early-career scientists and engineers rarely learn about how to work and engage with communities as part of their undergraduate or graduate curricula (Schneider et al., 2008). Successful inclusion of these skills in curricula would have major implications not only for career development, but also for the health and well-being of communities where scientists and engineers work.

Across three countries and two continents, we have conducted several iterations of a short course on community engagement for early-career engineers and scientists to address this gap. The 16-hour course introduces participants to the complexities and challenges of community engagement and development through an experiential and hands-on approach. The program goals are for participants to be better able to: (a) look beyond technology to see how people, values, and other factors influence and are embedded in technologies; (b) listen to and learn from people about these non-technical aspects; (c) empower communities through a greater understanding of how technology relates to decision-making, managing, planning, and resource use in community and practitioner interactions.

Community Engagement Workshop (CEW) activities are designed to help participants systematically consider the societal dimensions of engineered systems and develop a toolkit of questions and methods for engaging with stakeholders. After two pilot deployments in Atlanta, Georgia, USA, and Cape Town, South Africa, we ran two CEWs in 2014: one at Concordia University in Montreal, Canada, and one at Arizona State University in Tempe, USA. We ran a total of 12 activities, ranging from group discussions to role-play to card games to case-study reviews, during the workshop. In addition, three non-facilitator faculty partners with experience working with communities were invited to share their work and provide examples of community-engaged research and practice. Participant learning is evaluated primarily through two pre-post instruments: a project approach questionnaire and a concept map. The project approach questionnaire asks participants to share the actions they would take and the questions they would ask when starting a new project. The concept maps capture participants’ mental models of technological systems and whether and how respondents look beyond technology when thinking about such systems. With regard to responsible innovation, the program targets participants’ capacity to embrace the multiple normative perspectives shaping engineering projects, as well as to engage in productive collaborations.

Preliminary findings from the project approach instrument indicate that the workshop increases participants’ capacity to look beyond technology when conceiving of engineering projects for community development. Results were determined through an inter-rater coding process in which researchers asked: does this response account for social aspects of technological systems? If the answer was “yes,” the response was scored as “1”; if the answer was “no,” the response was noted but not scored. The total numbers of actions (responses to the first question) and questions (responses to the second question) were then tallied. In response to question one, 83% of participants (15 of 18) had more responses dealing with social aspects after the program. The proportion of responses dealing with social aspects went from 33% (18/54) in the pre-survey to 62% (38/61) in the post-survey, an increase of 88%. In response to question two, 56% of participants (10 of 18) had more responses dealing with social aspects after the program, and 61% of participants (11/18) asked more questions after the program. The proportion of responses dealing with social aspects went from 45% (29/65) in the pre-survey to 58% (52/90) in the post-survey, an increase of 29%. Quantitative analyses of the concept maps and qualitative analyses of observational data are in process at the time of this writing, but are anticipated to be complete by the time of the Atlanta Conference.
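The reported percentages and relative increases can be reproduced from the raw counts given above; a minimal sketch (the relative increases match the abstract's figures when computed from the rounded percentages):

```python
def summarize(pre_hits, pre_total, post_hits, post_total):
    """Rounded pre/post percentages of socially oriented responses, plus the
    relative increase computed from those rounded percentages."""
    pre = round(100 * pre_hits / pre_total)
    post = round(100 * post_hits / post_total)
    increase = round(100 * (post - pre) / pre)
    return pre, post, increase

# Question 1 (actions): 18/54 pre, 38/61 post
print(summarize(18, 54, 38, 61))  # (33, 62, 88)
# Question 2 (questions): 29/65 pre, 52/90 post
print(summarize(29, 65, 52, 90))  # (45, 58, 29)
```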

The presentation ends with a reflexive account of the challenges faced in planning and carrying out such a short-course, including designing mutually beneficial collaborations with both technical colleagues and community partners. In addition, possibilities for scaling up these efforts are presented. Preliminary conclusions suggest the program seems an effective means to shift the mindsets of technical students: from one where they see themselves as experts using advanced technologies to fix community problems to collaboratively using science and technology in the service of building capabilities, and creating empowered and resilient communities.

14:15
Changing Perspectives on the Role of Science and Engineering in Society: Science Outside the Lab

ABSTRACT. Science and technology enhance the material and physical well-being of individuals in unprecedented ways, yet persistent societal inequities and environmental degradation cast doubt on the ability of modern science and technology alone to advance broad-based societal progress. The training of scientists and engineers to filter out subjective, societal concerns in pursuit of pure science may be one factor that perpetuates a divide between technological advance and societal progress (Woodhouse & Sarewitz, 2007). Alternative training programs may provide science and engineering graduate students a better way to grasp, and eventually work to restore, the links between technological advance and societal progress.

Science Outside the Lab (SOtL) is a decade-old, two-week policy immersion program in Washington, D.C. that invites policy analysts, lobbyists, business people, and decision makers from across the political spectrum to discuss their work with science and engineering graduate students. These graduate participants face the conflicting realities presented by the special interests jockeying for the future of science and technology, and they experience how societal concerns are inherent in scientific pursuits. The intended program learning outcomes are for students to better: 1) grasp the complex landscape influencing and influenced by science policy, and 2) appreciate the diversity of relationships among individuals and entities engaged in shaping science policy.

This presentation discusses the results of a program evaluation that asked: ‘Does encountering science outside the lab influence participants’ awareness of how science and technology policy shapes the relationships between science and society? If so, how?’

The research team crafted three assessment tools to better understand participants’ perspectives on the role of scientists and engineers in society, as well as the role of information and values in shaping science policy, before and after the program. The three tools were: a pre-post Likert scale survey to gauge participant perspectives (scaled from 1, strongly disagree, to 5, strongly agree); a series of rapid (“burst”) reflections to assess emotional responses to discussions and activities; and a pre-post concept mapping exercise on science policy.

The Perspectives Survey presents participants with a series of questions, developed from a literature review and expert testing, regarding their views on the role of trained scientists and technical experts in science policy (Pielke 2007). Changes in perspective are detected as statistically significant differences in pre-post response on questions, as well as statistically significant differences between question pairs relating the perceptions of ‘self’ vs. ‘others or peers’ in the science policy process. Results from the set of completed pre- and post- surveys of two SOtL program cohorts in 2014 (n=14) are presented.

For “burst” reflections, SOtL participants were asked to write five words that came to mind after each discussion or activity. The recorded words were entered into a program that parses a large English-language database (Warriner et al., 2013) and reports a three-part score for the affective, or emotional, content of each word. The three emotional dimensions scored are valence (happiness), arousal (excitement), and dominance (feeling of being in control). The top 10 most recurrent words for both SOtL cohorts emphasized content words like ‘budget’ and ‘policy’, but also included adjectives like ‘informative’ and ‘interesting’, and verbs like ‘engage’. The content words and themes correlated with those obtained from the concept mapping exercise.
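The scoring step can be sketched as a simple lexicon lookup: each recorded word is matched against valence/arousal/dominance norms and a mean affect profile is computed per cohort. The norm values below are illustrative placeholders, not the actual ratings published by Warriner et al. (2013):

```python
# Placeholder (valence, arousal, dominance) norms; real values come from the
# Warriner et al. (2013) word-norm database.
NORMS = {
    "budget":      (5.0, 3.5, 5.5),
    "policy":      (5.5, 3.8, 5.6),
    "informative": (6.8, 4.0, 6.0),
    "engage":      (6.5, 5.0, 6.2),
}

def mean_affect(words, norms=NORMS):
    """Mean (valence, arousal, dominance) over words found in the lexicon;
    words outside the lexicon are skipped, as in typical norm-based scoring."""
    scored = [norms[w] for w in words if w in norms]
    if not scored:
        return None
    n = len(scored)
    return tuple(round(sum(dim) / n, 2) for dim in zip(*scored))

profile = mean_affect(["budget", "policy", "engage", "unscoredword"])
```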

With the third analytical tool, participants were tasked with constructing a concept map (Novak, 1990; Ruiz-Primo et al., 2001) of the people, organizations, and other entities they think are involved in shaping science policy. These were completed as the first and last activities of the program. Maps were seeded with a central node of ‘science policy.’ Post-workshop, participants created more complex and more highly linked maps, suggesting an increased understanding of science policy. Additionally, the increase in bidirectional or outward links from ‘science policy’ in the post-workshop maps may suggest that participants increasingly appreciate how science policy is socially constructed rather than pre-determined.
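The map metrics mentioned above reduce to simple tallies once a concept map is represented as a set of directed links; a minimal sketch with hypothetical edges (the node names are illustrative, not participant data):

```python
# A concept map as a set of (source, target) links, seeded at 'science policy'.
post_map = {
    ("science policy", "Congress"),
    ("science policy", "federal agencies"),
    ("lobbyists", "science policy"),
    ("Congress", "federal agencies"),
}

def map_metrics(edges, seed="science policy"):
    """Overall link count plus outward/inward links at the seeded node."""
    return {
        "links": len(edges),
        "outward": sum(1 for src, _ in edges if src == seed),
        "inward": sum(1 for _, dst in edges if dst == seed),
    }

metrics = map_metrics(post_map)
```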

Preliminary conclusions from this intervention study are that, after participating in SOtL, science and engineering graduate students increasingly appreciate the plurality of voices and values involved in shaping science policy priorities. Future research will seek to investigate the perspectives of a control population of peers, assess whether and how changes in SOtL participants’ perspectives and understanding persist, and study if these changes manifest in actions and decisions over time.

13:30-15:00 Session 15F: University Industry Ties #1
Chair:
13:30
Differences in Science Based Innovation by Technology Life Cycles: The case of solar cell technology

ABSTRACT. There are many studies that deal with the importance of the role of the science sector, such as universities and public research institutions, in the national innovation system. The contribution of scientific (university) research activities to technological innovation in industry and to economic growth has been well documented. However, due to its multi-faceted nature, technology transfer from the public research sector to the private sector is not as simple as in-sourcing ready-made technological content to be plugged into firms' innovation processes. Gilsing et al. (2011) reviewed this nature by type of transfer mechanism, i.e., either indirect knowledge flow through publications and patents, or direct interaction between universities and firms through joint research programs. They found that the former mechanism is relevant in "science-based regimes", where the nature of scientific knowledge is basic, while the latter is important for "development-based regimes", based on more applied knowledge jointly created by universities and industry. This study builds on the past cross-industry literature on the nature of innovation, rooted in Pavitt's (1984) seminal taxonomy of innovation, but examines the contribution of scientific findings to industrial innovation across the technology life cycle (TLC) of a specific product, solar cells. The concept of the TLC is based on technology evolution within certain industries or product categories over time. The emergence of a new product often comes with a breakthrough or radical innovation that creates technological discontinuity. In Utterback's seminal work presenting the Dynamics of Innovation Model, this first phase of the TLC is called "fluid": product innovation dominates and a variety of products and technologies are introduced to the market.
Then, through market competition among a variety of technologies, a dominant design, i.e., the winner of the market competition, gradually emerges. This phase is called "transitional", where the transition from product innovation to process innovation can be observed. After the dominant design is determined, the TLC moves to the "specific" phase, where incremental innovations based on the dominant design drive market competition. In the specific stage, process innovation to improve product performance becomes important. In order to control for potential bias in patent citation indicators caused by differences in the institutional framework related to UIC across time periods and countries, we focused on one country and a particular time period. As for country selection, we used patent data filed by Japanese firms and public research institutions, including universities, since Japanese applicants have the largest share of patents in our datasets. In addition, all of the top 10 applicants are Japanese firms such as Sharp Corporation, Canon Inc., and Panasonic Corporation. As for the time period, we used data from 1998 to 2007. This period starts in 1998, when active Japanese UIC policy began with the introduction of the Technology Licensing Organization (TLO). For this period, we compared silicon-type solar cells (already in the transitional phase) and dye-sensitized solar cells (still in the fluid phase) to see the differences by TLC. Our empirical analysis suggests that it is valuable to pay attention to UIC's potential to contribute to the creation of commercially important inventions in the later stage of the TLC, but not in the earlier stage, where broadening the technology scope is important. In evaluating UIC policy programs, one should take into account the heterogeneous nature of UIC activities.
We should evaluate UIC not only by judging the value of the outputs created through it, but also by recognizing its effect on the capability building of companies. At both the earlier and later stages, UIC seems to have positive impacts on companies' capabilities. Therefore, it might be effective for policy makers to promote UIC further as a capability-building opportunity as well as an output-enhancement opportunity in order to promote solar cell innovation and other innovations. For companies, it is also valuable to utilize UIC strategically as a capability-building opportunity as well as an output-enhancement opportunity, and there might be more chances to apply UIC to build competitive advantage, especially in the later stage of the TLC. In both the fluid and transitional phases, UIC activities are important for building absorptive capacity, but in different ways. In the earlier stage, the major objective of UIC activities is to create technology acquisition and assimilation capability, while in the later stage, using researchers with UIC experience helps enhance transformation and exploitation capability.

13:45
Innovation, skills, and creativity in Norwegian enterprises
SPEAKER: Mark Knell

ABSTRACT. Background and research question. The creation, transfer and use of new knowledge depend critically on supporting and cultivating creativity and skills within the enterprise. This idea of innovation dates back to at least the time of Adam Smith who explained how skills and creativity could lead to higher productivity through a more sophisticated division of labor. As the division of labor evolves into new and different tasks, some tasks will appear routine and require little knowledge, while others may be knowledge intensive and require certain cognitive, learning and creative capacities. In a similar way, Schumpeter provided the entrepreneur with the creative ability to combine already existing knowledge in different ways. The idea of creative destruction meant that old products, processes and organizational methods were destroyed and replaced by new ones. What we sometimes call the knowledge economy, parallels the ICT revolution, and blurs the line between routine tasks and knowledge intensive tasks.

This paper is mainly about how creativity and skills within the enterprise can bring about different types of research and innovative activities. It focuses on certain creative activities within the firm and how they might affect the innovation process. It is not about the creative industries per se, but about how enterprises gain access to relevant creative skills and stimulate new ideas or creativity among their staff. Innovation and creativity are closely related, but the main premise of this paper is that innovation uses creativity by turning creative ideas into economic use as new products, processes, organizational practices, and marketing strategies. The objective is to demonstrate whether different methods to stimulate new ideas and creativity are successful and whether they lead to new research, or to product, process, and other types of innovation. It will also consider whether these methods are relatively more important in enterprises of a certain size, particular industries, and forms of ownership.

Methodology. The Norwegian survey on R&D and innovation of business enterprises for 2010 contains a unique set of questions on creativity and skills. It was a compulsory survey that covered all firms with at least 50 employees and drew a stratified random sample covering about 35 per cent of firms with 5-49 employees. Almost 6,600 enterprises responded to the survey, which is ideal for econometric modeling. Unlike labor force surveys, this survey asked whether the enterprise employed any of eight different skills, whether in-house or obtained from external sources, and whether it successfully used any of six different methods to stimulate new ideas or creativity among the staff. The skills included graphic arts, product design, multimedia activities, web design, software development, market research, engineering, and statistics and database management. The methods to stimulate creativity included brainstorming sessions, cross-functional work teams, job rotation, financial incentives, non-financial incentives, and training activities that stimulate new ideas or activities. Many different combinations can be made between these variables and traditional measures of R&D and innovative activity.

A binary response model (in this case a probit model with a maximum likelihood estimator) together with descriptive statistics is ideal for capturing information contained in the Norwegian survey on R&D and Innovation as virtually all of the questions on R&D and innovative activities as well as creativity and skills can be put into this format. Following Schumpeter, the CIS 2010 includes four types of innovation that the model wishes to explain: 1) new or significantly improved products; 2) new or significantly improved production processes; 3) new organisational methods; and 4) new marketing concepts or strategies. Independent variables include novelty, creativity and skills, among other determinants contained in the questionnaire.
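The binary-response setup described above can be sketched in a few lines; the function below is the standard probit log-likelihood that a maximum likelihood estimator maximizes. The regressors and outcome values here are purely illustrative, not drawn from the Norwegian microdata:

```python
import math

def norm_cdf(z):
    """Standard normal CDF, the probit link function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def probit_loglik(beta, X, y):
    """Probit log-likelihood:
    sum_i [ y_i * log Phi(x_i'b) + (1 - y_i) * log(1 - Phi(x_i'b)) ]."""
    ll = 0.0
    for x_i, y_i in zip(X, y):
        p = norm_cdf(sum(b * x for b, x in zip(beta, x_i)))
        ll += math.log(p) if y_i else math.log(1.0 - p)
    return ll

# Hypothetical firm rows: [constant, brainstorming dummy, design-skills dummy]
X = [[1, 1, 0], [1, 0, 1], [1, 1, 1], [1, 0, 0]]
y = [1, 0, 1, 0]  # e.g. product-innovation indicator
ll_at_zero = probit_loglik([0.0, 0.0, 0.0], X, y)
```

In practice each of the four innovation outcomes would get its own probit, with the creativity and skill indicators entering as dummy regressors.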

Conclusions. Preliminary findings indicate that about 12 per cent of the innovative firms report they have been successful at using brainstorming sessions; almost 10 per cent were successful at using non-financial incentives for employees; just over 10 per cent supported training in how to develop new ideas and creativity; almost 10 per cent created multidisciplinary or cross-functional work teams; 8.5 per cent were successful at using job rotation of staff; and more than 7 per cent found financial incentives to develop new ideas to be important. The paper will provide conclusive results on the influence of the creative process in Norway on its innovative and research potential.

14:00
The virtue of industry-science collaborations

ABSTRACT. This article analyzes the potential benefits of industry-science collaborations for samples of Flemish and German firms. A firm collaborating with science may benefit from knowledge spillovers and public subsidies as industry-science collaborations are often granted preferred treatment. I shed light on the potential spillover and subsidy effects by estimating treatment effect models using nearest neighbor matching techniques. For both countries, I find positive effects on business R&D. Firms that engage in industry-science collaborations invest more in R&D compared to the counterfactual situation where they would not collaborate with science. Furthermore, within the sample of firms collaborating with science, a subsidy for collaborating with a public research institution leads, on average, to higher R&D in the involved firms. Thus there is no full crowding out of subsidies targeted to science-industry collaborations.