ATLC25: 2025 ATLANTA CONFERENCE ON SCIENCE AND INNOVATION POLICY
PROGRAM FOR FRIDAY, MAY 16TH

08:30-10:00 Session 15A: Technology and Economic Performance
Location: Room 233
08:30
Forecasting AI Adoption Capabilities in Low- and Middle-Income Countries: Benefiting from Absorptive Capacity Framework

ABSTRACT. I. Introduction

Artificial intelligence (AI) has the potential to transform economies, particularly in low- and middle-income countries (LMICs), where technological advancement can contribute to achieving the Sustainable Development Goals (SDGs). However, quantifying AI adoption capability in LMICs remains difficult due to limited data and the nascent stage of AI ecosystems in these regions. Building on the concept of absorptive capacity, this study proposes a novel framework to forecast AI adoption capability in LMICs. Absorptive capacity, initially conceptualized at the firm level, describes a firm’s ability to learn from external knowledge inflows and convert that learning into productive outcomes (Cohen and Levinthal 2000, Strategic Learning in a Knowledge Economy). Khan extended this framework to the LMIC context (Khan 2022, Structural Change and Economic Dynamics), emphasizing that LMICs can act as learning nations in a globalized world. In their latest work, the authors further theorized this concept for AI adoption in low-income countries (LICs) (Khan, Umer, and Faruqe 2024, Humanities and Social Sciences Communications). Building on this previous work, this research forecasts AI adoption capability using absorptive capacity as the theoretical lens, focusing on external inflows (e.g., high-tech imports) and domestic readiness factors (e.g., internet penetration and human capital).

Initially, we forecast AI capability in LMICs based on external knowledge inflows, using proxies such as imports of AI-related hardware (e.g., computers and related technologies). This provides a baseline forecast of AI potential derived from external resources. In the next phase, this analysis is supplemented by domestic factors that influence a country’s ability to internalize and effectively utilize these inflows. Key factors include internet infrastructure, human capital (such as the education and technical skills of the workforce), and institutional readiness. By integrating both external inputs and domestic capabilities, we produce a more nuanced forecast of AI capability that reflects external opportunities as well as internal readiness. This two-step approach allows us to model AI potential in LMICs even with limited direct data on AI development.

This study extends the absorptive capacity framework to AI adoption in LMICs, providing a novel lens to understand technological diffusion in resource-constrained settings. It offers a methodology to forecast AI adoption capabilities, aiding policymakers and development organizations in prioritizing AI investments. By aligning external inflows with domestic readiness, the study addresses global digital inequities, fosters economic development, and supports LMICs' meaningful engagement in the global AI ecosystem.

II. Research Questions

This study seeks to address the following key question(s):

1. How can absorptive capacity be operationalized to model and forecast AI adoption capability in LMICs? This entails the following two sub-questions:
i. What role do external inflows, such as high-tech imports, play in enabling AI adoption in LMICs?
ii. How do domestic readiness factors, including internet infrastructure, STEM education, and governance quality, interact with external inflows to influence AI adoption?

III. Research Hypotheses

Based on the research questions, this study tests the following two hypotheses:

i. Countries importing more AI-related inflows will exhibit higher AI capability.
ii. AI-related inflows interact with domestic readiness factors to enhance AI capability.

IV. Methodology

To answer these questions and test our hypotheses, we employ a robust time series forecasting approach using quantitative methods:

1. Data sources, time period, and sample:
- External knowledge inflows: high-tech imports, specifically AI-related high-tech (ICT) goods (6-digit HS code level data), sourced from the UNCTAD and BACI (CEPII) databases.
- Domestic readiness factors: internet penetration, STEM education enrollment, institutional readiness indicators, and patent data from the World Development Indicators (WDI) and other international databases.
- Time period: 2010–2022, to capture trends and technological progress over time.
- Sample: LMICs as defined by the World Bank in FY 2022–23.

2. Empirical approach:
- Time series forecasting:
  - Univariate forecast: analyze trends in high-tech imports as the baseline measure for AI-related hardware inflows.
  - Multivariate forecast: incorporate domestic readiness factors to model their interaction effects with high-tech imports.
- Statistical models:
  - Autoregressive Integrated Moving Average (ARIMA) models for baseline forecasts of high-tech imports.
  - Vector Autoregression (VAR) models to capture interdependencies between high-tech imports and readiness factors.
  - Dynamic panel data models (e.g., System GMM) to exploit temporal variation and address endogeneity.
- Interaction effects: model the interaction between high-tech imports and domestic readiness factors (e.g., high-tech imports × internet penetration) to assess how these factors collectively influence AI adoption capability. (A minimal illustrative sketch of this pipeline follows below.)
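
To make the two-step pipeline concrete, the sketch below pairs an ARIMA baseline forecast for one country's AI-related import series with a pooled regression containing an imports × internet-penetration interaction. It is a minimal sketch on synthetic data, not the authors' implementation; the ARIMA order, variable names, and data-generating process are all assumptions.

```python
# Minimal sketch of the two-step forecasting approach on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
years = np.arange(2010, 2023)

# Step 1: univariate ARIMA baseline for one country's AI-related imports.
imports = pd.Series(100 + 5 * np.arange(len(years)) + rng.normal(0, 3, len(years)),
                    index=pd.PeriodIndex(years, freq="Y"))
baseline = ARIMA(imports, order=(1, 1, 1)).fit()   # order is an assumption
print(baseline.forecast(steps=8))                  # project toward 2030

# Step 2: pooled regression with an imports x internet-penetration interaction.
panel = pd.DataFrame({
    "imports": rng.gamma(5, 20, 300),
    "internet": rng.uniform(10, 80, 300),          # % of population online
})
panel["ai_capability"] = (0.01 * panel["imports"] * panel["internet"] / 100
                          + rng.normal(0, 1, 300))
X = sm.add_constant(panel[["imports", "internet"]].assign(
        interaction=panel["imports"] * panel["internet"]))
print(sm.OLS(panel["ai_capability"], X).fit().summary())
```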

3. Validation:
- Forecasted capabilities will be compared against real-world trends in AI-related outputs, such as publications, patents, and startup activity, to ensure robustness.

V. Preliminary Analysis

V.1. AI-related Hardware:

Drawing first on our own domain understanding and subsequently leveraging generative AI, we identified 23 categories of AI-related hardware in the UNCTAD and BACI (CEPII) databases.

V.2. Import data:

From the import data (BACI (CEPII)), it is evident that most LMICs import significantly more AI-related hardware than they export. However, countries like Vietnam and the Philippines exhibit higher AI-related hardware exports than imports, likely reflecting their growing roles as emerging production hubs for high-technology products. Exploring trends in these and other countries would provide more definitive insight into whether AI-related hardware imports can serve as a reliable trade-based proxy for AI adoption.

V.3. Some findings from analyses:

Pending execution of our full empirical approach, our preliminary analysis suggests that AI-related imports may serve as a robust proxy for AI advancement. The analysis will also show how far readiness factors mediate the impact of external inflows on AI adoption capability. For example, countries with higher internet penetration and better STEM education systems may exhibit stronger utilization of imported AI-related technologies, while without adequate readiness the potential benefits of external inflows may be diminished. Furthermore, by analyzing historical trends and interaction effects, we anticipate projecting that LMICs with higher absorptive capacity are likely to achieve significant AI adoption capability by 2030.

VI. Conclusion

This study provides a forward-looking framework to forecast AI adoption capability in LMICs, emphasizing the interplay between external knowledge inflows and domestic readiness. By operationalizing absorptive capacity at the LMIC-national level, it offers actionable insights for fostering AI-driven growth in resource-constrained settings, advancing digital equity, and promoting sustainable development.

08:45
Understanding the Innovation and Economic Growth Nexus: A Dynamic Network Data Envelopment Analysis Approach based on National Capabilities

ABSTRACT. 1. Background and rationale

Does a country's capability to create economic performance steadily increase as its capability to create innovative outcomes from R&D investments grows? As Romer (1990) points out in his endogenous growth theory, modern economic growth is achieved through continuous R&D investment, such as the development of new technologies and new ideas. In particular, the more active a country is in innovation through continuous R&D investment, the more likely its corresponding economic performance is to increase steadily (Hasan and Tucci, 2010). R&D investments are an important foundation and key determinant of steady economic growth and productivity gains (Griliches, 1979; Furman et al., 2002; Romer, 1990). However, no clear consensus has yet emerged from existing empirical analyses (Guloglu and Tekin, 2012; Gumus and Celikay, 2015; Pala, 2019; Hammar and Belarbi, 2021). Such mixed results are likely due to the lack of cross-country panel data that can identify the dynamics of national capabilities over a long period of at least 20 years, as well as the lack of analytical models that can capture the nonlinearity between innovation and economic growth.

This study aims to understand the relationship between innovation and economic growth, and the lag between them in particular, from the perspective of the national innovation system (NIS) and the resource-based view (RBV) (Castellacci & Natera, 2013; Dutta et al., 2004). No matter how much R&D investment is made, the ability to convert that investment into innovation outcomes, that is, R&D capability, may be low due to a lack of skilled research personnel or insufficient research infrastructure. Furthermore, even if a country's R&D capability is superior to that of other countries, a lag between innovation and economic growth may occur if it is not supported by the economic capability to translate innovation into economic performance, as suggested by the Swedish paradox. This lag phenomenon can be approached from the NIS and RBV. According to these arguments, the total capability of the NIS can be classified into R&D capability and economic capability (Lu et al., 2016). R&D capability corresponds to innovation, and economic capability corresponds to economic growth. R&D capability can mature with the support of economic capability, and the reverse also holds. The co-evolution of these two capabilities can be nonlinear due to diminishing returns to R&D investment scale and the declining imitation effect of technology-following countries on technology-leading countries over time (Castellacci and Natera, 2013; Coccia, 2018). In this study, we empirically demonstrate that economic capability does not necessarily grow linearly as R&D capability increases. This perspective can broaden our understanding of the relationship between innovation and economic growth.

2. Methods

To address the research question, we constructed a balanced panel dataset encompassing 52 countries over 24 years (1995–2018): 37 OECD countries and 15 developing nations, including China, Indonesia, and Thailand. We used multiple imputation to handle the missing data commonly encountered at the country level. The countries were classified into two groups—technology leaders (15 countries) and technology followers (37 countries)—based on per capita GDP and innovation activity, as measured by U.S.-registered patents. Using this dataset, we applied a two-stage approach with the non-oriented, variable returns-to-scale Dynamic Network Slack-Based Measure of Super-Efficiency (DNS-SBM) model, which integrates super-efficiency into the framework developed by Tone and Tsutsui (2014). This model captures the network structure of the NIS, allowing us to analyze its subcomponents rather than treating it as a single “black box.” The DNS-SBM model offers three key benefits: first, it incorporates super-efficiency, addressing monotonicity issues among decision-making units; second, it reflects carry-over effects, capturing the cumulative and delayed effects of R&D investments; and third, it can track long-term changes in the subcomponents of total NIS capability. For comparison, we also employed the non-oriented Window Network Slack-Based Measure of Super-Efficiency (WNS-SBM), which combines super-efficiency and window DEA based on the model developed by Tone and Tsutsui (2009). To analyze the impact of total NIS capability on economic growth, we regressed the per capita real GDP growth rate on the total NIS capability scores from the DNS-SBM model and on economic growth-related control variables. Additionally, to identify whether catch-up effects or technological advancement drive productivity improvements by country type, we used the Global Malmquist Productivity Index (GMPI) developed by Pastor and Lovell (2005).
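
As a simplified illustration of the efficiency-measurement machinery, the sketch below computes the classic input-oriented CCR DEA score, the basic building block behind SBM-type models. This is not the DNS-SBM super-efficiency model itself (which adds network links, carry-overs, and slacks), and the country data are synthetic assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of decision-making unit (country) o.

    X: (n_dmu, n_inputs) array, Y: (n_dmu, n_outputs) array.
    Returns theta in (0, 1]; 1 means "on the efficient frontier".
    """
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(1 + n)
    c[0] = 1.0                                    # minimise theta
    # inputs:  sum_j lambda_j x_ij - theta x_io <= 0
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    # outputs: -sum_j lambda_j y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[o]]),
                  bounds=[(0, None)] * (1 + n), method="highs")
    return res.fun

# Synthetic example: 8 countries, inputs = (R&D spend, researchers),
# outputs = (patents, high-tech exports). All numbers are assumptions.
rng = np.random.default_rng(1)
X = rng.uniform(1, 10, (8, 2))
Y = rng.uniform(1, 10, (8, 2))
print([round(ccr_efficiency(X, Y, o), 3) for o in range(8)])
```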

3. Results

The DNS-SBM model revealed that the total NIS capability for technology leaders (0.636) was significantly higher than for technology followers (0.506), with the Mann-Whitney test confirming statistical significance at the p<0.05 level. The results highlight the importance of accounting for carry-over effects in measuring the total NIS capability. When analyzing the trends in the subcomponent capabilities that comprise the total NIS capability from 1999 to 2018, the DNS-SBM model revealed a steady increase in R&D capability, rising from 0.260 in 1999 to 0.628 in 2018. However, economic capability declined from 1.505 in 1999 to 0.914 in 2018. Notably, while R&D capability for technology-following countries increased, their economic capability fell more sharply than that of technology-leading countries. This trend suggests a diminishing imitation effect, traditionally viewed as an advantage for latecomers. The findings of the Global Malmquist Productivity Index analysis further validated this trend.

4. Significance and implications

Policymakers should recognize that an increase in R&D capability does not necessarily lead to a corresponding increase in economic capability. Adopting an integrated approach that connects R&D and economic capability is essential to converting innovation outputs into economic growth. This approach requires institutional mechanisms to support technology commercialization, which can help transform innovation outputs into economic added value. This study shows that total NIS capability and institutional factors, such as venture capital utilization, have a statistically significant impact on the per capita real GDP growth rate. Furthermore, policymakers should focus on managing semi-fixed input factors, such as knowledge and capital accumulation, that are subject to carry-over effects.

09:00
Empirical Timing of National Technological Capability Transition to escape the middle-income trap
PRESENTER: Sungjun Choi

ABSTRACT. In the fields of innovation and growth, a country's technological capability has long been recognized as a key endogenous driver of its progression toward a more complex and sophisticated economy. While a substantial body of literature has examined technological capability, studies from the perspective of national innovation systems often categorize it into distinct types, such as production capability, investment capability, and innovation capability; production capacity and technological capability; know-how and know-why; and implementation capability and design capability. Many of these studies implicitly assume a sequential relationship among these types of capabilities. This research makes this sequence explicit, arguing that there are two critical stages of technological capability necessary for national economic development, and it empirically investigates the timing at which transitions between these stages are required. The study utilizes trade and patent data from over 100 countries spanning the past 30 years. Applying the economic complexity framework, it links the product space and the technology space and defines a measurable “related innovation capability” spanning the two. The key findings reveal that, until a country’s economic complexity index (calculated from export data) reaches a range of approximately 0.8 to 1.0, most countries exhibit near-zero related innovation capabilities. However, achieving a higher level of economic complexity beyond this threshold critically depends on the development of related innovation capabilities derived from the product and technology space based on patent data. Translated into GDP per capita, this threshold corresponds to approximately USD 10,000–15,000, which is often the stage at which countries like Malaysia and Mexico aim to escape the middle-income trap. The newly proposed metric provides insight into the level of innovation capability necessary for a country to transition into a more advanced economic stage. When analyzed alongside the economic complexity index, it allows for the identification of critical moments when a shift from production capabilities to innovation capabilities becomes essential. This research contributes to the literature by offering robust quantitative evidence supporting the capability-transition narrative conceptualized in various ways by previous studies. Moreover, identifying the timing of such transitions underscores that a one-size-fits-all approach to science and technology policy is inadequate; instead, the findings offer practical guidelines for policymakers to design tailored long-term national strategies. Although trade and patent data have inherent limitations, this approach can be extended to other proxies for capabilities or applied to specific industries, such as green technology or the automotive sector.
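
For readers unfamiliar with the economic complexity machinery referenced above, the sketch below computes a basic Economic Complexity Index from a binary country-product RCA matrix using the standard eigenvector formulation; the toy matrix is an assumption, and the paper's linking of product and technology spaces goes well beyond this.

```python
import numpy as np

def eci(M):
    """Economic Complexity Index from a binary country-product RCA matrix,
    via the standard eigenvector formulation of the method of reflections."""
    kc = M.sum(axis=1)                       # diversity of each country
    kp = M.sum(axis=0)                       # ubiquity of each product
    # Country-to-country "transition" matrix M-tilde (row-stochastic).
    Mtilde = (M / kc[:, None]) @ (M / kp).T
    vals, vecs = np.linalg.eig(Mtilde)
    # The eigenvector of the second-largest eigenvalue carries the ranking.
    second = np.argsort(vals.real)[::-1][1]
    k = vecs[:, second].real
    z = (k - k.mean()) / k.std()
    # Sign convention: ECI should correlate positively with diversity.
    return z if np.corrcoef(z, kc)[0, 1] >= 0 else -z

# Toy 4-country x 4-product matrix (an assumption, for illustration only).
M = np.array([[1, 1, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 0, 1]], dtype=float)
print(eci(M))
```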

09:15
Capital Gains Tax and Firm Innovation

ABSTRACT. Government tax policies are critical determinants of firms’ financial strategies, influencing their ability to acquire and manage capital. Prior research has extensively examined how tax policies such as corporate tax and R&D tax credits impact corporate behavior (Djankov et al., 2010; Hall & Van Reenen, 2000). Notably, R&D tax credits have been shown to enhance firms’ innovation activities by reducing the cost of research and development investments (Hall & Van Reenen, 2000; Czarnitzki et al., 2011). However, recent studies have highlighted that R&D tax credits disproportionately benefit large, profitable firms, often leading to polarization in innovation activities. Large firms tend to focus on exploratory innovation, while small and medium-sized enterprises (SMEs) primarily engage in exploitative innovation (Balsmeier et al., 2024).

Beyond direct corporate taxation, payout taxes—dividend tax and capital gains tax (CGT)—are increasingly recognized for their influence on firms’ strategic decisions. Payout taxes directly affect shareholder returns, which in turn shape firms’ access to external capital. While the literature has extensively studied dividend taxation (e.g., Chetty & Saez, 2005; Yagan, 2015; Becker et al., 2013), CGT remains underexplored, despite its growing relevance as firms increasingly prefer share buybacks over dividends. Recent research by Moon (2022) underscores the importance of CGT, demonstrating that reductions in CGT stimulate corporate real investment. Building on this foundation, our study investigates how CGT reductions influence firms’ innovation activities and strategic directions, particularly by enhancing financial flexibility and enabling greater investment in R&D and high-quality innovations.

To empirically assess the impact of CGT reductions on innovation, we leverage a 2014 policy change in South Korea, where firm classification criteria for CGT rates were redefined. Before 2014, classification was based on sales, employees, capital, and assets, but the reclassification simplified the criteria to sales only, with industry-specific thresholds. Firms previously classified as medium-sized enterprises but reclassified as small and medium-sized enterprises (SMEs) under the new criteria benefited from lower CGT rates (treated group), while firms remaining classified as medium-sized did not experience tax adjustments (control group). Our analysis uses panel data from 2010 to 2019, comprising 2,020 unique firms and 19,525 observations. The treated group includes 227 firms (2,182 observations), while the control group consists of 1,793 firms (14,343 observations). Employing a difference-in-differences approach, we analyze how CGT reductions affected the quantity and quality of firms’ innovation activities.
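
A minimal sketch of the difference-in-differences design described above, using synthetic firm-year data and a two-way fixed effects specification; the variable names and the log(1+patents) outcome are assumptions, not the authors' exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_firms, years = 200, range(2010, 2020)
df = pd.DataFrame([(f, y) for f in range(n_firms) for y in years],
                  columns=["firm", "year"])
df["treated"] = (df["firm"] < 25).astype(int)   # reclassified as SME in 2014
df["post"] = (df["year"] >= 2014).astype(int)   # after the reform
df["patents"] = rng.poisson(5 + 0.4 * df["treated"] * df["post"])
df["log_pat"] = np.log1p(df["patents"])

# Two-way fixed effects DiD: firm and year dummies absorb level differences;
# the treated:post coefficient is the reform effect on (log) patenting.
m = smf.ols("log_pat ~ treated:post + C(firm) + C(year)", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["firm"]})
print(m.params["treated:post"])
```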

The results demonstrate that CGT reductions significantly enhance both the quantity and quality of innovation. Firms subject to CGT reductions filed 7.1% more patents than those in the control group, relative to pre-reform levels. Moreover, innovation quality, measured by forward citations, improved by 3.5%, with the most notable increases observed in breakthrough innovations with significant technical impact. Interestingly, while the overall number of unsuccessful inventions remained unchanged, the increase in innovation activity was concentrated in high-impact innovations, suggesting that CGT reductions particularly stimulated quality-driven innovation.

To further explore the mechanisms underlying these effects, we conducted additional analyses. First, we examined changes in firms’ R&D expenditures to determine whether enhanced financial flexibility led to increased investment in innovation. The results confirm that CGT reductions facilitated increased R&D spending, indicating that firms expanded their innovation budgets rather than reallocating existing resources. Second, we analyzed the differential impact of CGT reductions on listed versus unlisted firms. Listed firms, which have greater access to external capital through stock and bond markets, accounted for most of the observed increase in patent activity. This suggests that financial accessibility is a key channel through which CGT reductions influence innovation. Third, we investigated the role of firms’ cash holdings. The findings reveal that cash-rich firms experienced a more pronounced increase in innovation outputs following the CGT reduction, while cash-constrained firms did not. These results align with Moon (2022), who argued that cash-constrained firms prioritize tangible asset investments over R&D when acquiring external capital. Our findings highlight the importance of internal financial resources in enabling firms to leverage CGT reductions for innovation. Finally, we examined the balance between exploration and exploitation in firms’ innovation strategies. The results indicate that CGT reductions encouraged ambidextrous innovation, with firms pursuing both the exploration of new technologies and the exploitation of existing ones. This balanced approach likely contributed to the observed improvements in both the quantity and quality of innovation outcomes (Andriopoulos & Lewis, 2009; Lin et al., 2013).

This study makes three distinct contributions to the literature. First, it extends research on CGT by documenting its novel role in shaping corporate innovation strategies. While prior studies have primarily focused on CGT’s impact on real corporate investment (Moon, 2022), our findings demonstrate that CGT reductions significantly influence firms’ innovation activities, particularly by fostering high-impact, quality-driven innovation. Second, this study contributes to the literature on financial constraints and innovation by providing new evidence that tax policies can enhance financial flexibility and, consequently, innovation outcomes. Building on the work of Brown and Petersen (2011), who emphasized the role of cash holdings in stabilizing R&D investments, we show that CGT reductions particularly benefit cash-rich firms, enabling them to undertake ambitious and costly innovation projects. Third, we bridge two independent streams of research: payout taxes and corporate innovation. Traditionally, the finance literature has examined shareholder return policies, while innovation studies have focused on the determinants of corporate innovation. By integrating these perspectives, we identify a novel channel through which shareholder taxation influences firms’ innovation outcomes. This integrative approach provides a comprehensive theoretical framework linking shareholder return policies, financial flexibility, and corporate innovation performance.

In summary, our findings underscore the critical role of CGT reductions in enhancing firms’ innovation activities, particularly by promoting financial flexibility, enabling ambidextrous innovation strategies, and fostering high-quality, impactful innovations. These insights have important implications for policymakers seeking to design tax policies that support corporate innovation and long-term economic growth.

08:30-10:00 Session 15B: Transformation in the Lab: AI, Automation, and Digitalization
Location: Room 236
08:30
Bridging or Widening the Gap? The Role of AI in Shaping Global Research Performance

ABSTRACT. ***Submitted as part of the thematic panel “Transformation in the Lab: AI, automation, and digitalization.”***

The rapid development of generative AI (GenAI) technologies has the potential to transform scientific research practices, with enhanced capabilities for data generation, processing, and analysis. However, there is limited understanding of how GenAI adoption will influence existing disparities in scientific productivity between the Global North and the Global South, regions historically characterised by significant inequalities in access to research resources, funding, infrastructure, and technological tools. This study aims to investigate the impact of GenAI adoption on scientific practices, comparing the Global North and Global South and assessing whether GenAI adoption is reducing or widening the gap in research performance between these regions.

GenAI has introduced new efficiencies and methodologies that allow for more rapid and sophisticated knowledge production. It holds the potential to overcome manpower, infrastructural, and funding limitations by enabling access to cutting-edge data analysis capabilities and facilitating more efficient workflows, which could be especially beneficial to the Global South, where resources are more limited. On the other hand, the increasing use of GenAI also raises critical concerns about accessibility; advanced AI tools often require significant computational resources, stable internet infrastructure, and institutional support, factors that may be readily available in the Global North but are scarcer in many Global South contexts. This disparity could lead to unequal benefits from GenAI and affect researchers in different regions differently.

The primary research questions of this study are: 1) How has the adoption of AI and GenAI tools influenced research practice globally? 2) Has GenAI adoption widened or narrowed the research performance gap between the Global North and Global South? To answer these questions, we start by identifying publications involving AI and GenAI based on targeted keywords, and mapping research activities by institution, country, and region. Research practice will be examined across four dimensions: research output (number of publications), research impact (citation counts and journal reputation), research topics (disciplines and thematic focus), and collaboration patterns (number of coauthors). Subsequently, we compare these research practices across regions, with a particular focus on differences between the Global North and Global South.

Methodologically, we conduct a comprehensive bibliometric analysis using a dataset of 1.6 million publication records from 2021 to 2024. We divide this dataset into two phases: 2021–2022, capturing early AI-related research through targeted keywords; and 2023–2024, incorporating both AI- and GenAI-related keywords to reflect the post-2023 GenAI adoption surge. Data is sourced from Web of Science’s Science Citation Index Expanded (SCIE) and Social Sciences Citation Index (SSCI) databases. These databases provide extensive coverage across disciplines, enabling a robust analysis of publication patterns on a global scale. To classify countries into the Global North and South, we follow the OECD definitions, using the affiliation of the corresponding author as the primary indicator of regional classification and the first author’s affiliation as a robustness check. The timing of this research is particularly relevant, as the rapid expansion of GenAI use in academia since 2023 provides an opportunity to empirically investigate these questions during the early stages of GenAI diffusion across the global research landscape. This study builds upon existing literature on the global digital divide in research and the impact of AI on scientific productivity. Our contributions are twofold. First, by tracking GenAI adoption and quantifying regional disparities, we contribute to ongoing discussions about technological equity and the future of global scientific practice, specifically focusing on the role of AI in potentially reinforcing or reducing existing inequalities. Second, by examining multiple facets of research output, our study offers a comprehensive understanding of how AI adoption affects various aspects of scientific work.
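
As an illustration of the intended regional comparison, the sketch below groups toy publication records by region and phase and compares output, impact, and team size. It is a hedged sketch: the record schema, the country classification list, and the column names are placeholders, not the study's actual pipeline.

```python
import pandas as pd

# Toy records standing in for Web of Science exports (assumed schema).
pubs = pd.DataFrame({
    "year": [2021, 2022, 2023, 2024, 2023],
    "corresponding_country": ["US", "KE", "CN", "BR", "DE"],
    "citations": [12, 3, 8, 5, 20],
    "n_authors": [4, 2, 6, 3, 5],
})
north = {"US", "DE", "JP", "GB", "FR"}            # placeholder list
pubs["region"] = pubs["corresponding_country"].map(
    lambda c: "Global North" if c in north else "Global South")
pubs["phase"] = pubs["year"].map(
    lambda y: "2021-22 (pre-GenAI)" if y <= 2022 else "2023-24 (GenAI era)")

# Compare output, impact, and team size by region and phase.
print(pubs.groupby(["region", "phase"])
          .agg(papers=("year", "size"),
               mean_cites=("citations", "mean"),
               mean_team=("n_authors", "mean")))
```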

The findings from this study have important policy implications. If GenAI adoption indeed widens the productivity gap, targeted interventions may be necessary to support AI access and infrastructure development in the Global South. By identifying both the opportunities and challenges associated with GenAI in global research, this study aims to inform strategies that promote a more equitable distribution of technological resources and benefits in scientific practice worldwide.

08:45
Balancing Bytes and Beakers: Skill change in the digitalisation of industrial science

ABSTRACT. Submission as part of the proposed panel: TRANSFORMATIONS IN THE LAB: IMPLICATIONS OF AI, AUTOMATION, AND DIGITALIZATION IN SCIENCE

The purpose of this paper is to investigate skill change in industrial science in the context of digitalisation and automation. Industrial scientists mostly conduct applied research to address their firm’s market needs and financial goals (Aghion et al., 2008; Perkmann et al., 2019), such as developing new products for commercialisation (Agarwal & Ohyama, 2013) or solving particular problems faced by their companies (Shapin, 2008; Perkmann et al., 2019). The skills required of industrial scientists are increasingly shaped by the digital transformation of scientific practice. This includes advanced laboratory automation, such as robotics tasked with large-scale, everyday manual work such as pipetting and assaying, while software performs virtual experimentation, data analytics, and in silico (computer) modelling. These developments also include the increasing use of artificial intelligence (AI) (Lamb and Davidson, 2005; Olsen, 2012; Ribeiro et al., 2023).

We draw on a study conducted in a large UK firm to shed light on transformations in the ‘texture of work’ of scientists. We collected data through 57 semi-structured interviews with scientists and managers between 2019 and 2021, as part of a larger project on the adoption of new technologies by scientists and the associated managerial strategies in the context of organisational and technological change. The firm employs over 300 scientists working on the design, formulation, and testing of fast-moving consumer goods for the hygiene and personal care markets. The company has made a major investment in the automation and digitalisation of R&D with the aim of “better, cheaper, faster” in-silico-first new product development (i.e., developing new products primarily by means of computer modelling or simulation rather than physical experimentation). This digitalisation is posing skills challenges for the company, as new skill needs have emerged, not least in data analytics and automation engineering. The company’s response has been to recruit new staff and to upskill existing staff through formal training programmes and online training resources.

We critically engage with labour process theory (LPT) and contribute to the upskilling-deskilling debate in management and the sociology of labour literature (Omidi et al. 2023). The traditional deskilling hypothesis suggests that new technology leads to the breaking down of complex skilled work into simple unskilled tasks that reduce the autonomy of workers. We find that the traditional deskilling hypothesis is limited and that, faced by new digital technologies and automation, scientists simultaneously experience deskilling and upskilling. We also note that automation and the growing importance of multi-disciplinary teams are impacting the autonomy of individual scientists. Further, we observe that self-guided and experiential learning plays an important role in digital skills development.

09:00
Searching for theory? Researchers’ perspectives on artificial intelligence and machine learning in manufacturing and materials science research

ABSTRACT. For proposed panel: "Transformations in the Lab: Implications of AI, automation, and digitalization in science"

Artificial intelligence (AI), including machine learning (ML), has been heralded as transformative for scientific research and development. Amidst a long-run decrease in ratios of economic growth to scientific investment, and concerns about quality and replicability in science, proponents argue that AI tools will accelerate scientific discovery, technological development, economic growth, and the development of solutions to global challenges.

In science, AI technologies are increasingly being applied to experimentation, data collection and analysis, and automated lab operations, as well as to scientific writing and proposal development. But will such changes increase the rate of scientific progress? AI-enabled automation promises to increase laboratory throughput because smart machines can perform production or processing tasks more quickly than humans. However, lab automation can amplify and diversify other mundane knowledge tasks, reducing anticipated productivity benefits. Moreover, science is not a simple commodity good. Increasing the rate at which experiments are performed or papers are written does not inherently constitute an increase in scientific progress. A key aim of science is to develop new capacities for prediction or intervention, typically achieved by iterative trial and modification of theories or technologies. Theory is particularly useful to science-driven technological development because it permits “offline trial” of prospective interventions as a faster and cheaper alternative to live trial. However, it is unclear how AI will enable the pivotal aspect of theory development in science to be accomplished more rapidly or effectively.

One particularly salient area of research and development impacted by AI is manufacturing and materials science (MMS). Manufacturing researchers have applied ML-based tools for decades, but new methods and increasing computational capacity have led to a boom in recent years. Accordingly, MMS provides an excellent domain in which to investigate the evolving implications of AI in research and development. This paper reports results from 32 in-depth, semi-structured interviews with MMS researchers on their experiences with AI in engineering research. These researchers are all faculty, doctoral students, or recent doctoral graduates of the ten U.S. universities most productive in manufacturing AI journal and conference papers over the last five years. All are currently pursuing or have recently completed manufacturing research projects using AI. Interviews focused on participants’ experiences with AI in MMS research, including effects on knowledge production and dissemination, skill and resource requirements for research, and career development.

Participants primarily reported using AI for data analysis and for construction of predictive tools. They were cautiously optimistic about effects of AI in MMS research, stating that it permitted them to work on problems which could not be practically addressed with other analytical approaches; to investigate phenomena affected by large numbers of variables; or to analyze data more quickly and efficiently than they could otherwise. Participants suggested that AI is most useful as a complement to modeling and analysis approaches based on longstanding physical theory—either in discerning data features for further investigation, constructing computationally efficient approximations to well-established but computationally expensive theory-based models, or empirically “filling in the gaps” in areas where current theoretical understanding is weak. A few respondents suggested that AI’s scope of application in materials science would narrow as explicit theory advanced, while others worried that overuse of AI could stymie development of fundamental theoretical understanding over the long run. Importantly, several participants noted that AI methods altered the form of research communication, transforming papers and articles into no more than an “extended abstract” for datasets and code.

Participants noted that AI, even when it yields computationally efficient models, requires large quantities of data and computation time to develop those models. Many participants repeated the research adage of “garbage in, garbage out,” indicating that ML tools have limited power to extrapolate beyond the datasets on which they are trained, and that large, high-quality datasets, particularly those drawn from real, proprietary production processes, are hard to acquire. Participants stated that use of AI requires additional skills compared to prior forms of research, but that it does not remove the need for any prior knowledge or skills. Although AI permitted them to perform projects which they otherwise could not, they did not indicate that AI or ML allowed them to conduct research more rapidly or reduced the amount of labor involved in doing so (though some anticipated it might in the future). Perhaps most interestingly, some participants emphasized the vast disparity in computational resources available to universities and to private industry. Some argued that academic researchers had to find niches to remain relevant in an age of large-scale private sector AI. Others suggested that universities needed to do more (perhaps collaboratively) to provide large-scale computational resources on par with industry to their researchers.

These interviews offer an insightful complement and contrast to high-level policy narratives about AI, and, indeed, public-facing statements by science advocates about AI’s effects. Participants are, generally, most excited about AI as a modeling tool. Conventional theory permits trial of modified technologies (e.g., alloys) without real-world implementation and evaluation, speeding the search process for useful alternatives. ML extends this utility. ML can provide a more computationally efficient surrogate for preexisting theoretical models. In some cases, ML can substitute for explicit theory when no such theory is available. Accordingly, it seems plausible that AI will help to increase the rate at which technological opportunities can be identified and exploited, at least within existing paradigms. However, most interviewees were more skeptical that AI will increase the rate of progress in scientific theory itself. It is yet unclear whether AI can assist with theory development in poorly theorized areas—or whether it may hinder or replace explicit theory development. This study illustrates a need to more finely parse hopes for AI-driven accelerations in scientific progress, suggesting a promising case for technological acceleration within well-defined paradigms but a much more ambiguous picture for novel scientific theory.

09:15
Rise of Generative Artificial Intelligence in Science

ABSTRACT. The rapid advance of Generative Artificial Intelligence (GenAI) has garnered significant attention within the scientific community, heralding a potential paradigm shift in research methodologies and scholarly publishing (Charness et al. 2023; Lund et al. 2023). There is already substantial uptake of GenAI tools among researchers, with many leveraging these technologies to brainstorm ideas and conduct research (Van Noorden & Perkel 2023).

However, the deployment of GenAI remains fraught with ethical and epistemic challenges. Generative models are prone to produce erroneous or fictitious outputs (“hallucinations”) (Jin et al., 2023). Moreover, the non-transparent nature of proprietary generative AI systems, exemplified by ChatGPT’s closed-source architecture, raises fundamental questions regarding intellectual property rights and ethical responsibilities in scientific research (Liverpool 2023).

The integration of GenAI into scientific research has sparked debate, raising fundamental questions about its potential benefits and drawbacks. This paper addresses three critical research questions to better understand the diffusion and impact of GenAI in scientific domains. The first question explores how GenAI is diffusing into the sciences. Understanding GenAI’s adoption within and across scientific fields offers insight into its role in shaping science.

The second question is how GenAI influences teams in scientific research. Using GenAI tools like ChatGPT to assist with writing and other tasks has been associated with improved productivity (Noy & Zhang 2023). Conceivably, this might lead to reduced team sizes. Concerns have been raised about the potential of GenAI to replace jobs, including those traditionally held by human researchers (Kim 2023). However, what aspects of scientific authorship could be replaced by GenAI remains largely unexplored. Moreover, as scientific research becomes increasingly specialized and collaborative, with large teams often required to tackle complex problems across various disciplines (Venturini et al. 2024), it is also conceivable that GenAI might lead to expanded team sizes. In short, the impact of GenAI on team size and composition is still unclear. Understanding whether GenAI might reduce the need for large, diverse teams by automating certain roles, or conversely, necessitate even larger collaborations, is important for anticipating future research dynamics and human resource implications.

The third question is about the potential of GenAI to influence international collaborations. International collaboration is often associated with high-quality research outcomes and increased citation rates (Wang et al., 2024). However, geopolitical standoffs between major research performers pose new challenges to global scientific collaboration (Jia et al., 2024). In this context, examining how GenAI (which has risen contemporaneously with recent global tensions) might either bridge or exacerbate these divides is particularly timely. The intersection of GenAI with international collaboration offers a rich avenue for understanding the broader implications of GenAI in a rapidly changing global landscape.

To address these questions, this paper presents an exploratory bibliometric analysis of the rise of GenAI in scientific research. Using OpenAlex, which provides comprehensive scientific publication metadata (Priem et al. 2022), we analyze over 13,660 GenAI publications and 517,931 other AI publications to investigate the characteristics of GenAI compared to other AI technologies. We profile growth patterns, the diffusion of GenAI publications across fields of study and the geographical diffusion of scientific research on GenAI. We also investigate team size and international collaborations to explore whether GenAI, as an emerging scientific research area, shows different collaboration patterns compared to other AI technologies.
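
A minimal sketch of the kind of OpenAlex query behind such growth profiling, using the public API's grouping feature; the search string here is a placeholder, and the study's actual keyword strategy is more elaborate.

```python
import requests

# Count works per publication year matching a placeholder GenAI search string.
resp = requests.get(
    "https://api.openalex.org/works",
    params={
        "search": '"generative artificial intelligence" OR "large language model"',
        "filter": "from_publication_date:2018-01-01",
        "group_by": "publication_year",
    },
    timeout=30,
)
for group in resp.json()["group_by"]:
    print(group["key"], group["count"])
```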

Our initial exploratory analysis reveals that the application of GenAI in scientific research has expanded well beyond its origins in computer science. While early developments in GenAI were predominantly concentrated within the computer science field, we now observe a broader diffusion of these technologies across a diverse range of scientific disciplines. This cross-disciplinary adoption suggests that GenAI is emerging as a general-purpose tool for enhancing research methodologies, accelerating discovery, and addressing scientific challenges in a variety of fields. Additionally, we find that the US has more rapidly adopted GenAI in science fields compared with China (through to 2023). China is highly productive in papers that use other AI methods, reflecting its high investment in established AI technologies. In contrast, the US, now with a lower publication output than China in other AI papers, has demonstrated a rapid shift in focus towards using GenAI in science. This may reflect the flexibility and dynamism of the research and innovation system in the US, supported by national AI initiatives and partnerships between government, academia, and industry aimed at maintaining global leadership in this critical area. While China also has long-term strategies for AI research and innovation leadership, the lag in China’s GenAI research output might indicate that its research institutions are still building expertise in this area.

Early evidence suggests that research teams focusing on GenAI tend to be relatively smaller compared to those working on other forms of AI. This reduction in team size may, in part, reflect the increasing productivity enabled by advances in GenAI tools and techniques. The ability to achieve significant outcomes with fewer collaborators may indicate that individual researchers or smaller teams are now able to manage and innovate more effectively, leveraging these powerful tools to generate substantial results with reduced effort and coordination. Despite the trend toward smaller team sizes, researchers continue to actively seek international collaboration. Even in the face of rising geopolitical tensions, the level of international cooperation in GenAI research remains on par with other AI fields, suggesting that the scientific community recognizes the value of global collaboration in advancing GenAI technologies.

GenAI is still at an early stage in its evolution, with its full implications for science still to unfold. Our study offers an exploratory and formative assessment of the current positioning of GenAI in science and hints at some of the patterns that appear to be emerging. Further opportunities and promising pathways for research on GenAI in science will be highlighted, along with consideration of implications of our findings for science and science policy.

(References omitted in this abstract)

08:30-10:00 Session 15C: Equity & Inclusion in Innovation
Location: Room 225
08:30
Drug accessibility in the European Union: evidence from Supplementary Protection Certificates

ABSTRACT. This paper studies how intellectual property rights create incentives for pharmaceutical companies to bring new drugs to market. It focuses on a specific regulation governing the protection of pharmaceutical products in the European Union (EU). Here, novel drugs are compensated with extra years of patent protection through a Supplementary Protection Certificate (SPC) if their development time exceeds five years. A similar extension exists in the US, where a firm approaches the Food and Drug Administration (FDA) to have its patents considered for a Patent Term Extension (PTE). In the EU, by contrast, the firm has to approach every EU member state’s regulatory body for the same task.

This paper exploits the heterogeneity of the SPC decision across EU countries, investigates the pharmaceutical firms’ decision to apply for SPCs in different EU markets, and, ultimately, whether SPCs generate greater accessibility of novel drugs across EU member states.

The invention and development of novel drugs differ from other inventions in a few particular ways: drugs require hefty fixed costs, involve clinical trials, and consequently take longer to develop (Scherer, 2010; Lakdawala, 2018). A novel drug can only come to market once it has been deemed safe for human consumption by the regulatory bodies of the respective countries. However, clinical trials combined with regulatory approval are time-consuming, and their duration has been increasing over time; the mean duration of drug development has climbed to about 10 years (DiMasi, 2014; Kyle, 2017). Because the patent clock starts as soon as the patent is filed, this leaves approximately 10 years or less of effective protection for a novel drug in contemporary times. The longer a drug takes to reach the market, the shorter its effective patent protection, which is a disincentive to invest in novel drugs. This is where SPCs come into play.

SPCs were introduced in 1992 and came into force in January 1993 in the EU. New member states introduced SPCs as and when they acceded to the EU. However, SPCs are not uniform across all member states: patents covering drugs may be granted an SPC in one member state while being rejected in another (Mejer, 2017).

While an SPC protects a drug by extending the life of the patent(s) covering it, it is not the only way to achieve exclusivity. The longest exclusivity a drug receives is still provided by a patent (20 years), while the shortest protection a drug can receive is through Market Protection (MP) and Data Protection (DP) (a total of 10 years). The exclusivity the drug receives from the European Medicines Agency (EMA) is independent of the protection it receives through its underlying patents. It is when a drug’s effective patent protection falls between 10 and 15 years that an SPC becomes a viable option for a firm. On average, the effective protection period dropped from 15 to 13 years between 1996 and 2016 (Copenhagen Economics, 2018). This implies that most pharmaceutical patents have to be supplemented by SPCs in order to receive protection of more than 10 years.
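
To make the arithmetic behind these protection windows concrete, a worked example under the standard SPC duration formula (Regulation (EC) No 469/2009, Art. 13); the dates are hypothetical:

```latex
% SPC duration: time from patent filing to first EU marketing
% authorisation (MA), minus five years, capped at five years.
\[
  T_{\mathrm{SPC}} \;=\; \min\bigl\{(t_{\mathrm{MA}} - t_{\mathrm{filing}}) - 5,\; 5\bigr\}
\]
% Worked example (hypothetical): patent filed in 2000, first MA in 2012,
% i.e. 12 years of development. Remaining patent life: 20 - 12 = 8 years.
% SPC term: min(12 - 5, 5) = 5 years.
% Effective protection: 8 + 5 = 13 years (below the 15-year overall cap).
```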

Even though SPCs have existed in the EU for some time, we do not have a clear understanding of what the SPCs have incentivized, if any, in the EU. The only authoritative studies that we find in this regard are by Kyle (2017), Mejer (2017), and Copenhagen Economics (2018). One theme that emerges from all the studies is the following: there exists substantial heterogeneity among EU member states in the granting and refusal of SPCs. For example, Mejer (2017) studies 740 drugs that were authorized between 2004 and 2014 and the author finds 26 percent of the SPC applications associated with the drugs to be granted in one EU member state while being either rejected or withdrawn in others. This heterogeneity among EU member states seems puzzling. Why would a firm withdraw its SPC application in some geographical areas and not in others? Our paper is an attempt to disentangle this heterogeneity and consequently study the accessibility of novel drugs.

We rely on SPC refusal decisions from different EU member states’ regulatory bodies to identify outcome changes before and after the refusals. We treat a firm’s decision to file a patent and an SPC as exogenous to a state’s subsequent decision to refuse the SPC application. While EU member states have implemented SPCs at different points in time, such laws have generally come into force as part of a package of legislation during states’ accession to the EU. This packaging confounds the analysis: it is difficult to disentangle empirically the individual effects of each piece of legislation, and whether a firm’s decision to apply for an SPC is driven by the availability of SPCs in a member state or by the other laws that accompanied accession is unknown. Our strategy of using SPC refusals within a state absorbs all the variation for that particular state, mitigating these confounders.

An example illustrates our strategy. EPO patent 174726 is associated with SPCs in five European countries: Austria, Belgium, France, Switzerland, and the United Kingdom. The SPC applications were filed in the following order: Belgium, the United Kingdom, Austria, Switzerland, and France. In Austria, France, and the United Kingdom three SPCs were filed, while in Switzerland only one was filed. In France, all the related SPC applications were refused, while in Austria and the United Kingdom one application was granted and the others were refused. We assume that firms cannot preempt these refusals prior to filing an SPC application in the respective country, so such refusal decisions can be treated as shocks to a firm-country pair. Using this variation, we estimate changes in the intensive and extensive margins of various measures of innovation. (A stylized sketch of this design follows below.)
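
A stylized sketch of the firm-country event study implied by this design, on synthetic data; the outcome, event window, and specification are assumptions rather than the authors' exact model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for fc in range(150):                        # firm-country pairs
    refusal_year = rng.integers(1998, 2012)  # year the SPC was refused
    for year in range(1995, 2016):
        rel = np.clip(year - refusal_year, -3, 3)   # binned event time
        rows.append((fc, year, rel))
df = pd.DataFrame(rows, columns=["pair", "year", "rel"])
df["filings"] = rng.poisson(np.where(df["rel"] >= 0, 3.0, 4.0))
df["log_filings"] = np.log1p(df["filings"])

# Event study with pair and calendar-year fixed effects; rel = -1 is the
# omitted baseline period just before the refusal shock.
m = smf.ols("log_filings ~ C(rel, Treatment(reference=-1)) + C(pair) + C(year)",
            data=df).fit(cov_type="cluster", cov_kwds={"groups": df["pair"]})
print(m.params.filter(like="C(rel"))
```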

08:45
Policy Implications of Skill Changes under Digital Automation: A Processual Approach with the Case of the Platform Economy

ABSTRACT. Industry 4.0 and 5.0, initiated by advanced information and communication technologies, platform algorithms, and other smart technologies, present significant challenges to skill formation and practice in the workplace. Such challenges are, on the one hand, comprehensive, as they spread across all types of tasks, work organizations, jobs, and sectors. On the other hand, they are complex, as smart technologies do not simply replace old skills or generate new ones, but require subtle and nuanced human-machine interactions from different perspectives and in varying degrees. This comprehensiveness and complexity necessitate policy interventions to guide, protect, and encourage frontline service workers and their skill formation in new forms of work. However, the existing policy paradigm has difficulty addressing these issues because it tends to hold linear and unrealistic assumptions of skill upgrading and to overlook the situations, needs, and values of frontline service workers.

Informed by this requirement, this research proposes a processual approach to understanding skill changes driven by Industry 4.0 and 5.0 and establishing a robust foundation of skill-related policies. Following an inductive theorization strategy, the processual approach conceptualizes work as a series of events in which workers and/or technologies make judgments and take actions to move the process forward. With this generic conceptualization, the approach investigates whether and how (1) technologies trigger radical changes in the types, sequences, and numbers of events in work processes; (2) technologies engage with, shift, interrupt, and/or restrict the judgments or actions made by workers in each event; and (3) technologies transform the relations between judgments and actions in each event.

The processual approach has three advantages. Methodologically, it adopts generic conceptual tools applied inductively, avoids pre-assigned and hierarchical categorization of skills, and requires researchers to base their analysis on solid case-by-case empirical investigations, thus addressing the complexity of challenges imposed by the current automation. At the explanatory level, its generic conceptual framework makes it applicable to a wide spectrum of work, especially service work, thus addressing the comprehensiveness of such challenges. Also at the explanatory level, it adopts a symmetric view of the role of technological and social/organizational factors in skill changes, thus offering explanatory tools to seriously investigate the agencies and affordances of smart and powerful technologies.

This research applies the processual approach to the case of taxi-driving and ride-hailing, representative of the service automation initiated by the platform economy and supported by algorithms and AI. Rather than a monotone replacement of old skills and generation of new ones, the skill changes from taxi-driving to ride-hailing are characterized by repositioning and refocusing. Regarding repositioning, workers’ spatiotemporal skills are marginalized but still relevant, emotional and communicative skills become central, and digital skills of speculating about and anticipating algorithmic judgments emerge. More importantly, regarding refocusing, the focus of drivers’ skills shifts from maintaining regular, suitable, and profitable work processes to addressing the extra and unpredictable events generated by algorithmic judgments. The repositioned and refocused skills have limited utility and transferability in enhancing ride-hailing drivers’ performance and job mobility. They are largely performed in a fragmented and passive manner due to the opacity and constant updating of algorithms, which place drivers in extensive information asymmetry.

The processual approach offers a generic yet empirically based way to understand the changes in work and skills under the impact of Industry 4.0 and 5.0, thus providing a robust foundation for skill-related policies. The approach and the case study underscore the need for policies that, first, establish skill standards based on sector-by-sector, in-depth analysis of work practices and organizations, with the participation of platform firms and workers, and enforce such standards by having platform firms incorporate them into algorithmic rules. Second, it is important for policies to enhance the transparency of algorithms in the workplace by (1) having platform firms publicize the principles of their algorithmic rules and reduce the frequency of algorithm updates, and (2) conducting broadly defined, not necessarily technical, algorithm audits with workers’ participation. Third, and in the long run, it is essential to promote social recognition of emotional, communicative, and digital skills in service work. No matter how repetitive and routinized, these skills are central to the everyday practices of human-machine interaction required by Industry 5.0 and fundamental to the operation of any large-scale smart technological system.

The approach and findings of this research have broader public policy implications for Industry 4.0 and 5.0. They highlight the importance of deliberative policy processes and measures grounded in thorough investigations of marginalized target populations affected by emerging technologies. Such policy processes and measures should balance the goals of promoting technological and industrial advancement with ensuring social fairness and inclusiveness, and facilitate skill formation that ultimately benefits the operation of large-scale smart technological systems.

The data in this research were collected from three sources in the context of China. First, multiple rounds of participant observation and semi-structured interviews were conducted in Xi’an, China, from 2018 to 2023, involving over 250 conventional taxi drivers and ride-hailing drivers working for Didi, and focusing on their everyday work practices, skill formation, and skill performance. During this period, the ride-hailing giant Didi held a dominant position in the Chinese market and had settled into stable technological and business arrangements. Second, semi-structured interviews with 30 operation analysts and algorithm engineers were conducted in 2024, focusing on the principles and practices of algorithm design and operation. Third, documentary analyses were performed on online and media articles about ride-hailing drivers’ skills and city ride-hailing policies.

09:00
Gender bias in grant allocation shows a decline over time

ABSTRACT. Research question The issue of gender bias in research grant allocation remains on the agenda, as research findings differ over time, between qualitative and quantitative studies, and between small and large studies, and furthermore depend on the design of the studies and the covariates included in the analysis.

Data and methods In a recent project, we conducted eight case studies covering nine different funding instruments in six countries. Some studies are at the funding instrument level and others at the disciplinary panel level. This is an important difference, as grant evaluation and application ranking generally happen at the level of (disciplinary) panels, and the more aggregated studies at the research council or instrument level suffer from disciplinary heterogeneity. Most of the cases studied are individual career grants; some others are thematic grant programs. Not all gender differences can be called bias. If differences in grant success are based on differences in merit, that is, in academic performance, these can be seen as legitimate. In that case there is no direct gender bias in grant allocation. Of course, the merit variables can themselves be biased by processes external to the grant allocation process. In this project we focus on the question of direct gender bias, and only in a few cases has the existence of indirect bias been tested for. The correlational studies are complemented with approaches better suited to identifying causal relations, such as experiments, mediation analysis, and longitudinal studies.
- Study 1 implemented a randomized controlled field experiment in a Spanish regional funding organization. The causal analysis revealed no significant gender effect in grant evaluation, nor an interaction effect between the gender of the applicant and the gender of the reviewer.
- Study 2 is a correlational analysis of gender bias in the Swedish Research Council (SRC).
- Study 3 is a correlational study of recent funding instruments of the SRC, Science Foundation Ireland (SFI), and the Austrian FWF.
- Study 4 analyses the same SRC funding instrument, but at the panel level, reducing the heterogeneity that exists at the instrument level.
- Study 5 is a field experiment comparing peer review models of a German funding organization (AvH), studying among other things gender disparities across the models.
- Study 6 tested whether gender bias occurred in the scores and grant decisions of a funding instrument of the Dutch NWO around 2003. The (non)significant effect of gender was not mediated by performance, suggesting the absence of indirect bias as well (a minimal sketch of such a test follows this list). Looking at predictive validity (do granted applicants outperform the others later in their careers?) suggested that, with hindsight, several very good female applicants should have been funded. So the lack of predictive validity did have a gender effect.
- Study 7 examines the German Emmy Noether Fellowship, showing among other things that gender had no significant effect on the grant decision, but age clearly did.
- Study 8 replicates the Wenneras and Wold (1997) study, which claimed that women needed an additional three Science or Nature papers to obtain a similar competence score. The replication still finds a significant gender effect on the competence score, but it is an order of magnitude smaller than in the original study. Analyzing gender bias in the decisions, we found a non-significant advantage for men in getting grants.
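To make the direct/indirect distinction concrete, here is a minimal sketch of a mediation-style check of the kind described in Study 6, in Python. The data frame, column names, and data-generating step below are hypothetical placeholders, not the project's actual data or code.

    # Hedged sketch: direct vs. indirect gender bias, following the logic of
    # Study 6. All variables are synthetic; no gender effect is built in.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "female": rng.integers(0, 2, n),     # hypothetical 0/1 gender indicator
        "pubs": rng.poisson(10, n),          # hypothetical performance proxy
    })
    df["score"] = 5 + 0.3 * df["pubs"] + rng.normal(0, 1, n)
    df["granted"] = (df["score"] > df["score"].median()).astype(int)

    # Total effect of gender on the review score.
    total = smf.ols("score ~ female", data=df).fit()
    # Add the performance mediator: if the gender coefficient is unchanged and
    # performance absorbs little of it, there is no indirect bias via merit.
    mediated = smf.ols("score ~ female + pubs", data=df).fit()
    # Gender effect on the binary grant decision, controlling for performance.
    decision = smf.logit("granted ~ female + pubs", data=df).fit()

    print(total.params["female"], mediated.params["female"], decision.params["female"])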

Main findings The first finding is that gender bias in review scores does not necessarily result in biased grant decisions. Grant decisions show a more balanced pattern and less gender bias than the review scores, which is in line with other major studies done in the 2010s. This implies that at the decision-making level, bias in review scores seems to be (partly) corrected. A main question of our project was whether gender bias has changed over time. To increase the empirical base, the results of several other case studies were added to the analysis. We found that over a period of several decades, gender bias in favor of men declined, and in the recent period there may even be a small advantage for women. In several cases we also tested for indirect gender bias by using academic performance variables as mediators. However, we generally did not find such mediation effects. Further research should look for explanations. The observed patterns may be the effect of gender equality policies at the funding organizations, but also of the prominent position of gender issues on the public agenda.

Conclusion and discussion The findings suggest that direct gender bias in grant allocation is declining and may even be disappearing. Initial evidence was found that indirect bias does not seem to be present. These positive trends should be monitored, as it is not guaranteed that they cannot be reversed. The research raises several methodological questions. (i) Some of the funding instruments, especially the thematic grants, have teams of applicants for which the ‘gender’ of the applicant is difficult to determine. Different gender mixes occur, and one needs to take that into account. (ii) Several of the analyses probably suffer from heterogeneity, especially where they are done at the level of a funding scheme that includes all fields. More panel-level studies are needed. (iii) The cases used in the analysis together have an N of about 8,000, but several cases are relatively small, implying that one can only detect large gender effects, and small gender effects may have been missed. Large-scale studies therefore remain relevant, especially if multi-level designs can be applied to include panel characteristics in the analyses. (iv) Data requirements are increasing: bigger and richer data are required and do exist; research funding organizations in particular may have a task here in making those data accessible to the scientific community. (v) Finally, the set of variables measuring merit needs to be extended, and the criteria that implicitly or explicitly play a role in grant evaluation need to be adequately defined and operationalized.

08:30-10:00 Session 15D: Transition Policy
Location: Room 331
08:30
Transforming STI policy for sociotechnical transitions: The OECD Agenda for Transformative STI Policies

ABSTRACT. Economies and societies need to transform to meet multiple challenges, including climate change, biodiversity loss, disruptive technologies, and growing inequalities. Science, technology and innovation (STI) can make essential contributions to these transformations, but governments may need to be more ambitious and act with greater urgency in their STI policies to meet these challenges. Sustained investments and greater directionality in research and innovation activities are needed, and these should coincide with a reappraisal of STI systems and STI policies to ensure they are “fit-for-purpose” to contribute to transformative change agendas.

The OECD Agenda for Transformative STI Policies (TrA) provides high-level guidance to STI policymakers to help them formulate and implement reforms that can accelerate and scale up positive transformations in the face of mounting global challenges. The TrA was published at the OECD Committee for Scientific and Technological Policy (CSTP) meeting at Ministerial level in April 2024 (see https://doi.org/10.1787/ba2aaf7b-en). It was a prominent component of the meeting, and its main messages were subsequently incorporated into the meeting’s “Declaration on Transformative Science, Technology and Innovation Policies for a Sustainable and Inclusive Future”, which was signed by 44 countries and the European Union.

The TrA proposes three transformative goals for STI to pursue: (i) Advance sustainability transitions that mitigate and adapt to a legacy of unsustainable development; (ii) Promote inclusive socio-economic renewal that emphasises representation, diversity and equity; and (iii) Foster resilience and security against potential risks and uncertainties.

There are synergies and trade-offs between these transformative goals, particularly in the context of ongoing political debates that sometimes pitch economic competitiveness goals against sustainability transitions and energy security, for example. And there are likely multiple pathways for reorienting STI policies and systems to meet these goals. The TrA outlines a common set of STI ‘policy orientations’ for governments to implement to help drive transformative change, namely the need to: • Direct STI policy to accelerate transformative change • Embrace values in STI policies that align with achieving the transformative goals • Accelerate both the emergence and diffusion of innovations for transformative change • Promote the phase out of technologies and related practices that contribute to global problems • Implement systemic and co-ordinated STI policy responses to global challenges • Instil greater agility and experimentation in STI policy

Many of the necessary reforms are familiar to the STI policy community, but barriers remain, for example, in scaling up and institutionalising policy innovations. Moreover, transformative change is often associated with radical reforms, but small incremental changes may cause a system to shift qualitatively when it is close to a tipping point. This perspective lies at the heart of the TrA and acknowledges that bringing about fundamental transformational change in STI will require changes across many fronts, adapting as lessons are learnt on what does and does not work.

Accordingly, in translating these policy orientations into concrete actions, the TrA provides high-level guidance for ten STI policy areas where there are opportunities to facilitate the transformation of STI and STI policy systems. These policy areas cover all aspects of STI policy and governance, including the following issues: 1. How to direct public STI funding and private financing to support transformative change? 2. How to gear research and technology infrastructures towards transformations? 3. How to leverage enabling technologies to advance transformations? 4. How to nurture the skills and capabilities required for STI-enabled transformation? 5. How to ensure structural and market conditions are conducive to transformation? 6. How to develop and use strategic intelligence to guide transformation? 7. How to engage society in STI to further transformative change? 8. How to deepen STI co-operation between innovation system actors for transformation? 9. How to promote cross-government coherence to help coordinate STI-enabled transformations? 10. How to leverage international STI co-ordination to support transformation for the public good?

To complement this high-level guidance, the OECD is also developing policy toolkits, supporting peer learning on specific policy challenges, and providing country-specific support services. These activities are being mainstreamed and embedded across the OECD’s STI work programme where they can be co-produced with policymakers and experts.

The presentation will (i) outline the TrA, its rationales and its positioning vis-à-vis other OECD STI activities; and (ii) highlight some of the challenges in formulating and rolling out the TrA, particularly with regards to its wide breadth, the tensions and trade-offs between some of its elements, and the complexity, uncertainty and long-term nature of transformations. The presentation will also describe some of the policy guidance and toolkits currently under development and the results of their testing in country settings.

*Current role: The authors are policy analysts working in the OECD who developed the TrA and are responsible for its promotion and rollout.

08:45
Exploring an innovation policy for public AI – Rationales, examples and learnings

ABSTRACT. Background and research questions Since the launch of ChatGPT in 2022, artificial intelligence (AI) has been widely adopted across the private, business, and public sectors. As a general-purpose technology, it drives societal and economic transformation, enabling resource efficiency, workforce support in aging societies, and sustainable systems like circular economies and smart energy grids. However, AI raises concerns about social inequality, access to infrastructure, and fair competition. Geopolitically, dependence on foreign AI infrastructures highlights the need for technological sovereignty. To address these challenges, stronger state involvement in regulating and providing AI infrastructures is being discussed. However, acknowledging the social and political importance of AI does not answer the strategic and operational questions of how the public sector should take part in the provision and application of AI infrastructures. The design and implementation of useful public AI infrastructures represent a research case for which experiences already exist but which has not yet been discussed systematically. By raising the importance of public AI, we aim to initiate the required research and debate on an innovation policy mix for public AI; this work is funded by the Mozilla Foundation. We aim to answer the following research questions: • What are the definitions, delineations, and rationales for public AI? • What kinds of public AI already exist? • What is the impact of public AI on research and innovation? • How can innovation policymaking support public AI?

Methodology Our methodology is based on a review of the literature on digital innovation, AI components, and the dimensions of public AI. To analyze existing public AI activities and related policies, we apply an explorative qualitative case study research design using two data sources. First, we use our definition of public AI to search publicly available information and documents on relevant examples of public AI in Europe and the US. Second, we conduct semi-structured interviews with experts on the development of, or policymaking for, public AI. Both data sources are analyzed via a qualitative coding process aimed at exploring the different types of public AI in practice and how different policies and involvements of the public are realized. By analyzing successful applications and challenges, we generate an overview of possible implementations of public AI for strengthening research and innovation and of the options for policymakers to support such moves.

Conceptual literature review Based on a review of the literature, three dimensions of public AI are important: public AI needs to be trustworthy, prioritize social goals over profit maximization, and encompass collective decision-making processes. Therefore, we define public AI as forms of AI which are trustworthy, meaning they fulfill the conditions of privacy, fairness, trust, safety, and transparency; aim to create additional value for society as a whole, that is, they are not primarily driven by profit motives; and whose inputs (compute, algorithms, data, human resources) and access are at least partially governed, regulated, or supplied as public or common goods. The three dimensions are used as a filter to identify examples of public AI infrastructures or applications and to clarify how innovation policy instruments can provide directionality to the creation of public AI. Additionally, using different combinations of these dimensions allows for the categorization of conceivable variations of public AI. While each dimension can characterize different versions of AI applications and usage on its own, public AI exists only where aspects of all three meet.

The societal and market dynamics of AI necessitate state involvement in public AI, addressing challenges like market concentration and transparency. Public AI counters monopolistic tendencies in AI markets dominated by tech giants, promoting fair competition and societal welfare. It supports non-commercial developers and underfunded actors, fostering experimentation and innovation. Public AI can drive solutions to societal challenges, including ecological transitions, by optimizing resource use while balancing AI’s high energy consumption. Geopolitically, it strengthens technological sovereignty and societal values, fostering collaboration between regions like the US and EU. International cooperation on public AI, and agreements akin to nuclear non-proliferation, may be necessary to regulate and mitigate risks from problematic AI applications while ensuring its use for the common good.

Preliminary results Public AI is already being implemented through various initiatives. Estonia’s Bürokratt, a public-private AI system, enables 24/7 state communication for tasks like applying for child benefits or alerting households during military exercises. It exemplifies public AI by prioritizing open development and societal goals. Similarly, the EU’s GAIA-X project addresses data security concerns associated with private cloud services like AWS, offering a trustworthy public alternative for secure AI development. Meanwhile, Mozilla’s Common Voice project provides open datasets for AI voice training, funded by donations. These initiatives highlight diverse approaches to public AI, blending innovation, public utility, and trustworthy data use.

The role of the state is critical in shaping public AI by creating conditions for its growth and integration into the AI ecosystem. Governments must guide AI applications strategically, anticipating potential risks and prioritizing public AI development. Clear policy signals, funding for R&D, and infrastructure projects like GAIA-X point in this direction. The state can also promote public AI through preferential procurement practices and regulations that ensure a level playing field and encourage competition. Addressing labor conditions is essential, as fair treatment of the AI workforce reflects societal values and shapes job profiles in an AI-driven economy. Additionally, the state must facilitate knowledge exchange to support the integration of AI into jobs and personal applications. These actions require a policy mix combining financial, regulatory, and informational tools. Effective orchestration ensures public AI supports innovation while addressing social challenges and promoting equitable AI development. After finalizing the data collection and analysis, we aim to provide an overview of options for public AI applications as well as a discussion of different policy instruments to realize them.

09:00
Artificial Intelligence's Past as Prologue: a (re-)geopoliticization of technology

ABSTRACT. What can AI’s policy past teach us about its future? How can states increase capacity for regulatory impact on AI technology?

Our current moment is not the first time artificial intelligence (AI) technology has been promised, funded, hyped, and feared. This paper explores a parallel episode in 1984, as the Reagan administration was intervening in the United States’ antitrust architecture to facilitate the computing industry’s research and development and while the US military’s DARPA was actively pursuing AI development through its Strategic Computing Initiative. Using an empirical archive of policy documents, media coverage, and US Congressional debates, I explore the ‘specter’ of Japan’s Ministry of International Trade and Industry (MITI) and the fear in the US that the Japanese ‘Fifth Generation Project’ would leapfrog American technologies. Despite the lack of expert scientific consensus, once the idea was seeded that Japan’s AI technology was a threat, the fear and hype became a political tool used by a variety of actors to advance agendas and justify decisions, the consequences of which we see today.

This paper puts this political history in conversation with the technological narrative. While algorithmic advances made in the 1980s are still in use, breakthroughs were achieved only when these were combined with centralized computational power and vast human-generated data. Today’s AI capabilities are possible only through a small number of the largest corporations ever to exist, reinforcing monopolistic winner-take-all dynamics. Most AI we have today depends on the infrastructural capacity of a small number of Big Tech companies, a significant change from the AI envisioned in previous iterations.

This combination of AI technology and monopoly platforms has underscored two related political struggles. Globally and locally, we see a new energy for pushing back against technology, including across unlikely coalitions and in various legal and regulatory domains, with calls to ‘break up Big Tech’, what some have referred to as the “techlash”: growing wariness and opposition to algorithmic technologies and the corporations that facilitate them. At the same time, states are (re-)invoking AI technology and its components in geopolitical struggles for dominance, particularly between the US and China. Big technology is once again a site of nationalistic politics for global dominance.

On the one hand, we see a re-politicization of technology and a rising desire for a change in the dynamics between state and platform. Simultaneously, new sites of geopoliticization of technology in state v. state conflicts are emerging, with questions of technological control and capacity at the center. To understand this current moment, this paper revisits a parallel episode in 1984, when the central debate was the political economy question: state v. market. The critical juncture bears striking resemblance to today: AI research and its emerging industry were similarly experiencing a rollercoaster cycle of boom and bust; the US military was actively pursuing AI technology through its research agency, DARPA; antitrust law was in political play; and there was significant talk of the ‘specter’ of Japan’s Ministry of International Trade and Industry (MITI), invoking a fear in the US that the Japanese ‘Fifth Generation Project’ would leapfrog American technologies. Despite the lack of expert scientific consensus, once the idea was seeded that Japan’s AI technology was a threat, the fear became a political tool used by a variety of actors to advance agendas and justify decisions, setting us on a path to the political economy we have today. At the same time, shifting loyalties among US politicians and parties saw the rise of so-called “Atari Democrats,” while opposition to ‘high tech’ came largely from conservatives. Just as a politics of fear mobilized political action, a politics of hype was likewise a necessary component of bringing into being the conditions under which later technological innovation and corporate power were possible.

Why do states today lack the capacity to regulate AI and the corporations that facilitate it, whether through effective policy, antitrust or antimonopoly action, legislative, or judicial strategies? This, despite growing political will to do so and despite the recentralization of technology as a site for geopolitical dominance. To answer this question, this paper returns to the critical juncture of 1984. My three-fold argument is that: a) political decisions subsequently enabled the infrastructural centralization that makes AI feasible, even though political actors could not foresee or imagine the necessary centralization, because the AI technology being hyped and promised at the time differed in a critical way from what we have today; b) unlike earlier infrastructure monopolies, the assemblage of technologies that makes AI feasible, particularly algorithmic code, centralized cloud computing, and vast volumes of data, also underpins much of the state’s capacity to carry out its functions, meaning states today are so deeply dependent on and interwoven with these critical technologies that regulatory disruption challenging AI infrastructure would also impair state function; c) building on the work of path dependency scholars, at a crucial moment in the 1980s, the US state diminished its own capacity for regulatory power in service to the ideals of market fundamentalist reforms, even as the same state apparatus was deeply invested, financially as well as emotionally, in achieving the technological feat of AI.

AI technology has always been tied up with goals of state capacity, slotted into existing geopolitical and ideological conflicts. Understanding this dynamic, particularly how it interacts with the infrastructural capacity of the base technologies that make AI possible, is critical to formulating effective policy interventions today, both for harnessing the innovation benefits of AI and for reining in its destabilizing impacts.

08:30-10:00 Session 15E: Risk and Governance of Emerging Technologies
Location: Room 235
08:30
Addressing the Paradoxical Nature of Emerging Technologies in Transformative Policies

ABSTRACT. Background Governments struggle to deal with technological changes in ways that promote socioeconomic gains while preventing harm. However, developing policies for emerging technologies is complicated as they can simultaneously be tools to solve problems and generate new societal challenges. This duality reveals a paradox in emerging technologies. Embracing this paradox requires multi-faceted interventions to address uncertainty and the co-evolutionary dynamics between policy, technology, and society (Edmondson et al., 2019; Haddad et al., 2022; Pfotenhauer & Jasanoff, 2017).

Prior work has considered technological change as an exogenous element in policymaking (Edmondson et al., 2019; Rogge & Reichardt, 2016). Thus, existing frameworks seek to cope with socio-technical change by minimizing contradictions and conflicts in policy mixes (Edmondson et al., 2019; Forster & Stokke, 1999; Haddad et al., 2022). We depart from prior literature by analyzing contradictions within emerging technologies policy mixes, highlighting how maximizing coherence and consistency may not always be possible or desirable.

Drawing from paradox theory (Brunswicker & Schecter, 2019; Smith & Lewis, 2011), transformative innovation (Haddad et al., 2022; Schot & Steinmueller, 2018), and policy mixes (Edmondson et al., 2019; Rogge & Reichardt, 2016), we develop a framework that illustrates how policies interact with emerging technologies. We contribute by extending frameworks for policy mixes by acknowledging the paradoxical nature of emerging technologies and providing insights about the mechanisms policymakers use to navigate the lack of coherence and consistency within and across policies.

Theoretical Framework Emerging technologies can simultaneously be a tool to solve societal problems and a generator of new societal challenges. Consider, for example, artificial intelligence (AI). As a tool, AI can shorten medical imaging times, reduce energy consumption, and increase efficiency in medical care (Doo et al., 2024). Stakeholders in healthcare can “pull” on the development of AI, influencing policies related to both AI and healthcare. Conversely, using AI in healthcare raises new challenges, such as data privacy issues, biases in algorithm training, and ethical considerations (Acemoglu, 2021; Bottomley & Thaldar, 2023). Challenges will mobilize stakeholders to “push” for policies to deal with them, such as new regulations.

As a result of the pull and push forces, dedicated policies are created for emerging technologies, which become part of and interact with other related policies. Policymakers must coordinate between different policy mixes to address the tool/challenge tension of emerging technologies (Jarzabkowski et al., 2022; Stone, 2011). Following our example, more than 60 countries have enacted AI policies (OECD.AI, 2021) comprising dedicated instruments (e.g., scientific funding, law reforms) and links to related policies (e.g., privacy regulation and healthcare strategies).

Elements within policy mixes are not necessarily coherent or consistent (Rogge & Reichardt, 2016). For example, AI strategies often have regulations to prevent use or increase the cost of high-risk applications alongside funding and incentives in the same areas (e.g., healthcare). The coexistence of contradictory elements within the policy mix is not necessarily something policymakers want to eliminate; it can result from a purposefully crafted strategy to deal with a complex problem with conflicting stakeholders.

When policymakers “walk through” paradoxical tensions, they confront them through iterative responses of splitting and integration, which can deliver short-term performance alongside long-term sustainability (Smith & Lewis, 2011). For example, policymakers use policy strategies such as discursive techniques and shifting temporal priorities (Jarzabkowski et al., 2022; Smith & Lewis, 2011; Stone, 2011). The latter is only possible when policymakers have a “paradox mindset” that allows them to thrive amid tensions (Miron-Spektor, Ingram, Keller, Smith, & Lewis, 2018). This dynamic coordination of the policy mix system will provide directionality to the technology and, in turn, influence its evolution. For example, the Chilean AI strategy is firmly committed to using AI to fight climate change through initiatives such as using AI to prevent environmental law violations (MinCTCI, 2021). Its updated version has more specific actions, such as creating focused, dedicated research funding and strengthening data relevant to using AI against climate change (MinCTCI, 2024). However, it is ambiguous when dealing with AI’s own impact on the environment, highlighting its relevance but stopping short of a strong push that could generate enmity in the private sector.

Discussion Acknowledging the paradoxical nature of emerging technologies is crucial for policymakers. It helps them navigate tensions and contradictions in policy mixes rather than striving for coherence. Further, embracing emerging technologies’ tool/challenge duality will allow policymakers to shape technology policy in favor of their priorities. Socio-technical systems reflect social and environmental needs (Diercks et al., 2019; Schot & Steinmueller, 2018), influencing expectations about technologies’ role in reaching desired futures (Fagerberg, 2018). In our framework, technological change is not “outside” the policy mix, as in prior work, but becomes the core of a system of policies and strategies.

How policymakers navigate the system's paradoxes will influence how technology evolves. The dynamic coordination of policy mixes influences elements across multiple dimensions, such as institutional arrangements, funding instruments, public opinion, and political priorities. The resulting assemblage will drive technological evolution, influencing how it varies and which technologies and uses are acceptable (Grodal et al., 2023).

Policymakers do not necessarily strive for coherence and consistency, and how they design and implement mechanisms to generate a dynamic equilibrium of the paradoxical system is not trivial. For example, using experimental approaches to deal with uncertainty and conflicting interests (e.g., living laboratories and sandboxes) requires adequate institutional arrangements, resources, and skills often unavailable in the public sector. Other mechanisms, such as discursive techniques to reconcile conflicting stakeholders and technological paths, require policymakers to have a “paradox mindset” and political abilities to craft the right discourses for different conflicting elements. Moreover, managing short- and long-term horizons is not always possible when changing government coalitions makes policy continuity challenging.

We provide policy implications for crafting policy mixes that acknowledge the paradoxical nature of technologies. Our work can support how governments prepare themselves with the right assets to develop emerging technology policies. Moreover, our work serves as an analytical lens for policymakers to analyze the policy ecosystem and consciously strategize how to navigate paradoxes to maximize socioeconomic benefits and prevent harm from technological change.

08:45
Technology readiness level mapping as a basis for governance of emerging technologies

ABSTRACT. The novelty, fast growth, and high impact that define emerging technologies demand the attention of society and so of policymakers (Rotolo, Hicks & Martin, 2015). Yet the uncertainty and ambiguity also characteristic of emerging technologies challenge the formation of appropriate sociotechnical governance responses (Collingridge, 1980), not least because many governance options are available (Borrás & Edler, 2020). In the late 2010s, the seemingly imminent introduction of autonomous vehicles (AVs) onto public roads created considerable uncertainty within well-established patterns of governance of state transportation systems. To prepare for something whose final form and arrival time were unknown, states needed to understand how this technology would affect the efficient movement of people and freight and when AVs would appear on their roads. To meet this challenge, politicians and agencies in many states commissioned advice. The advisors drew from and/or participated in the then-widespread industry, media, and academic conversation around this emerging technology. Prominent in this conversation were predictions of the arrival time of autonomous vehicles. The public might be interested only if this were more than a theoretical possibility. To guide their decision making, investors also wanted to know how close the technology was to being realized. Entrepreneurs who needed investments were happy to oblige by predicting their launch of automotive autopilots. In a crowded space of voices competing for attention, optimism reigned supreme. This oft-repeated dynamic has been formalized in the Gartner Hype Cycle, a diagram released every year, on which AVs appeared between 2013 and 2017. The focus on predicting arrival time was the approach adopted in the reports states commissioned to procure advice. In this paper we argue that such predictions, precisely because they are motivated by commercial considerations, cannot be relied upon. Instead we leverage an approach used by companies and government agencies managing innovation that we suggest could provide a better basis for assessing emerging technologies and developing appropriate policy responses. We argue that to narrow down policy options, decision makers should examine where the technology is now. Hypothesizing that earlier and later stages of technology development require different policy treatments, we operationalize this idea by creating a mapping of technology readiness levels (TRL) to policy options. We examine the case of US state policymaking for autonomous vehicles, specifically the recommendations made by policy advisors, and map them to TRL levels.

TRLs NASA invented the TRL framework in the 1970s as a tool to aid in managing development projects (Mankins, 1995). TRLs offer a way to grade a development project on how much progress has been made and how much remains to be done. Companies incorporate TRL assessments into periodic project reviews to decide whether funding should continue or end (Olechowski, Eppinger & Joglekar, 2015). TRLs have become something of a lingua franca among engineers and have been used in European funding programs (Olechowski, Eppinger & Joglekar, 2015).

Methods Our analysis began with a search for reports on autonomous vehicles commissioned by states. Our search terms combined the state name, report, DOT, AV, autonomous vehicle, driverless, and CAV. Requiring all the terms did not work, but interchanging them brought good results.
Terms such as DOT followed by the state name, CAV, and "report" were the most effective. Of course, our method suffered the challenges common to all work focusing on the grey literature: we likely missed some reports because we depend on reports being posted online (Grimmer, Roberts, and Stewart, 2022), and we are vulnerable to the "found data" problem of not knowing what the universe of reports looks like and assuming the found reports are representative. We found 75 PDFs of state AV reports and extracted their 791 recommendations. We then inductively coded each recommendation into one or more of 25 categories: connected, consultant, data collection, data-privacy, definitions, DoT, drivers, education, fund, get out of the way, industry, infrastructure, insurance-legalization, insurance-liability test, legislation, misc, monitor-research, partner, pilot, plan, platooning, road maintenance, task force, test.

Findings To align recommendations with the TRL framework, we first simplified the 9 TRL levels into 4: research, technology development, testing, and deployment. We then classified each recommendation as appropriate for one of the four levels (an illustrative sketch of such a mapping appears below). Table 1 shows the four levels and their correspondence to TRLs, as well as the alignment between recommendation topic categories and the four levels. [Table 1 cannot be displayed in this format.] The thinking behind this is that in the early stages of an emerging technology, when laboratory research is the focus, appropriate policy responses are watchful: monitoring progress and commissioning research on possible implications, as well as doing nothing and ensuring that those under your jurisdiction also do nothing. As the innovation emerges into the development phase, when prototypes might be displayed, it may be time to commission a task force to assess the situation and develop recommendations, define the issues at stake, and educate the public. When the technology is ready for real-world testing, you might want to attract pilot projects to position your jurisdiction at the forefront, foster economic development, and inform your public about the future potential. Finally, when the technology is deployed, i.e., available for purchase, you need to be ready, in this case with rules on driver licenses, insurance arrangements, data privacy restrictions, and even changes in road maintenance.

Looking at the number of recommendations by category over time, we find a pattern predicted by the hype cycle (Figure 2): the earliest reports contain the largest share of recommendations at TRLPM 4, almost half. After that, recommendations become more conservative, with a larger share at TRLPM 1, which then decreases over time as the share of level 3 increases. This suggests that the TRL mapping is a productive way of analyzing policy for emerging technology. In addition, TRL assessment is a tool that policymakers can use as a basis for policymaking for emerging technologies.
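The following minimal Python sketch illustrates the category-to-level tallying described above. The assignment of categories to the four simplified levels is illustrative only; the actual assignment is given in the paper's Table 1, which is not reproduced here.

    # Hedged sketch: tally coded recommendations by simplified TRL-aligned level.
    # The category-to-level assignment below is illustrative, not the paper's Table 1.
    from collections import Counter

    CATEGORY_TO_LEVEL = {
        "monitor-research": 1,   # 1 = research: watch, commission studies
        "get out of the way": 1,
        "task force": 2,         # 2 = technology development
        "definitions": 2,
        "education": 2,
        "pilot": 3,              # 3 = testing: attract real-world pilots
        "test": 3,
        "legislation": 4,        # 4 = deployment: licensing, insurance, privacy
        "data-privacy": 4,
        "road maintenance": 4,
    }

    def tally(recommendations):
        # A recommendation may carry several category codes; unmapped ones are skipped.
        levels = [CATEGORY_TO_LEVEL[c]
                  for rec in recommendations
                  for c in rec
                  if c in CATEGORY_TO_LEVEL]
        return Counter(levels)

    # Example: two coded recommendations from one hypothetical report,
    # yielding one recommendation at each of levels 3, 2, and 4.
    print(tally([["pilot", "education"], ["legislation"]]))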

09:00
Impact of Government Facial Recognition Technology Sourcing on Facial Data Sharing: An Experimental Study in Digital Tax Services

ABSTRACT. Since 2021, US taxpayers have been required to undergo enhanced identity verification through facial recognition technology (FRT) with the Internal Revenue Service (IRS). The initial step of this FRT-enabled verification asks individuals to take and upload a selfie and a copy of personal identification documents to set up taxpayers' online tax accounts through an FRT system offered by ID.me, a commercial company specializing in FRT verification.

The collaboration between the IRS and ID.me aimed to reduce fraudulent accounts and enhance taxpayers' data security (Collier, 2022). However, skepticism has arisen about the IRS outsourcing facial recognition services to a profit-driven tech firm, especially regarding its data processing, use, and protection practices (Metz, 2022). In February 2022, some members of the US Congress voiced concerns about ID.me (Alms, 2022). Moreover, integrating FRT into online tax services raises fears of perpetuating systemic bias and discrimination: recent studies indicate higher error rates in identifying people of color even with advanced FRT algorithms (Buolamwini, 2022). Facing such firestorms, the IRS officially announced in February 2022 that it would drop FRT (IRS, 2022) and later announced a transition to a government-procured FRT system, Login.gov (Heckman, 2022). However, as of May 2024, ID.me remains the sole login method for accessing IRS online accounts despite concerns and proposed alternatives (Riley, 2023).

While governments have been adopting various new AI technologies in operations and delivering public services over recent years, public perceptions of government AI adoption, like FRT, play a crucial role in enhancing democratic accountability, public oversight, and digital transformations in public services (Brewer et al., 2023; Schiff et al., 2023). With an increasing focus on comprehending public perceptions of AI, recent studies explore perceptions of human and automation interactions in program participation (Miller et al., 2022), attitudes toward automated decision-making (Miller & Keiser, 2021), perceptions of justice regarding human and AI decision-making in school appointments (Alon-Barkat & Busuioc, 2023), and citizens' views on fairness and acceptance of rule-driven versus algorithmic decision-making (Wang et al., 2023). Nevertheless, little public administration and policy work has explored public attitudes about governments contracting biometric functions for online public services.

Governments contracting out biometric services have a long history in the US. FRT for border control and security, arranged between governments and the biometric industry, has been an established practice (Norval & Prasopoulou, 2017). However, FRT in everyday digital government services, as in the provision of digital tax services, is still emerging. It is unknown how FRT and other biometrics will influence public service provision and how the trend will continue to unfold. Ni and Bretschneider (2007) argue that contracting citizen-based information systems that involve critical government information and personal records, including tax and service data, requires special attention to privacy and security. Since the agents to whom individuals give their facial information may change, how the public, or taxpayers, adjust their risk perceptions and express their willingness to share is of both theoretical and practical interest.

Prior research on data sharing tends to agree that individuals' willingness to share personal information depends on the expected benefits, the types of information being shared, and the agents who collect, manage, and use their data in specific contexts (Cheong & Nyaupane, 2022; Degli Esposti, 2014; Mossberger et al., 2023). While individuals expect security and efficiency in the digital tax service environment, to what extent are they, in fact, willing to share facial information, considered the most sensitive type of information, with governments? How is this willingness affected by governments choosing a third party to collect facial data, and by the potential for governments to opt for different sourcing structures? Understanding and addressing individuals' willingness to share facial data in response to the various parties providing FRT services for digital tax not only promotes American democracy but also encourages active public participation and advances data security and privacy practices in digital public services for better policymaking.

Therefore, this study delves into the public's willingness to share personal data with governments through FRT. Informed by the literature on data sharing and the "publicness" of organizations, and relying on the contextual integrity framework, this study conducts a vignette experiment to answer the following research question: how does individuals' willingness to share facial data with government agencies vary by government FRT sourcing structure? The findings contribute to the emerging AI and biometric literature on the responsible use of FRT and inform ethical policy for government contracting of new tech services.

08:30-10:00 Session 15F: Open Science
Location: Room 222
08:30
Differentiating Data Reuse in Scientific Publications

ABSTRACT. Background: Demonstrating and improving FAIR principles in research data remains challenging due to limited methods for gathering reliable success measures, especially for identifying and measuring dataset reuse and investigating its underlying mechanisms. To address this challenge, new methods are needed to systematically differentiate dataset reuse from original use. Current methods for measuring dataset reuse rely primarily on repository usage tracking, which counts views and downloads on dataset web pages.[1] While these statistics offer insights into dataset impact, their usefulness is limited by challenges in differentiating types of use, investigating downstream impact, and measuring use reliably in the face of analytics gaming and automated internet activity.

Rationale: A more reliable measure of research dataset reuse can be achieved through dataset citations in scientific research publications. The mention of a dataset in a publication implies that the dataset was reviewed, analyzed, and potentially reused by the author(s), going beyond viewing or downloading. The challenges here are two-fold: first, citing datasets is not yet a standard practice in science, and the reference sections of scientific publications do not reliably cover all dataset citations, with many datasets being cited "informally"[2]; second, some mentions of datasets in scientific publications are made by the dataset producers, i.e., the researchers who developed the dataset and publication as part of one research program, and these instances need to be excluded from measures of dataset reuse. Given these needs and challenges, our team pursued the identification of research dataset mentions in scientific publications and the subsequent differentiation between publications by grant awardees who developed datasets as part of their research (dataset producers) and publications by other researchers who reused those datasets (dataset "re-users"), for subsequent analysis of factors related to successful dataset reuse.

Research Questions: This project supports FAIR principles in research data by leveraging scientific publication data and metadata to understand biases not evident in repository data alone and provide evidence of dataset reuse and influencing factors. The project aims to address key questions that cannot be answered through repository data alone, including questions around (1) the extent to which dataset mentions and citations can be disambiguated by user (data producer or “re-user”); (2) the researchers who cite data and their characteristics (impact networks); (3) the differences between data that is reused and data that is not; and (4) the differences in practices around datasets that are reused and those that are not (including differences based on discipline, geography, funder, and dataset type).

Methods: To address the first of the two challenges mentioned above (informal data citations in publications), the project team searched for a set of AIDS/HIV datasets in the full text of a subset of Scopus-indexed peer-reviewed publications using machine-learning models. The team started with models that were previously developed in a Kaggle competition and subsequently used as part of the Democratizing Data pilot projects and the NIH Generalist Repository Ecosystem Initiative.[3] The models were deployed to identify whether these datasets were mentioned in publication full text, and an additional model was used to search through references. To address the second challenge (differentiating dataset citations by data producers and "re-users"), the team is testing methods to differentiate between dataset producers and dataset re-users through a combination of approaches, focusing on elements such as overlap between the study PI and publication author(s), overlap between study affiliation/location and publication affiliation(s), mention context, funding acknowledgments and identifiers, data availability statements, and other textual content.
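As a simplified stand-in for the model-based mention detection described above (the actual models come from the Kaggle competition and the Democratizing Data work and are not reproduced here), a naive alias-matching baseline in Python:

    # Hedged sketch: naive dataset-mention finder over publication full text.
    # Real detection used machine-learning models; this regex baseline only
    # illustrates the task of flagging dataset names and their aliases.
    import re

    def find_dataset_mentions(full_text, aliases):
        # Return the aliases (dataset names/acronyms) found in the text.
        hits = []
        for alias in aliases:
            # Word-boundary, case-insensitive match to avoid partial-word hits.
            if re.search(r"\b" + re.escape(alias) + r"\b", full_text, re.IGNORECASE):
                hits.append(alias)
        return hits

    aliases = ["Lung HIV", "Longitudinal Studies of HIV-Associated Lung Infections"]
    print(find_dataset_mentions("... data from the Lung HIV study were ...", aliases))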

Results: Preliminary results comparing study/dataset metadata and publication metadata associated with the Longitudinal Studies of HIV-Associated Lung Infections and Complications (Lung HIV) show that greater overlap between study location metadata on NIH-funded clinical trial datasets and affiliation metadata on scientific publications is indicative of publications resulting from the dataset producer, while less overlap indicates the opposite. Here, overlap was calculated by string-matching text in the location/affiliation fields of the dataset metadata and publication metadata, and results were validated manually by reading the publication text surrounding the dataset mention(s) in each article. These results support our hypothesis that overlap between metadata elements on affiliations indicates that the publications likely stemmed from the original study/researchers. We hypothesize that the same will hold for metadata on funding identifiers, researchers, and publication context; for this reason, we anticipate that continued work linking dataset metadata to the metadata of publications that mention them will help reliably differentiate dataset-producer publications from dataset re-user publications.
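A minimal sketch of the string-matching overlap described above, assuming location and affiliation metadata arrive as free-text strings; the scoring function is a hypothetical illustration, not the project's implementation:

    # Hedged sketch: token-level Jaccard overlap between study location metadata
    # and publication affiliation metadata; higher overlap suggests the
    # publication comes from the dataset producer rather than a re-user.
    def token_overlap(study_locations, pub_affiliations):
        study_tokens = {w.lower() for s in study_locations for w in s.split()}
        pub_tokens = {w.lower() for a in pub_affiliations for w in a.split()}
        if not study_tokens or not pub_tokens:
            return 0.0
        return len(study_tokens & pub_tokens) / len(study_tokens | pub_tokens)

    print(token_overlap(["University of Pittsburgh, Pittsburgh, PA"],
                        ["Dept. of Medicine, University of Pittsburgh"]))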

Next Steps and Conclusion: In addition to continuing the work mentioned above, next steps include investigating the drivers and barriers to data use/reuse by conducting a comparative analysis of owner-disclosed dataset information across reused, used, and unused datasets. Dataset characteristics, such as DOI presence, data type, granularity, delivery format, documentation quality, and linkage to publications, will be analyzed to understand factors promoting downstream data use/reuse. The hypothesis is that dataset quality and disclosure will be linked to greater downstream data use/reuse. Overall, this research is advancing FAIR principles in the research data ecosystem by developing robust methodologies for distinguishing between data use and reuse. This will enable insights into variables that influence data reuse, allow the identification of impactful researchers both as data sharers and data “re-users”, and ultimately incentivize data contributions toward a more open scientific community.

Acknowledgements: Research reported in this publication was supported by the Office of Data Science Strategy of the National Institutes of Health under award number 1OT2DB000002. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

References: [1] Fenner, M., Lowenberg, D., ... & Chodacki, J. (2018). Code of practice for research data usage metrics release 1. https://doi.org/10.5281/zenodo.1470551 [2] Irrera, O., Mannocci, A., Manghi, P., & Silvello, G. (2023, September). Tracing data footprints: Formal and informal data citations in the scientific literature. In International Conference on Theory and Practice of Digital Libraries (pp. 79-92). Cham: Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-43849-3_7 [3] New York University, Elsevier, & Johns Hopkins University (n.d.). Democratizing Data Search and Discovery Platform User Guide. https://democratizingdata.gitbook.io/userguide

08:45
Competition or Diversion? Effect of Public Sharing of Data on Research Productivity of Data Provider

ABSTRACT. Sharing research data is crucial for advancing scientific progress, and various institutional efforts have supported this endeavor. However, scientists often hesitate to make their data publicly available due to concerns about the potential negative impact on their research productivity. They fear that sharing data may enable competitors to address similar research problems, thus intensifying competition and limiting the exclusive publication opportunities of the data providers. Although the existence of this "competition" effect is acknowledged, the literature on scientists' strategic choice of research problems within priority-based scientific reward systems, on their competitive behavior in resource sharing, and on the reasons scientists seek others' resources for research raises theoretical ambiguity regarding these concerns. Data providers have a time advantage in publishing their findings before data recipients because they have a head start in using the data for research. Consequently, data recipients may consider it risky to compete with data providers for the same publication opportunities and may be motivated to pursue different research inquiries; alternatively, those who can use the data to address different research inquiries may be the ones willing to use the data in the first place (referred to as "diversion").

The probable presence of the "diversion" effect prompts an empirical question: how does public sharing of research data affect the research productivity of data providers in practice? Despite the importance of this question for improving the institutional designs that support scientists' sustainable data-sharing practices, existing research provides limited clues. For instance, studies have examined scientists' internal motivations for resource sharing, their resource-sharing practices, and whether scientists gain benefits from public data sharing, such as increased citations to their research works. Although these studies contribute to understanding what motivates scientists to share their research data by shedding light on the "benefit" side of doing so, scientists' salient concern in sharing their data, the possibility of losing publishing opportunities, has remained underexplored.

This research dearth may be due to the empirical challenges in identifying the causal impact of data sharing: because data sharing is often determined by scientists' endogenous decisions (i.e., scientists are likely to share data based on the expected impact on their research productivity, or those who expect "rewards" such as increased citations are the ones willing to share their data in the first place), it has been challenging to investigate the causal effect of public sharing of research data on the research productivity of data providers. The lack of evidence is especially notable considering the recent development of various institutional measures to encourage research data sharing as part of the open science movement. Because the success of these efforts hinges on the extent to which data-sharing scientists gain benefits or encounter disbenefits from sharing their data, such evidence can serve as an important foundation for improving policies to encourage data-sharing practices among scientists. To fill this gap, we investigate the impact of scientists' public sharing of research data on their research productivity and the underlying mechanisms.
Because scientists often make data-sharing decisions at their own discretion, weighing potential benefits (e.g., increased citations to their research) against potential costs (e.g., loss of exclusive publication opportunities), comparing the research productivity of data-sharing scientists to that of those who did not share may introduce bias due to self-selection. Analyzing scientists’ data sharing under exogenously imposed regulations, such as mandated data-sharing requirements, rather than under their own decisions, can help mitigate this endogeneity. To this end, we utilize the policy initiatives of the U.S. NIH to promote research data sharing. In 2008, the NIH began requiring investigators to share their data through the Database of Genotypes and Phenotypes (dbGaP) if the data were obtained through Genome-Wide Association Studies (GWAS) supported by the NIH. This rule was expanded in 2015 by the genomic data-sharing (GDS) policy, which required large-scale genotype, phenotype, and sequence data on human or nonhuman genes generated through NIH-funded studies to be publicly shared via the same archive. As a result, more investigators had to publicly share their data via dbGaP, significantly increasing the number of shared data entries in this archive. Because these policy changes were mandated by the NIH, one of the largest funding agencies in the United States, individual scientists subject to the rule had limited discretion over their data-sharing decisions, so analyzing the impact of data sharing in this setting helps mitigate the typical endogeneity issues described above. Using data from NIH-sponsored research projects that shared data in dbGaP from 2008 to 2020, and their matched projects, we analyze changes in the research productivity of data providers after the data are publicly disclosed through dbGaP. Our fixed-effect (FE) project-year panel difference-in-differences (DID) regression analysis and synthetic control (SC) approach found no evidence of a detrimental impact of publishing data on the data providers’ research productivity, which is consistent with the prevalence of the research diversion effect. Previous studies suggest that the prevalence of the diversion effect over the competition effect may vary depending on various factors, including the data providers’ career stage (seniority) and the degree to which other scientists are interested in reusing shared data for research. Our additional analysis found no evidence of heterogeneity in the impact of data sharing on the data providers’ research productivity by these factors. To directly examine whether the “diversion” effect was behind the null impact, we analyze the textual similarity between publications by data providers and recipients by applying a natural language processing (NLP) method. Our analysis shows that data recipients tend to address research problems different from those of data providers, supporting the prominence of the diversion effect over the competition effect.
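As a rough, hedged illustration of the identification strategy, here is a minimal project-year panel DID sketch with two-way fixed effects; the input file and every column name (pubs, treated, post, project_id, year) are hypothetical, and the authors' actual specification, controls, and matching procedure may differ.

```python
# Minimal two-way fixed-effects DID on a project-year panel (illustrative only).
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("dbgap_panel.csv")           # hypothetical: one row per project-year
panel["did"] = panel["treated"] * panel["post"]  # treated project x post-release year

# Project and year dummies absorb the fixed effects; SEs clustered by project.
m = smf.ols("pubs ~ did + C(project_id) + C(year)", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["project_id"]})
print(m.params["did"])  # a near-zero estimate would be consistent with "diversion"
```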

09:00
Improving research productivity: hindering factors, remedies, and the promise of open science. A systematic review

ABSTRACT. Investment in science, technology, and innovation has grown substantially for decades. For example, OECD countries increased gross domestic expenditure on R&D (GERD) as a proportion of GDP from 2% in 1991 to 2.7% in 2021. This increase is in line with the understanding that R&D is a core driver of economic growth (Bush, 1945; Jones & Summers, 2020; Salter & Martin, 2001), and can contribute to the pursuit of other Sustainable Development Goals (European Commission, 2016; UNCTAD, 2018).

Recent research, however, has reminded us that an increase in R&D expenditures does not translate into a proportional increase in the value of innovations. Research productivity, measured as a relation between inputs and outputs (or outcomes) of research, declines over time, as documented across several sectors (Bloom et al., 2020). In health care, for example, studies show that increased R&D investment has not yielded a proportional increase in new drugs (Garnier, 2008; Scannell et al., 2012). This is attributed in part to the secular expansion of the knowledge frontier, which increases the difficulty of discovering new ideas (Bloom et al., 2020; Jones, 2009). Yet while the knowledge frontier has always been expanding, research productivity seems to be declining more rapidly in recent years than in the past, and there has been little systematic effort to understand the potential causes behind these changes in research productivity.

This paper seeks to address this gap by systematically reviewing the factors that may hinder research productivity. We do so by means of a systematic review of the literatures analyzing research inputs and outputs across different sectors and fields of science. In addition to hindering factors, we also systematically review the remedies that can contribute to improving research productivity.

Among those, the paper further reviews the role of open science (European Commission, 2009) as a research practice that may increase research productivity, ceteris paribus. Open science has emerged as a new research paradigm that can help address sustainability challenges that are poorly served by established scientific practices. Crucially for this paper, open science practices can directly influence research productivity by reducing the cost of research inputs (e.g., shared data and equipment) (Fell, 2019) and by accelerating the achievement of research outputs through collective intelligence (Nielsen, 2013). Despite these potentials, the literature has not systematically linked open science to research productivity. This paper seeks to address this second gap as well.

We conduct a systematic literature review of over 200 selected documents on research productivity. We do this in three phases. In the first phase, we group papers by the definition (implicit or explicit) that they use to refer to research productivity. We identify three different approaches and foci, which characterize different research communities. Publications in the scientometric framework define research productivity as a relation between research inputs and pieces of knowledge codified in bibliographic outputs: scientific publications or patents. Publications in the innovation framework define research productivity as a relation between research inputs (e.g., funding, human capital) and innovation outputs (e.g., technologies, patents, ideas, solutions to problems) or, mostly, economic outcomes (e.g., labor productivity, total factor productivity). And publications in the societal impact framework define research productivity as the relation between research inputs, how they are organized or prioritized, and their potential effects on society.

Next, we focus on the most cited literature within the innovation framework, which has dominated the discussion on the decline of research productivity and its consequences. Using this literature, we study two key issues central to current research policy and practice debates: (i) the factors that may be hindering research productivity; and (ii) the remedies to increase the value of research investments into valuable outputs (e.g. innovations).

We find four main results. First, on the question of whether research productivity has been declining across sectors, our literature review shows that there is no consensus: 33% of the documents refer to a decline, and these studies are mainly focused on R&D-intensive sectors. Second, on the question of the main factors that hinder research productivity, most documents discuss factors related to R&D routines, while factors related to R&D incentives, the fast-expanding endless frontier, knowledge recombination, and market pressures are less studied. Third, concerning the question of what remedies may increase research productivity, studies largely focus on remedies to improve R&D routines; remedies related to governance, increasing R&D resources, setting strategies for R&D priorities, management of organisations, and access to human capital are less studied. Fourth, despite the focus on improving R&D routines to address limitations that may hinder productivity, we observe that most categories of hindering factors can be addressed by a combination of several remedies, and each remedy can contribute to addressing several hindering factors.

Finally, we expand our review to include all papers within the innovation framework that analyze the role of open science practices in influencing research productivity. The literature identifies three main open science practices that can influence research productivity: open data, open source, and open collaborations. The main contribution of open science is to improve R&D routines, tackling the most important category of hindering factors. Open science practices, including data sharing and transparent methods, enhance research efficiency, quality, and reliability by promoting resource reuse, collaboration, and the use of digital tools to accelerate research and facilitate discovery. The second most important contribution of open science practices is to increase R&D resources by combining different sources of knowledge, data, and experience. These practices break down knowledge silos and accelerate research progress by fostering knowledge sharing, collaboration, and collective intelligence, addressing key hindering factors of research productivity. The last category of remedies concerns the transformation of research incentives through changed governance that takes on board open science mandates. These mandates of open access and collaboration foster more effective knowledge sharing and shift research towards exploration and curiosity, encouraging researchers to take on ambitious and risky projects beyond the limitations of traditional incentives.

10:30-12:00 Session 16A: Transformation of the Innovation and Entrepreneurship Ecosystem
Location: Room 330
10:30
The Diverse Research Foci of the Innovation Ecosystem

ABSTRACT. What are the methodological approaches to studying innovation ecosystems across different disciplines? Is the innovation ecosystem becoming a unifying framework across disciplines, or is it diverging as research perspectives diversify? This study explores whether the innovation ecosystem concept converges into a unified theoretical framework or diverges across disciplines. To answer this question, we systematically reviewed articles indexed in the Web of Science database and published over the past decade to identify core trends, theoretical foundations, and methodological approaches. From the 130 core references identified, we traced 14,439 forward citations. Surprisingly, only one paper among these 14,439 forward citations was found to reference at least one paper from each of the identified thematic areas, highlighting the divergence of the existing literature. Thematic analysis of 130 papers sampled from this group revealed further methodological divergence and diverse research foci across network complexity, complementary forces, and ecosystem performance. Scholars interpret the concept to fit their disciplinary and methodological training rather than engaging in cross-disciplinary research.
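A minimal sketch of the coverage check described above, using hypothetical toy data structures; the actual analysis was run over Web of Science citation records.

```python
# Which citing papers reference at least one core paper from every thematic area?
# theme_of maps core-paper ids to themes; citing maps each citing paper to the
# set of core papers it references. Both are hypothetical placeholders.
theme_of = {"core1": "network complexity", "core2": "complementary forces",
            "core3": "ecosystem performance", "core4": "network complexity"}
citing = {"p1": {"core1", "core2", "core3"},   # covers all three themes
          "p2": {"core1", "core4"},            # one theme only
          "p3": {"core2", "core3"}}            # two themes

all_themes = set(theme_of.values())
covers_all = [p for p, refs in citing.items()
              if {theme_of[c] for c in refs if c in theme_of} == all_themes]
print(covers_all)  # -> ['p1']
```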

10:45
Geopolitical Impacts on Biotech Innovation Ecosystem: A Comparative Study of Taiwan and Singapore

ABSTRACT. The shifting geopolitical landscape is creating new challenges for small economies in Asia. At the same time, the competition among major powers adds further complexity to their efforts in managing technology innovation. However, limited scholarly attention has been given to how geopolitics shapes the biotech sector in these contexts. This study addresses this research gap by conducting a comparative case study on Taiwan and Singapore, exploring the impact of geopolitics on biotech innovation in these two Asian economies. Through a mixed-methods approach that includes primary interviews and secondary document analysis, this research identifies three key factors influencing biotech innovation policies and outcomes: risk perception, international institutions, and state-capital relations. The findings show that for small economies like Taiwan and Singapore, geopolitics has become a critical driver in shaping innovation policies and fostering state-firm collaboration. However, differences in how these economies perceive geopolitical risks, nurture international institutional environments, and manage relations between the government and domestic as well as multinational capital have led to divergent strategies and outcomes in biotech innovation. In Taiwan, the heightened perception of geopolitical risk, particularly in light of cross-strait tensions, has driven a policy focus on self-reliance in biotech capabilities. The Taiwanese government prioritizes domestic talent development and technology to enhance national resilience. On the other hand, Singapore adopts a proactive and outward-looking strategy, leveraging its geopolitical stability to attract foreign investments and expertise. By positioning itself as a global biotech hub, Singapore emphasizes strong collaboration with international institutions and multinational corporations. This study offers an innovative analytical framework for understanding how small economies navigate the geopolitics of emerging technologies in high-risk and capital-intensive sectors like biotechnology. It highlights how the interplay of geopolitical risk perception, institutional support, and state-capital dynamics can drive distinct policy choices and innovation trajectories. The research offers practical insights for policymakers in small economies on optimizing innovation policies. It highlights ways to balance national interests with global collaboration to foster sustainable growth in the biotech sector.

11:00
The Transformation of Medical Device Innovation System in Taiwan and India

ABSTRACT. The medical device industry is rapidly growing, with great potential to improve healthcare outcomes worldwide. Taiwan and India have been focusing on building their medical device innovation systems to enhance competitiveness. According to the Multi-Level Perspective (MLP), transitions occur through interactions among landscape, regime, and niche. While specifics vary, niche innovations develop internal strength, and landscape changes exert pressure on the system, creating opportunities for disruption (Geels, 2002; Kemp, Rip & Schot, 2001; Kemp, Schot & Hoogma, 1998; Schot et al., 1994). How do institutional factors facilitate the transformation of the medical device innovation system in Taiwan and India? This research adopts the national biotechnology sectoral innovation system as the main framework. The study adopts a mixed-method approach, combining in-depth interviews and system dynamics. The results show that Taiwan focuses on the development of high-tech medical devices and India on low-cost medical devices. In Taiwan, the medical device industry utilizes technological niches to create opportunities for emerging technologies and interdisciplinary integration, fostering the establishment of actor networks and contributing to transformative changes in the landscape. In India, the government has supported the medical device industry through initiatives such as the "Make in India" campaign and the establishment of medical device parks to provide a conducive environment. Overall, this paper provides insights into the strategies and initiatives that Taiwan and India have implemented to build their medical device innovation systems and offers recommendations for policymakers and industry leaders to further enhance their competitiveness in the global medical device market.

11:15
The Mutual Shaping of Policy Transformation and the Development of Artificial Intelligence Ecosystem in Taiwan

ABSTRACT. How does the high-tech sector in Taiwan deepen integration within the global innovation ecosystem? How do science and innovation policies strengthen international collaboration to foster sustainable economic and social impact? Artificial Intelligence (AI) has become a pivotal force shaping both economic growth and socio-technical transformations worldwide, especially in the context of rapidly evolving global trends and challenges. Since 2016, the United States, the European Union, Canada, and other countries have enacted policies focused on AI research, ethical governance, and competitive strategy. Taiwan has also introduced a series of AI-related policy initiatives, including the Artificial Intelligence Development Basic Act (2019), the Artificial Intelligence Basic Law (2023), and the Artificial Intelligence Fundamental Act (2024). This study explores the AI innovation ecosystem in Taiwan, focusing on how it aligns with and contributes to global science and innovation policy trends, especially in the context of U.S.-China tensions and shifting geopolitical landscapes. The study adopts a mixed-methods approach, including scientometrics, social network analysis (SNA), documentary analysis, and system dynamics, to explore the development of the AI innovation ecosystem in Taiwan. The study finds that the AI innovation ecosystem in Taiwan is developing rapidly, demonstrating competitiveness in areas like hardware manufacturing and intelligent biomedicine. However, Taiwan faces challenges in enhancing its AI software development capabilities. This paper analyzes the system dynamics among regimes, technology niches, and the landscape of the artificial intelligence ecosystem. Additionally, this research sheds light on the systemic transformation of Taiwan’s technological soft power and economic competitiveness.

11:30
The Transformative Mediating Role of Incubation Centers in the Entrepreneurship Ecosystem

ABSTRACT. How does the innovation and entrepreneurship ecosystem evolve? What role do incubation centers play in the emerging entrepreneurship ecosystem? This research investigates the evolution of incubators and start-up accelerators, focusing on their mediating roles in fostering innovation and entrepreneurship while navigating complex policy and system dynamics. Since the late 1990s, Taiwan’s academic incubation centers have transitioned from entrepreneurial space providers to entrepreneurship platforms, bridging research outcomes and entrepreneurial ventures. The study examines three types of start-up accelerators: university-based incubation centers, research institute-based incubation centers, and enterprise-operated accelerators. Combining mixed-method approaches, including 20 in-depth interviews, documentary analysis, system dynamics analysis, and social network analysis, this research explores the transformation of start-up accelerators and the mediating roles these knowledge brokers play in the emerging entrepreneurship ecosystem. The results show that Taiwan has effectively promoted technological innovation and university-industry collaboration since 1998 by establishing academic incubation centers and technology transfer offices. However, after 2016, shifts in government funding priorities and policy changes forced the incubation centers to transform into start-up accelerators. These policy changes facilitated the dynamic evolution of the entrepreneurship ecosystem. We also found that incubation centers alone are not sufficient for incubating high-tech startups; in science-based sectors, enabling firms to play more effective brokerage roles may be a more successful strategy for developing a science-intensive sector. At the macro level, based on a multi-level perspective framework, this research provides insights into the mutual shaping of innovation policies and the development of the entrepreneurship ecosystem. The regime plays a central role in responding to landscape pressures, while niches serve as platforms for experimentation and innovation that can carry broader transformations. Ultimately, this research offers policy recommendations for speeding up knowledge transfer, enhancing academia-industry collaborations, and strengthening the innovation and entrepreneurship ecosystem.

10:30-12:00 Session 16B: Industries and Innovation
Location: Room 233
10:30
STI policy to take on the challenges for realizing the transformative territorial development potential of small-scale rural agro-industries in El Salvador

ABSTRACT. El Salvador, a small and neo-peripheral country in the Global South, faces significant challenges in transitioning toward a sustainable, inclusive, and resilient knowledge-based economy. These challenges are exacerbated by the fragmentation and fragility of its national Science, Technology, and Innovation (STI) system, which lacks territorial reach, inter-sectoral coordination, and connections to international centers of excellence (Arocena & Sutz, 2010; Dutrénit & Teubal, 2011; Szogs et al., 2011). This study explores the transformative potential of small-scale rural agro-industries in fostering territorial development, addressing systemic innovation barriers, and advancing sustainable economic models. It focuses on two central research questions: (1) How can small rural agro-industries overcome structural barriers to innovation? (2) To what extent are science, technology, and innovation (STI) policies addressing these barriers and supporting their development? These questions are discussed in an exploratory way, leveraging long-term in-depth case studies of innovative small-scale agro-industries in El Salvador and new findings from research revealing the potential, but also the significant limitations, of initiatives derived from national STI and innovation-for-development policies, as well as the important but limited role of key national system of innovation actors. This research applies the novel theoretical and analytical framework being developed by a team of Ibero-American researchers in the context of the CYTED-LALICS network project, which focuses on STI policies tailored to address national challenges.

Emblematic case studies
ACOPANELA demonstrates the balance of tradition and innovation, introducing granulated panela to meet dynamic market demands while preserving cultural heritage. By stabilizing prices and supporting family producers, the cooperative strengthens local livelihoods and safeguards the identity of panela production. Similarly, APRAINORES has driven transformative changes through fair-trade and organic certifications, adopting sustainable production practices, and constructing a medium-sized processing plant. These initiatives have not only delivered significant social and economic benefits to members but also addressed regional environmental challenges. Both cases illustrate how small-scale agro-industries have accessed resources and advanced knowledge through collaborations with organizations such as EMBRAPA in Brazil and CIMPA in Colombia. Support from entities like the Inter-American Development Bank’s Multilateral Investment Fund (IADB-MIF) and NGOs has been instrumental. However, the leadership and organizational capacity of local actors, including educated youth, have proven essential in leveraging these resources to sustain innovation and competitiveness (Cummings 2007, 2009, Cummings & Cogo 2016, Cummings & Peraza 2024, Peraza & Cummings forthcoming).

Public sector STI capabilities
The National Center for Agricultural and Forestry Technology (CENTA) has played a pivotal role in fostering innovation within small-scale rural agro-industries. Through participatory extension programs, CENTA has involved farmers in decision-making, built trust in agricultural innovations, and promoted the adoption of locally adapted technologies. Farmer Field Schools have advanced key practices such as soil conservation, crop diversification, and sustainable resource management, enhancing farmers’ productivity and resilience to climatic and market challenges.
Furthermore, CENTA’s gender-focused programs have encouraged women’s active participation in training and governance, embedding gender-sensitive approaches into rural development strategies and expanding the impact of capacity-building initiatives (Hobbs et al. 1997). Despite these achievements, CENTA faces significant systemic challenges. Inadequate funding, fragmented coordination between research and extension, and the absence of tailored public policies hinder its ability to support transformative innovation. Additionally, there is limited infrastructure for value addition and industrialization, which restricts small-scale agro-industries’ access to competitive markets (Hobbs et al. 1997).

University contributions and limitations
Public and private universities have provided valuable support to small-scale rural agro-industries through applied research and technical advisory services. Targeted projects have addressed critical issues such as crop genetics, soil management, and value-added techniques. However, these efforts remain sporadic and overly dependent on external funding, limiting their broader applicability and long-term impact. The potential for universities to act as connectors between local needs and global knowledge remains underutilized (Cummings 2016).

Weak implementation of relevant STI policies
The study also evaluates the broader context of STI policies in El Salvador. The creation of the National Policy for Innovation, Science, and Technology (ICyT) and related initiatives, such as the Vice-Ministry of Science and Technology and the transformation of CONACYT, represented progress. However, these efforts have not been fully implemented or integrated. Structural deficiencies, including a lack of sustained political will and fragmented policy instruments, continue to undermine the sector’s potential. A case in point is the conceptualization of the Agro-Industrial Technology Park (PTA), which aimed to foster collaboration among academia, the private sector, and government. Despite its promise to drive rural innovation through knowledge transfer and entrepreneurship, the initiative relied heavily on external funding from entities like the IADB, which was not secured, leaving the project unrealized (Cummings 2015).

Policy implications and recommendations
The findings underscore the need for a comprehensive STI policy framework that integrates strategies for territorial economic development and productive transformation. Such a framework must prioritize collaboration across academia, government, private sectors, and international partners to leverage resources and expertise effectively. Key priorities include strengthening the territorial reach of STI policies, fostering capacity-building at the local level, and decentralizing resources to empower small-scale agro-industries. To address existing systemic barriers, STI policies must focus on improving coordination between research and extension, providing targeted funding for infrastructure, and tailoring initiatives to the unique needs of small-scale rural agro-industries. By aligning local capacities with global technological advancements and fostering inclusive governance models, these industries can transition from fragmented efforts to integrated drivers of sustainable and equitable development.

Initial conclusion
Small-scale rural agro-industries in El Salvador hold immense potential to serve as catalysts for environmentally sustainable and inclusive territorial development.
However, realizing this potential requires overcoming structural barriers through integrated STI policies and coordinated action. Public institutions like CENTA, universities, and international partners must work in concert to address critical gaps in funding, infrastructure, and policy implementation. By leveraging their resilience, tradition, and capacity for innovation, small-scale agro-industries can contribute significantly to a sustainable rural economy, enhancing national productivity and social equity.

10:45
Too Poor To Make a Difference in Science

ABSTRACT. Introduction
The pursuit of improved living standards and economic opportunities in low-income countries has been a longstanding focus of scholarly debate (Peters et al., 2008). Central to this discourse is the recognition that technological innovation and active engagement in scientific knowledge production are vital for achieving meaningful and sustainable development (Yunus, 1998; Whitworth et al., 2008). Yet, empirical evidence consistently demonstrates that high-income countries dominate the global science network, while low-income countries remain increasingly marginalized (Schott, 1998; Ribeiro et al., 2018). Research productivity, which reflects a researcher’s capacity to generate knowledge, is widely regarded as a critical driver of international collaboration (Abramo et al., 2017). However, despite evidence showing that researchers in low-income countries are not consistently less productive than their high-income counterparts (Lee and Bozeman, 2005), it appears that a country’s income level plays an even more significant role in shaping its integration into global scientific networks.

Model
To address this phenomenon, we construct a theoretical model of country reputation in the global science network using a Cobb-Douglas function:

R_i = φ_i^{α(Q_i)} · Q_i^β · u_i^ν

where R_i represents a country's research reputation in the scientific network, reflected by the number of collaborations it can attract. φ_i denotes researcher productivity, while Q_i is the quality of the science system, which reflects a country's capacity to invest in human and physical capital and to support scientific social activities, and can be proxied by the country's income level. u_i captures other unobserved idiosyncrasies, and α, β, and ν are the input elasticities of the determinants. The reputation function illustrates that the quality of the science system Q_i and researcher productivity φ_i are substitutable in building country reputation R_i. It also indicates that the impact of researcher productivity φ_i on reputation is moderated by the quality of the science system Q_i: because Q_i enters the exponent of researcher productivity φ_i, it can enhance or constrain the contribution of φ_i to attracting international collaboration and building country reputation R_i. The substitutability between the quality of the science system and researcher productivity is therefore itself moderated by the quality of the science system. Consequently, we hypothesize that in low-income countries, researcher productivity does not sufficiently compensate for deficiencies in the quality of the scientific system. This means that countries with a low-quality science system face barriers to participating in international collaborations, regardless of their research productivity.
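To make the moderation mechanism concrete, here is a toy numerical illustration (not from the paper) with an assumed functional form for α(Q) that increases in Q; all parameter values are arbitrary.

```python
# Toy illustration: with alpha increasing in Q, the same productivity gain
# raises reputation far more in a high-Q science system than in a low-Q one.
def reputation(phi, Q, alpha0=0.5, beta=0.4, u=1.0, nu=0.1):
    alpha = alpha0 * Q / (1 + Q)   # assumed moderating form: alpha grows with Q
    return phi**alpha * Q**beta * u**nu

for Q in (0.2, 5.0):               # low- vs high-quality science system
    gain = reputation(2.0, Q) / reputation(1.0, Q)
    print(f"Q={Q}: doubling productivity multiplies reputation by {gain:.2f}")
# Q=0.2 -> roughly 1.06x; Q=5.0 -> roughly 1.33x under these assumed parameters
```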

Data and Method
We use journal publication information from Scopus and GDP per capita data from the World Bank. Between 2000 and 2022, we identified 1,965,642 publications from 1,671,837 authors in 184 countries in the field of Business and Economics. To identify potential structural breaks in the substitutability between the quality of the science system and research productivity in shaping a country's reputation in science, we utilize the threshold approach of Hansen (1999). The regression model obtained by log-transforming the Cobb-Douglas equation is:

DC_{i,t+1} = β_0 + β_1 GDPpc_{it} + β_2 I(GDPpc_{it} ≤ γ) CIT_{it} + β_3 I(GDPpc_{it} > γ) CIT_{it} + β_4 C_{it} + μ_i + ε_{it}

We use degree centrality (DC) to proxy country reputation in the scientific network, with a one-year lead. GDP per capita (GDPpc) is the indicator for the quality of the science system. The aggregated number of citations (CIT) measures research productivity at the country level. Following Hansen (1999), we include an indicator function I, which equals one if GDPpc ≤ γ (the income threshold) and zero otherwise. By incorporating the complementary indicator function I(GDPpc > γ), we differentiate the slope coefficients for CIT: β_2 applies to countries with GDPpc below γ, while β_3 applies to those above the threshold. C represents the size of the science system, measured by the number of researchers. Additionally, we include a country fixed effect μ_i. The analysis is implemented at the country level.
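A minimal grid-search sketch of this Hansen (1999)-style threshold estimation is shown below; the DataFrame, its column names (dc_lead, gdppc, cit, size, country), and the use of country dummies to absorb μ_i are assumptions for illustration, not the authors' estimation code.

```python
# Grid-search the threshold gamma that minimizes the sum of squared residuals.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("country_panel.csv")  # assumed columns: country, year, dc_lead, gdppc, cit, size

def fit_at(gamma):
    d = df.assign(below=(df["gdppc"] <= gamma).astype(float))
    d["above"] = 1.0 - d["below"]
    # country dummies absorb the fixed effect mu_i; below/above split the CIT slope
    return smf.ols("dc_lead ~ gdppc + below:cit + above:cit + size + C(country)",
                   data=d).fit()

grid = np.quantile(df["gdppc"], np.linspace(0.05, 0.95, 91))  # candidate thresholds
gamma_hat = min(grid, key=lambda g: fit_at(g).ssr)
best = fit_at(gamma_hat)
print(gamma_hat, best.params.filter(like="cit"))  # regime-specific CIT slopes
```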

Results
Countries with higher GDP per capita tend to attract more collaboration partners. However, only countries with a GDP per capita above the estimated threshold of USD 713 show a positive correlation between the citations of their researchers and the number of international collaboration partners. For countries with a GDP per capita below this threshold, the citation count of their researchers does not affect the number of international collaborators. In essence, while the quality of a science system is generally positively related to a country's scientific reputation, the relationship between researcher productivity and scientific reputation is more complex. When a country's income level exceeds USD 713, the productivity of its researchers becomes positively linked to its scientific reputation, although this positive effect diminishes beyond a certain productivity level. Conversely, countries with an income level below this threshold struggle to establish their scientific reputation, regardless of their researchers' productivity. In other words, their research productivity cannot compensate for the low quality of their science system.

References:
Abramo, G., D’Angelo, A. C., and Murgia, G. (2017). The relationship among research productivity, research collaboration, and their determinants. Journal of Informetrics, 11(4):1016–1030.
Hansen, B. E. (1999). Threshold effects in non-dynamic panels: Estimation, testing, and inference. Journal of Econometrics, 93(2):345–368.
Lee, S. and Bozeman, B. (2005). The impact of research collaboration on scientific productivity. Social Studies of Science, 35(5):673–702.
Peters, D. H., Garg, A., Bloom, G., Walker, D. G., Brieger, W. R., and Hafizur Rahman, M. (2008). Poverty and access to health care in developing countries. Annals of the New York Academy of Sciences, 1136(1):161–171.
Ribeiro, L. C., Rapini, M. S., Silva, L. A., and Albuquerque, E. M. (2018). Growth patterns of the network of international collaboration in science. Scientometrics, 114:159–179.
Schott, T. (1998). Ties between center and periphery in the scientific world-system: Accumulation of rewards, dominance and self-reliance in the center. Journal of World-Systems Research, pages 112–144.
Whitworth, J. A., Kokwaro, G., Kinyanjui, S., Snewin, V. A., Tanner, M., Walport, M., and Sewankambo, N. (2008). Strengthening capacity for health research in Africa. The Lancet, 372(9649):1590–1593.
Yunus, M. (1998). Alleviating poverty through technology. Science, 282(5388):409–410.

11:00
Legitimation Strategy against Socio-economically Disadvantaged Incumbents in a Nascent Industry: The Case of the Korean Mobility Service Industry

ABSTRACT. Nascent industries compete with incumbent industries, and in the process of this competition, conflicts between them may intensify. Recently, the emergence of digital platform technology has amplified this competition, because nascent industries based on digital platforms encroach on existing markets at a faster rate than ever before. As competition intensifies, it sometimes becomes a social issue; a typical example is the mobility service industry. Nascent industry participants launch products and services with new functions that an incumbent industry does not provide, offering customer-oriented products and services as a differentiation strategy. These new products and services can spread rapidly, but nascent industries still face many difficulties in overcoming incumbent industries. Incumbent industries have long created industrial and technological systems and infrastructures to suit their respective business models. Consequently, incumbent industry participants have strong market power and can impede the diffusion of new technologies and business models (Adner & Kapoor, 2016; Chang & Wu, 2014; Geels, 2004). The legitimation process plays a key role in the growth of a nascent industry. Legitimation refers to the process by which nascent industry products and services are widely accepted and recognized as legitimate by society (Aldrich & Fiol, 1994). Specifically, participants in the nascent industry need to form their own identity through collective cooperation to be recognized as a visible entity in society (Georgallis et al., 2019; Liu & Guenther, 2024). Furthermore, they can undermine the incumbent industry’s legitimacy by giving it a negative image, thereby creating a de-legitimation effect and gaining public support (Ferns et al., 2022; Thomas & Ritala, 2022). Nascent industry participants utilize a variety of strategies to legitimize a nascent industry; however, these strategies may not be effective in some cases. The effectiveness of a legitimation strategy can vary depending on the social and economic conditions of competing incumbent industries. However, prior studies have mostly examined legitimation strategies against socio-economically advantaged incumbents, that is, groups with strong social status and economic resources. For example, Georgallis et al. (2019) showed that the solar industry, as a nascent industry, was able to significantly increase its legitimacy by competing with a socio-economically advantaged incumbent (i.e., existing fossil fuel energy firms), consequently gaining government support. The incumbent fossil fuel energy industry had such a strong market and political position that the government could support the solar energy industry without social criticism. Another example is the finance industry, where prior studies analyzed how challengers gain social support by criticizing socio-economically advantaged incumbents (Budd et al., 2019; Roulet, 2015). However, little research has been conducted on the case of socio-economically disadvantaged incumbents (i.e., groups with weak social status and economic resources). This study aims to analyze the legitimation strategy adopted by nascent industry participants and the counter-strategy of incumbent industry participants to investigate the effects of the social and economic conditions of incumbents.
In particular, we examine the collective cooperation and de-legitimation efforts of nascent industry participants against a socio-economically disadvantaged incumbent competitor. For this purpose, we use the Korean mobility service industry as a case study. In recent years, the Korean mobility service industry has experienced the exit of various carpooling services. A prominent ride-sharing service, Tada Basic, also exited the market due to competition with the incumbent taxi industry; Tada Basic’s exit became an important social issue in Korea. In the mobility service domain, we treat (i) the taxi industry as the incumbent industry and (ii) mobility platform industries that provide carpooling and ride-sharing services as the nascent industry. We conducted a longitudinal case study to investigate the events that occurred among diverse stakeholders in the mobility service industry from August 2013 to June 2023 (Yin, 2009). A longitudinal case study based on historical events was deemed appropriate due to the lack of research on the competitive relationships between early players in the industry and new entrants, and the lack of quantitative data when an industry is just beginning to take shape. Our results show that the nascent industries (i.e., ride-sharing and carpooling industries) invested in their legitimation process to increase the social acceptance of their new services while adopting a differentiation strategy that introduced various product innovations to provide users with new benefits that incumbent industries had not provided. To facilitate this legitimation process, nascent industry participants attempted to develop collective cooperation and to undermine the legitimacy of the incumbent taxi industry (i.e., de-legitimation) (Ferns et al., 2022; Georgallis et al., 2019; Liu & Guenther, 2024). However, these strategies were not effective against socio-economically disadvantaged incumbents. The current study makes two contributions. First, we analyze the process of collective cooperation when nascent industry participants compete with a socio-economically disadvantaged incumbent. Previous studies highlighted that nascent industry participants share a common goal of overcoming incumbents, and therefore it should be relatively easy for them to establish collective cooperation against incumbents (York et al., 2016). However, nascent industry participants’ diverse interests and opinions may not converge when they compete with socio-economically disadvantaged incumbents. In this situation, nascent industry participants should adopt a long-term perspective from the beginning of industry formation and undertake continuous efforts to coordinate their interests (Choi et al., 2011). Second, we analyze the instances in which a de-legitimation strategy works on socio-economically disadvantaged incumbents. Many prior studies demonstrated the merits for nascent industry participants of deploying de-legitimation strategies against socio-economically advantaged incumbents. However, when incumbent participants are a socio-economically disadvantaged group, as in the case of the Korean taxi industry, de-legitimation operates as a complex social process, and its effects differ from those observed in the conventional cases analyzed in the previous literature.
This study shows that nascent industry participants must analyze the social and economic conditions of incumbent industry participants when implementing de-legitimation strategies against them.

10:30-12:00 Session 16C: Exploration and Growth in Research
Location: Room 225
10:30
Assessing the exploration of basic research: An objective ex-ante measurement and project-level influencing factors
PRESENTER: Baicun Li

ABSTRACT. Introduction
Exploratory basic research is key to scientific and technological progress, yet effectively supporting such research remains a core challenge in science funding (Bollen et al., 2013; Hicks, 2012; Ioannidis, 2011). Existing evaluations of basic research exploration can be categorized into two approaches: ex-ante, based on proposal data, and ex-post, focusing on research outputs. Ex-post evaluation has been expanded through scientometric studies, which have deepened insights into related concepts such as novelty, creativity, and innovation (Chen & Ding, 2023; Foster et al., 2015; Huang et al., 2022; Jeon et al., 2023; Lee et al., 2015; Luo et al., 2022; Matsumoto et al., 2021; Uzzi et al., 2013; Yang & Wang, 2024). In contrast, ex-ante evaluation remains largely dependent on subjective peer review. There is evidence suggesting that peer review may hinder research exploration, due to biases and noise, thereby affecting the effectiveness of funding (Banal-Estanol et al., 2015; Boudreau et al., 2016). In response, we propose an objective ex-ante measure that evaluates the exploration of basic research through text mining of grant proposals, and we further analyze the project-level factors that influence research exploration using the CatBoost algorithm and a SHAP-based interpretation framework. Specifically, this study addresses the following research questions: (1) How can the exploration of basic research be measured from an objective, ex-ante perspective? (2) What project-level factors influence the measured exploration of basic research, and what impacts do they have?

Research framework
(1) Measuring the exploration of basic research through text mining of grant proposals
According to the theory of knowledge recombination, new knowledge emerges from atypical combinations of existing knowledge elements, and quantifying such atypicality is the key to measuring exploration. Unlike publication data, basic research grants typically lack explicit citation networks, necessitating an alternative approach focused on the research content itself, which is reflected in the titles and abstract texts. Therefore, this study uses text mining techniques to assess the exploration of basic research grants, in two main steps.
a) Extracting semantic vectors and constructing the knowledge combination space. We begin by utilizing the SciBERT model along with the KeyBERT method to extract the top five bigram keyword-level semantic vectors representing the research content of each grant. Next, we generate ten combined keyword semantic vectors by averaging the semantic vectors for each possible pair of keywords. Finally, we aggregate all of these combined keyword semantic vectors across grants to construct a knowledge combination space. This approach is applied to all grants awarded by the NSFC and the NSF from 1995 to 2019, with the data collected from the Dimensions database after necessary preprocessing.
b) Calculating the exploration of the sample grants. For each combined keyword vector associated with NSFC-funded grants approved between 2005 and 2019, we traverse the historical knowledge combination space, restricted to grants funded within the ten years preceding the target grant, to identify the ten most similar combined keyword semantic vectors. Subsequently, for each sample grant, we calculate the typicality of the target keyword combinations, using the 10th percentile result as a measure of typicality at the grant level. Finally, we transform the grant-level typicality into an exploration measurement.
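A minimal sketch of steps a) and b), assuming a Hugging Face SciBERT checkpoint loadable through sentence-transformers and a simple inversion as the final typicality-to-exploration transform; this is an illustration under those assumptions, not the authors' pipeline.

```python
# Sketch: grant exploration from pairwise keyword-combination atypicality.
from itertools import combinations
import numpy as np
from keybert import KeyBERT
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

# Assumption: the SciBERT checkpoint can be wrapped as a sentence encoder.
encoder = SentenceTransformer("allenai/scibert_scivocab_uncased")
kw_model = KeyBERT(model=encoder)

def combined_vectors(text):
    # Top five bigram keywords -> 10 pairwise-averaged "combination" vectors
    keywords = [kw for kw, _ in kw_model.extract_keywords(
        text, keyphrase_ngram_range=(2, 2), top_n=5)]
    vecs = encoder.encode(keywords)
    return np.array([(vecs[i] + vecs[j]) / 2
                     for i, j in combinations(range(len(vecs)), 2)])

def exploration(grant_text, historical_space):
    # historical_space: matrix of combination vectors from the prior 10 years
    sims = cosine_similarity(combined_vectors(grant_text), historical_space)
    top10 = np.sort(sims, axis=1)[:, -10:]              # 10 nearest combinations
    typicality = np.percentile(top10.mean(axis=1), 10)  # 10th percentile, grant level
    return 1 - typicality  # assumed transform: higher value = more exploratory
```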
(2) Analyzing the project-level factors influencing basic research exploration
a) Identifying the feature variables of basic research grants. Taking the NSFC grants as the research sample, we identify five key project-level feature variables: grant type, supporting department, grant approval year, type of awarded institution, and status of awarded institution. Each of these variables is further classified into specific categories to facilitate analysis.
b) Analyzing the potential impacts on exploration. This study employs the CatBoost algorithm because traditional linear regression models are not well-suited to capturing relationships between categorical variables and the dependent variable, and often fail to discern the varying effects of these variables. Specifically, the NSFC grant sample is divided into a training set and a validation set at an 8:2 ratio. The CatBoost regressor is trained on the training set, with the learning rate set to 0.1 and other parameters left at their default values. Since the CatBoost regressor is a "black-box" model, the SHAP (SHapley Additive exPlanations) framework is applied to interpret the regressor on the validation set.

Main results
(1) The objective, ex-ante measurement of basic research exploration serves as an effective supplement to the peer review mechanism of science funding. The indicator performs well in capturing differences in grant exploration across varying characteristics. Additionally, the measurement has reasonable generalization capability and can be applied to assess basic research grants funded by other agencies, with potential accuracy improvements from expanding the sample size.
(2) Analyzing the project-level influencing factors of basic research exploration, we found strong effects of grant-specific features on exploration. For instance, grant exploration gradually decreases as the approval year becomes more recent; grants in application-oriented disciplines exhibit higher exploration than those in more theoretical disciplines; and conventional grant types like the Young Scientists Fund and General Program have higher exploration than large-scale or talent-focused types, with talent-focused grants showing the lowest exploration. Additionally, grants awarded to leading public research institutes exhibit lower exploration than those awarded to top universities. These findings are significant for funding agencies seeking to optimize funding mechanisms.
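The following is a minimal sketch of the CatBoost-plus-SHAP step described in the research framework; the input file and column names are hypothetical, and only the 8:2 split and the learning rate of 0.1 come from the abstract.

```python
# Sketch: train a CatBoost regressor on categorical grant features, then
# interpret it with SHAP on the held-out validation set.
import pandas as pd
import shap
from catboost import CatBoostRegressor, Pool
from sklearn.model_selection import train_test_split

df = pd.read_csv("nsfc_grants.csv")  # hypothetical file: one row per grant
cat_features = ["grant_type", "department", "approval_year",
                "institution_type", "institution_status"]
X, y = df[cat_features], df["exploration"]
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=0)

model = CatBoostRegressor(learning_rate=0.1, verbose=False)  # other params default
model.fit(X_tr, y_tr, cat_features=cat_features)

# SHAP values interpret the fitted "black-box" regressor on the validation set
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(Pool(X_va, cat_features=cat_features))
shap.summary_plot(shap_values, X_va)
```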

10:45
Predicting which large and growing areas of research will decline

ABSTRACT. This study focuses on a relatively unexamined phenomenon: large and historically high-growth research areas that subsequently experience a significant drop in growth rates. This phenomenon is critical to very practical issues such as career planning and funding. For instance, PhD students who join research communities that subsequently decline may not be able to find employment after they graduate. This has happened in some areas of biology and is currently happening in some areas of computer science.

In previous work we have developed methods to predict which research communities (RCs) will experience exceptional growth. RCs are identified in a global map of science created using direct citation clustering of the full Scopus database. In this study, we use a similar model to predict which large and growing RCs will experience a significant decrease in growth rate.

The model used in this study contains 85,160,540 documents clustered into 93,418 RCs using the Leiden algorithm. Of these, 51,785,950 documents from 1980-2018 are Scopus-indexed. The remainder are non-indexed documents that are cited at least twice by the indexed documents, and for which we can retrieve sufficient identifying metadata from the citing records.

Data from the next five Scopus file years (2019-2023) were added to the RCs in this model using the method detailed in Boyack & Klavans (2022). This included not only papers published from 2019-2023, but also earlier papers that had not yet been cited enough to be assigned to an RC. Additional non-indexed documents that were cited by these more recent documents were also added to the RCs. In total, 22,820,168 documents were added, of which 17,844,566 were indexed documents, and 4,975,602 were non-indexed documents. The five-year annual growth rate of each RC from 2018-2023 was calculated using this final model.

For prediction, only the data in the original model (through 2018) was used. Thus, no forward information was used to predict the growth rate that would be experienced as of 2023.

Large RCs were identified as those with at least 1000 indexed publications as of 2018 and at least 100 indexed publications in the year 2018. This resulted in a sample of 6,293 RCs. The CAGR of this group, from 2013 to 2018, was 3%. We further subsetted this group to those with >6% CAGR, resulting in a sample of 2,185 RCs. Two thresholds were then used to identify which of these RCs experienced a significant decrease in growth rate: the CAGR from 2018-2023 had to be less than 6% and the decrease in CAGR (2013-2018 to 2018-2023) had to be greater than 6%. 758 RCs met these criteria and were classified as “large, growing RCs with future decline”.
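As a sketch of this classification step, the following applies the stated CAGR thresholds to a hypothetical table of RC publication counts; the column names and the five-year CAGR convention are assumptions, and the size filters (1000 total / 100 in 2018) are omitted for brevity.

```python
# Classify "large, growing RCs with future decline" using the stated thresholds.
import pandas as pd

rcs = pd.read_csv("rc_counts.csv")  # hypothetical columns: rc_id, n2013, n2018, n2023

def cagr(start, end, years=5):
    # compound annual growth rate between publication counts
    return (end / start) ** (1 / years) - 1

rcs["cagr_13_18"] = cagr(rcs["n2013"], rcs["n2018"])
rcs["cagr_18_23"] = cagr(rcs["n2018"], rcs["n2023"])

growing = rcs[rcs["cagr_13_18"] > 0.06]            # the 2,185-RC subset in the paper
declined = growing[(growing["cagr_18_23"] < 0.06) &
                   (growing["cagr_13_18"] - growing["cagr_18_23"] > 0.06)]
print(len(declined))                               # 758 RCs met these criteria
```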

Fifty-seven independent variables computed from the 2018 version of the model (to avoid future information) were tested to see which would do best at predicting the 758 RCs that experienced a significant decline in growth rate. The majority of these variables were time series related to size, authors, and relationships between RCs. The best overall predictor was reference vitality, which is described in Klavans et al. (2020). This variable is also the key variable that we used 30 years ago to effectively identify mature fields for SmithKline Beecham (Norling, Herring, Rosenkrans Jr., Stellpflug, & Kaufman, 2000) when recommending which fields were ripe for budget cuts.

The likelihood of randomly nominating a declining RC within this set was 34.7% (758/2185). Thus, to be effective, our predictions must be much better than this. They are: our predictions were 90% accurate for the top 10 nominations, 80% accurate for the top 20 nominations, and 68% accurate for the top 100 nominations. Additional details about the top 10 nominations are provided below.

RC     Field  CAGR 2013-18 (%)  CAGR 2018-23 (%)  Change  Decline  Phrase
6686   Bio    24.2              -5.5              -29.7   Yes      Draft genome sequence
1897   Comp    6.3              -6.2              -12.6   Yes      Versatile video coding
1248   Comp    8.3              -5.9              -14.2   Yes      Fiber inter-core crosstalk
1114   Soc    29.8               7.0              -22.8   No       Big data
4990   Comp   13.6              -3.2              -16.8   Yes      Lossy networks
13645  Med     8.2               0.6               -7.6   Yes      Xpert MTB/RIF
1030   Comp   20.0              -0.9              -20.9   Yes      Android malware
18416  Chem   26.8              -1.2              -28.0   Yes      Graphene aerogels
323    Comp   28.5              -0.7              -29.1   Yes      Control plane
4927   Eng    11.2               1.5               -9.7   Yes      Li-O2 batteries

We are currently developing thick descriptions for each of these research communities to provide better insights into why they underwent this unusual pattern of extreme growth and decline. For example, the first research community (6686) focuses on the development of bioinformatics tools (such as VFDB, PATRIC, Bactopia, and Bakta) for analyzing bacterial genomes. Significant publication growth is associated with the building of these tools; the rate of publication drops once the tools are created. We are currently investigating whether the literature that cites the use of these tools tends to become dispersed into specialized applications, such as the genome of a probiotic bacterium or the genome of a virulent bacterium in a hospital setting. Detailed descriptions of multiple examples will be given in the oral presentation.

References

Boyack, K. W., & Klavans, R. (2022). An improved practical approach to forecasting exceptional growth in research. Quantitative Science Studies. doi:10.1162/qss_a_00202

Klavans, R., Boyack, K. W., & Murdick, D. A. (2020). A novel approach to predicting exceptional growth in research. PLoS ONE, 15(9), e0239177. doi:10.1371/journal.pone.0239177

Norling, P. M., Herring, J. P., Rosenkrans Jr., W. A., Stellpflug, M., & Kaufman, S. B. (2000). Putting competitive technology intelligence to work. Research Technology Management, 43(5), 23-28.

11:00
Heterogeneity Analysis of Basic Research Undertaken by Universities and National Research Institutions: Research Based on NSFC General Program Data
PRESENTER: Yifan Huang

ABSTRACT. 1 Research Questions
Universities and national research institutions (NRIs) are key players in conducting basic research. Universities primarily focus on curiosity-driven research while serving as hubs for talent cultivation. NRIs are structured entities invested in and aligned with critical areas of national development; they embody the state's strategic intent, engaging in organized, large-scale research activities. The two differ significantly in their strategic orientation, research organization model, and resource allocation approaches. This study aims to analyze the heterogeneity between universities and NRIs in conducting basic research. The study selects the C9 League universities (C9 League), China's first alliance of top-tier universities, as a representative sample of universities, and the research institutes under the Chinese Academy of Sciences (CAS institutes), the country's largest and most advanced national research institution in the natural sciences, as a representative sample of NRIs. By examining General Program projects funded by the National Natural Science Foundation of China (NSFC) from 2009 to 2018 (30,457 projects in total) and their associated research publications, this study systematically analyzes the differences in research content and thematic characteristics between these two types of institutions. The study concludes that universities and NRIs follow distinct trajectories in basic research, driven by their heterogeneity, mechanisms, and influencing factors. The findings offer empirical support for optimizing resource allocation and research policies.
2 Methodologies
2.1 Data Sources
The input data for this study come from the NSFC funding data provided by Cingta for the years 2009-2018, while the output data are sourced from NSFC-linked academic output data in Dimensions. Considering the time lag between funding input and output, we utilized data spanning 2009-2024. These data were used to analyze the activity levels of specific research topics and the structure of institutional collaboration networks.
2.2 Data Processing
This study employs a data-driven research approach to extract useful and valid information from extensive datasets. Irrelevant or low-quality data are filtered out, and external databases such as SciVal are matched and linked to construct a comprehensive General Program input-output dataset. Additionally, external indicators of novelty and disruptiveness are introduced to enrich the analytical dimensions and provide deeper insights into the data.
2.3 Indicator Construction
This study establishes metrics across three dimensions, interdisciplinary collaboration, topic features, and degree of innovation, to comprehensively measure and analyze research output and its characteristics. Specifically, the analysis uses the following indicators: interdisciplinary proportion, disciplinary count, topic number count, topic number diversity, topic prominence, percentile, novelty, and disruptiveness.
3 Research Findings
3.1 Input Analysis
Earth Sciences: The number of General Program approvals for CAS institutes is approximately three times that of the C9 League. This is attributed to the CAS institutes' access to more advanced experimental equipment, observation networks, and computational facilities, which support large-scale and long-term research projects. These resources also facilitate extensive fieldwork and long-term monitoring of observation station data.
Life Sciences: CAS institutes have consistently received more project approvals than universities in this field. This stems from the need for larger, specialized research teams to conduct resource-intensive, high-cost experiments, which depend heavily on deep expertise and resource advantages.

Mathematical and Physical Sciences & Information Sciences: The number of General Program approvals for the C9 League far exceeds that of CAS institutes, which can be linked to the universities' organizational structure. As single legal entities, universities can effectively coordinate research across faculties and respond flexibly to market demands, enhancing their ability to translate scientific achievements into applications. In contrast, although the Chinese Academy of Sciences is large and rich in research resources, its research units are independent legal entities, which makes it difficult to coordinate and integrate innovation resources.

3.2 Output Analysis. In most academic disciplines, the characteristics of basic research projects differ significantly between the C9 League and CAS institutes. Research conducted by the C9 League features a larger number of topics, greater dispersion, and higher levels of interdisciplinarity and disruptiveness, reflecting the universities' emphasis on free exploration and the pursuit of academic frontiers. In contrast, research at CAS institutes is more focused, with topic clusters that tend to be more niche, in line with their mission of conducting organized research in support of national strategies.

3.3 Results and Conclusions

3.3.1 Results. The basic research directions of universities and NRIs align closely with national science and technology strategy, and their differences in disciplinary focus highlight strong role complementarity. CAS institutes, leveraging their advantages in equipment, funding, and team resources, have achieved rapid development in fields such as Earth Sciences and Life Sciences. The C9 League, with its flexible management systems, has shown greater adaptability in Mathematical and Physical Sciences and Information Sciences. This complementarity has significantly supported the comprehensive development of Chinese basic research and provides a valuable reference for optimizing the functional division and collaborative mechanisms between universities and research institutions across disciplines. Universities prioritize curiosity-driven research, emphasizing interdisciplinary integration and publication impact, while NRIs focus on mission-oriented, application-driven research addressing national needs with field-specific specialization. Together, they form a complementary dual-track system enriching China's basic research landscape.

3.3.2 Conclusions. In policy design, support for curiosity-driven research in universities should be strengthened, and NRIs should be given a more flexible policy environment for task-oriented research. In resource allocation, the differentiated advantages of universities and NRIs should be considered, the funding allocation mechanism optimized, and the balanced, coordinated development of basic research promoted. This study highlights key differences in the research content and characteristics of universities and NRIs in basic research.
Future studies could expand the data sources, subjects, and scope to further explore their connections and differences, offering insights for the long-term development of basic research and the innovation ecosystem.
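
To illustrate the kind of indicator construction described in Section 2.3 above, the following is a minimal sketch of a topic diversity measure, computed here as Shannon entropy over the topic labels of an institution's funded publications. The abstract does not specify the exact formula used; entropy is one common operationalization, and the topic labels below are hypothetical.

    import math
    from collections import Counter

    def topic_diversity(topic_labels):
        # Shannon entropy of a list of topic labels; higher = more dispersed
        counts = Counter(topic_labels)
        total = sum(counts.values())
        return -sum((c / total) * math.log(c / total) for c in counts.values())

    # Hypothetical example: a dispersed C9 publication set vs. a more focused CAS set
    c9_topics = ["optics", "graph theory", "quantum computing", "optics", "AI"]
    cas_topics = ["remote sensing", "remote sensing", "remote sensing", "geodesy"]
    print(topic_diversity(c9_topics) > topic_diversity(cas_topics))  # True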

10:30-12:00 Session 16D: Energy Technology & Governance
Location: Room 235
10:30
Correlation Analysis of Trust in Government Branches and Acceptance of Automated Vehicles

ABSTRACT. Introduction The automated vehicle (AV) industry is steadily expanding its market size while attracting public attention. As the technology advances rapidly, the lag of related legislation and institutional arrangements is problematic. While the public expects AV technologies to reduce accidents, people also feel uncertainty or anxiety and demand democratic preparations for securing safety, equality, sustainability, and privacy. I therefore expect that public trust in the government that makes these preparations is also related to the acceptance of AVs, and state the research question: How do existing institutions lead to greater trust in AVs? This paper conducts a large-scale survey on AVs and applies the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) model to measure correlations (Pavlou, 2003). By identifying the effect of government branches, this analysis preemptively explains the possibility that AV acceptance will change politically in a democratic society and may vary depending on an individual's political orientation. Furthermore, it extends the current UTAUT2 model to contribute to the methodology of interpreting AV surveys.

Literature Review What the term AV refers to varies slightly depending on context, but in this study, AVs are fully autonomous, privately owned vehicles that no longer require a traditional driver. The acceptance of AVs has been treated as an extension of existing technology acceptance theory, so it is closely related to the concept of trust. In general, trust can be defined as the degree to which the target's behavior is expected to be consistent with one's intention. Hoff and Bashir (2015), however, distinguish dispositional, situational, initial learned, and dynamic learned trust. As a result, the object of trust shifts to the system as a whole, and the elements of trust expand to the product, the manufacturing company that guarantees it, and the national level that manages it. For this paper, the elements constituting public trust at the government level can be subdivided into the legislative, judicial, and executive branches (OECD, 2017; Trkman et al., 2023). Existing studies show that using or extending the UTAUT2 model for AV research is reliable in revealing these new correlations (Korkmaz et al., 2022; Nordhoff et al., 2020).

Method South Korea was suitable for this survey because the country is not particularly quick in preparing for the era of AVs, and the division between the legislative, judicial, and executive branches is valid. The survey was conducted from October 29 to November 11, 2024, collecting a sample of 1,864 people through a professional public opinion research institute. Taking the population ratio into account, the sample includes 575 people aged 60 or older, who are generally considered vulnerable within the transportation system. Following previous studies, this study assumes latent variables from the UTAUT2 model, such as Performance Expectancy, Effort Expectancy, Social Influence, Hedonic Motivation, and Risk, for the correlation analysis. Personal innovativeness, subjective safety, and demographic variables such as gender, age, annual driving distance, and political orientation were included in the questionnaire (de Graaf et al., 2019). Most importantly, variables of trust in companies and government branches were added, with the intention of confirming a multi-layered trust structure.
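
As a rough illustration of the measurement and structural model described above, the following is a minimal structural equation model in Python using the semopy package. The construct names follow the abstract, but the indicator variables (pe1, si1, tr_gov, etc.), the file name, and the exact specification are hypothetical; the authors' actual model may differ.

    import pandas as pd
    import semopy

    # lavaan-style description: measurement part (=~) and structural part (~)
    # PE = Performance Expectancy, SI = Social Influence, BI = Behavioral Intention;
    # TRUST bundles the multi-layered trust items (AV, company, government branches)
    MODEL_DESC = """
    PE =~ pe1 + pe2 + pe3
    SI =~ si1 + si2 + si3
    TRUST =~ tr_av + tr_firm + tr_gov
    BI =~ bi1 + bi2
    BI ~ PE + SI + TRUST + pol_orientation
    """

    df = pd.read_csv("survey_responses.csv")  # hypothetical file of item scores
    model = semopy.Model(MODEL_DESC)
    model.fit(df)
    print(model.inspect())           # parameter estimates and p-values
    print(semopy.calc_stats(model))  # fit indices (CFI, RMSEA, etc.)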

Findings Structural Equation Modeling (SEM) was used for the analysis. The primary findings support previous studies in that Performance Expectancy and Social Influence most clearly affect behavioral intention. Moreover, in addition to trust in AVs, trust in the manufacturing company and the government was significant. Personal political orientation was also an important variable, as it affects overall trust in government branches. The technology-oriented attitude called personal innovativeness was likewise related to behavioral intention.

Discussion and Conclusion The successful extension of the concept of trust to the company and government levels aligns with Hoff and Bashir's systemic trust perspective. It is noteworthy that acceptance fluctuates with the political orientation of the ruling party or of individuals. In the case of Korea, the two major parties have similar technology-oriented attitudes, so this issue has not yet been politicized. However, this preliminary result reveals the possibility that the ruling party's direction on AV policy may be treated as a political issue. Conversely, if it does not become a political issue, there is still a risk that autonomous driving technology will be introduced without gaining sufficient trust. In particular, given the demand for a fair system in the judiciary, the legal debate over AVs may undermine the multi-layered trust structure. As the introduction and public acceptance of AVs involve these political tensions, this paper calls for an expanded approach in future research.

References
de Graaf, M. M. A., Ben Allouch, S., & van Dijk, J. A. G. M. (2019). Why Would I Use This in My Home? A Model of Domestic Social Robot Acceptance. Human–Computer Interaction, 34(2), 115–173. https://doi.org/10.1080/07370024.2017.1312406
Hoff, K. A., & Bashir, M. (2015). Trust in Automation: Integrating Empirical Evidence on Factors That Influence Trust. Human Factors, 57(3), 407–434. https://doi.org/10.1177/0018720814547570
Korkmaz, H., Fidanoglu, A., Ozcelik, S., & Okumus, A. (2022). User acceptance of autonomous public transport systems: Extended UTAUT2 model. Journal of Public Transportation, 24, 100013. https://doi.org/10.5038/2375-0901.23.1.5
Nordhoff, S., Louw, T., Innamaa, S., Lehtonen, E., Beuster, A., Torrao, G., Bjorvatn, A., Kessel, T., Malin, F., Happee, R., & Merat, N. (2020). Using the UTAUT2 model to explain public acceptance of conditionally automated (L3) cars: A questionnaire study among 9,118 car drivers from eight European countries. Transportation Research Part F: Traffic Psychology and Behaviour, 74, 280–297. https://doi.org/10.1016/j.trf.2020.07.015
OECD. (2017). OECD Guidelines on Measuring Trust. Organisation for Economic Co-operation and Development. https://www.oecd-ilibrary.org/governance/oecd-guidelines-on-measuring-trust_9789264278219-en
Pavlou, P. A. (2003). Consumer Acceptance of Electronic Commerce: Integrating Trust and Risk with the Technology Acceptance Model (SSRN Scholarly Paper 2742286). https://papers.ssrn.com/abstract=2742286
Trkman, M., Popovič, A., & Trkman, P. (2023). The roles of privacy concerns and trust in voluntary use of governmental proximity tracing applications. Government Information Quarterly, 40(1), 101787. https://doi.org/10.1016/j.giq.2022.101787

10:45
How to implement mission-oriented innovation policy – The case of the German Energy Research Program

ABSTRACT. Research questions The urgency of transforming Germany's energy system has increased due to the Russian war against Ukraine, tightening emission reduction targets, and higher renewable energy capacity objectives. The German Federal Ministry for Economic Affairs and Climate Action therefore initiated a reconceptualization of its Energy Research Program (ERP) in 2023 to address the urgency of the transformation with a more mission-oriented policy approach. The ERP consists of various measures for direct R&D project funding, project funding for living labs, and institutional funding for energy-related public research facilities. Taking this practitioner's perspective on mission-oriented innovation policy, this study analyzes the implementation process, covering the actors responsible for the ERP and the stakeholders (firms, research facilities, and others) linked to ERP activities, via a document analysis and interviews (cf. Haddad, Nakić, Bergek, & Hellsmark, 2022; Schmidt, 2018). To understand this process for the project funding activities of the German ERP, the study formulated the following research questions:

• How can missions be implemented according to the goals of concrete policies?

• How can project funding instruments be operationalized and monitored based on these mission-oriented policy goals?

These questions are relevant for practical discussions of mission-oriented innovation policymaking and implementation. From a practical viewpoint, it must be discussed how relevant knowledge of the different energy fields, of the policy actors within the ministry, and of those implementing funding calls can be included. There is thus a need for a governance concept to implement a successful mission-oriented innovation policy (Ghazinoory, Ranjbar, & Saheb, 2024). Mission orientation represents a paradigm shift in innovation policymaking, which Daimer et al. (2012) describe as a "normative turn." Market and system failure rationales focus on funding innovation activities such as basic research with external effects, or on supporting learning capabilities and interface management to guarantee knowledge flows and interactions (Aghion, David, & Foray, 2009; Daimer et al., 2012; Schmidt, 2018). Wittmann et al. (2024) offer a comprehensive guide to formulating effective missions within policy frameworks, highlighting the importance of precise mission formulation and emphasizing that the success of mission-oriented policies hinges on clear, realistic, and context-specific goals. Though they provide a typology of instruments for different types of missions depending on the breadth and duration of the negotiation process (Wittmann et al., 2024, p. 26), the operationalization, and hence implementation, of missions remains somewhat unclear, as that work focuses on mission formulation and goal derivation while addressing governance less thoroughly. In general, as Ghazinoory et al. (2024) show in their literature review, research increasingly focuses on governance for new mission-oriented or transformative STI policies.
Though some insights exist on how specific types of innovation require specific governance modes and on the roles the state can exert with its innovation policy (Borrás & Edler, 2020; Ghazinoory et al., 2024; Reichardt, Rogge, & Negro, 2017), mission-orientation and transformation research requires more insight into governance challenges from practitioners' perspectives (Haddad et al., 2022). According to Hufty (2011), governance is a formal and informal, vertical and horizontal process of interface and decision-making among the actors involved in a joint problem, which can result in "the creation, reinforcement, or reproduction of social norms and institutions". Governance is neither prescriptive nor normative, nor does it presuppose vertical authority and regulatory power. This understanding is commonly used to analyze the governance of innovation policy (e.g., Fagerberg & Hutschenreiter, 2020). A program represents an aligned bundle of instruments following a specified objective. However, aligning instruments or programs with government missions and other related policies is just one of the tasks that mission-oriented innovation policies require. To account for the interviewees' activities, we used the dynamic capabilities approach for public sector agencies in innovation policy by Spano et al. (2023) and Kattel (2024).

Methodology The suitable research design for this purpose is a qualitative case study (Yin, 2018, p. 123) grounded in in-depth interviews and a document analysis, in line with established methodological frameworks (Miles et al., 2014; Saldaña, 2013). The document analysis examined public and non-public documents, following Bowen (2009), drawing on different types of documents such as public statements by the BMWK.

Findings Two-thirds of our interviews have been conducted and coded. However, the process could not be completed because the ministry has not yet implemented the governance framework according to the 8th ERP; the delays resulted from external shocks, such as the shift away from Russian natural gas and the political disruption of the then-coalition government. As of now, mission orientation demands breaking up the technological silos in the operating agencies and aligning processes with the missions in the ERP. This ensures a consistent derivation of program goals, subgoals, goals of individual measures, and KPIs; in turn, this consistency in goal derivation enables the program to be monitored, steered, and controlled along the missions. Generally, the ministerial actors faced challenges in realigning the traditionally technologically categorized funding calls to the new missions, since the entire research community is used to the former structure, and both its structures and the project operator's structure follow the "old" scheme. Thus, the first funding call mirrored this structure and is not aligned with the missions in the ERP. Nevertheless, there is political support for a mission-oriented ERP, and one cornerstone of this character, the board, was installed in October 2024, comprising scientists, representatives from industry associations, and other stakeholder groups. This board embodies reflexivity in the feedback mechanism and the external coordination of the ERP with, for example, industry. Moreover, there has been uncertainty about monitoring, steering, and controlling the program with suitable KPIs, and the process of selecting KPIs that accounts for the character of a mission-oriented learning ERP is still incomplete. Additionally, there are pilot processes within the ministry's operating agency to reshape processes and break technological silos. They will be concluded in spring 2025; the final analysis can therefore incorporate these findings for the conference in May 2025.

11:00
Clean Energy Conservatism: Attitudes Toward Renewable Energy and Nuclear Energy

ABSTRACT. The language used to describe political ideologies varies significantly across nations, reflecting diverse meanings and interpretations. This paper first maps the spectrum of political ideology groups. Here, the term "conservative" encompasses a blend of ideological stances: a push for minimal government intervention in markets (neoliberalism), adherence to traditional moral values, particularly regarding sexual and marital norms, and policies rooted in nationalism or nativism. Many nations feature both moderate conservative factions and far-right parties, or these perspectives coexist as distinct wings within a broader conservative movement. While moderate conservatives often demonstrate greater openness to energy transition initiatives and exhibit varied views on clean energy, far-right groups frequently dismiss climate science, advocating instead for continued reliance on coal, oil, and natural gas (Hess and Renner 2019; Mayer 2019).

This paper examines the spectrum of clean energy conservatism by analyzing people's attitudes toward renewable energy (RE) and nuclear energy (NE). Clean energy conservatism has already been discussed in the literature. For example, Lee and Hess (2024) discuss how conservative party members are connected with fossil fuel industries through lobbying; for conservative party members, the energy transition is therefore tightly connected with economic incentives. Similarly, Hess and Pride Brown (2017) found that the key frames used by conservative party members who support clean energy tended to focus on job creation, economic innovation, affordability, and tax revenue. Following this logic, this paper asks the following research question: how is clean energy conservatism displayed in South Korea, and how strong are the connections between clean energy conservatives and economic benefits?

This paper analyzes two types of energy, RE and NE, and how people's attitudes toward them change depending on political inclination. It uses survey data collected in South Korea in 2024. Respondents were asked to rate their support for RE and NE on a Likert scale, and then to rate the extent to which they support RE and NE for environmental and/or economic reasons. This question is particularly interesting in the context of South Korea because the country has had a strong divide in energy support depending on the administration in power: conservative administrations have provided economic incentives and policy support to NE while suppressing RE, whereas liberal administrations have tended to take the opposite route. How clean energy conservatism narratives play out in South Korea is therefore an interesting dimension that adds to the existing literature.
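
The abstract does not name a statistical procedure; purely as an illustration of how such Likert-scale group comparisons can be run, the following sketch compares NE support across self-identified ideology groups with a nonparametric test. The file name and column names are hypothetical.

    import pandas as pd
    from scipy.stats import mannwhitneyu

    # Hypothetical columns: ideology ("conservative"/"liberal"),
    # ne_support and re_support on a 1-5 Likert scale
    df = pd.read_csv("kr_energy_survey_2024.csv")

    cons = df[df["ideology"] == "conservative"]
    lib = df[df["ideology"] == "liberal"]

    # Group means: do conservatives rate NE higher than RE?
    print(cons[["ne_support", "re_support"]].mean())

    # Nonparametric comparison of NE support across ideology groups
    stat, p = mannwhitneyu(cons["ne_support"], lib["ne_support"],
                           alternative="greater")
    print(f"Mann-Whitney U = {stat:.1f}, p = {p:.4f}")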

The preliminary findings indicate that conservatively identifying individuals tend to prefer nuclear energy over renewable energy. They also supported NE because they believed it to be more beneficial to the Korean national economy and more important for the clean energy transition than RE. The findings challenge the prevailing narrative that clean energy conservatism is primarily tied to economic incentives, highlighting instead the complex role of environmental considerations in shaping conservative support for NE.

This study contributes to the existing literature on clean energy conservatism by offering a nuanced exploration of how political ideologies shape attitudes towards RE and NE, with a specific focus on South Korea. Although prior research has established the economic frames that conservatives use to support clean energy, this study broadens the discussion by examining the interplay between economic and environmental justifications for energy preferences.

The findings of this study have significant policy implications for advancing clean energy transitions in politically polarized contexts. Policymakers should recognize the distinct frames through which conservative groups approach energy policy, particularly their economic and environmental motivations for supporting nuclear energy over renewable energy. In South Korea, where political ideologies heavily influence energy policy, designing targeted strategies that align with conservative priorities—such as emphasizing the economic and environmental benefits of renewable energy—could help bridge the ideological divide. Additionally, fostering bipartisan support for energy transitions may require reframing renewable energy policies to highlight their contributions to national economic growth, energy security, and job creation, which are key concerns for conservative groups. These strategies could encourage more inclusive and sustainable energy policies, reducing the reliance on divisive narratives and promoting a unified vision for clean energy development.

10:30-12:00 Session 16E: Responsible and Use-Inspired Innovation
Location: Room 222
10:30
Understanding the Alignment Between New Industry Policy and Climate Policy: Implications for International Trade, Domestic Employment, and Green Transitions

ABSTRACT. The resurgence of new industry policies in earlier industrialised countries, such as the US (Inflation Reduction Act) and Europe (the EU's Green Deal Industrial Plan), has sparked extensive debate among academics, policymakers, and the public. These policies are expected to revitalise domestic manufacturing, advance technological innovation, boost domestic employment, and stimulate green growth. However, there is limited systematic evidence on their broader implications for domestic green transitions, the global innovation value chain, international trade, and the social and environmental impacts on other regions. Additionally, the extent to which these new industry policies, and the increasing shift toward local protectionism, align with domestic climate policies remains underexplored. Understanding these alignment opportunities and challenges is crucial for achieving the deep decarbonisation needed to address the urgent global climate crisis.

The paper therefore asks the following research questions: To what extent does the new industry policy in the US align with climate policies, and what are the implications for international trade, domestic employment, and global green transitions? Is there potential to create synergies, and if so, what political, social, and economic conditions make them successful?

This research focuses on the US electric vehicle (EV) sector, which sits at the intersection of domestic industry and climate policies and has wider implications for domestic green transitions and international trade relations. On one hand, the sector can play a significant role in accelerating domestic decarbonisation: it has complex ecosystems that can generate positive transformative changes across different sectors (electricity, transport, digital). On the other hand, it involves complex supply systems, such as the extraction and processing of critical minerals and a globally distributed battery industry, and thus has wider implications for international trade and global green transitions.

The study adopts a case study method, combining qualitative and quantitative data. It draws on semi-structured interviews with policymakers, industry experts, and stakeholders involved in the industry, climate, and innovation policies that shape US EV industry development, domestic diffusion, and international trade relations. The aim is to understand the rationales and strategies of different stakeholders, so as to investigate the social, political, and economic conditions for building synergies among them. Moreover, the study gathers secondary data from government reports, industry publications, and academic studies to capture industry dynamics, domestic EV deployment, and international trade.

Theoretically, the research contributes to the recent debate on the role of industry policy in accelerating green transitions. Specifically, it advances the conceptual discussion on how policy mixes, including science, technology and innovation (STI) policy, industry and trade policy, and environmental and climate policies, can address grand challenges. This debate is reflected in recent discussions on mission-oriented innovation policy (Mowery et al., 2010; Mazzucato, 2018; Wanzenböck et al., 2020) and transformative innovation policy (Weber and Rohracher, 2012; Schot and Steinmueller, 2018; Diercks et al., 2019). However, there are still limited insights into how these policies bear on the current resurgence of industry policy and changing geopolitics. There is a particular need to advance the empirical and theoretical understanding of how the resurgence of local protectionism will affect global green transitions, and of its implications for innovation policies for grand challenges.

Moreover, this study contributes to exploring the political and economic realities of implementing successful policies. Policies are not developed in a vacuum; their success relies on the capability to coordinate a wide range of actors, share information, and build trust. This is particularly true when implementing policy instruments with divergent policy goals. Coordination across different government bodies, including the traditional STI policy agency, the industry and trade policy agency, and the climate and environmental protection agency, is crucial in this context. Additionally, the study explores to what extent the current trend aligns with the historical legacy of what has been conceptualised as the hidden developmental state in the US (Block and Keller, 2015).

References
Block, F. L. and Keller, M. R. (2015). State of innovation: the US government's role in technology development. Routledge.
Diercks, G., Larsen, H. and Steward, F. (2019). "Transformative innovation policy: Addressing variety in an emerging policy paradigm." Research Policy 48(4): 880-894.
Mazzucato, M. (2018). "Mission-oriented innovation policies: challenges and opportunities." Industrial and Corporate Change 27(5): 803-815.
Mowery, D. C., Nelson, R. R. and Martin, B. R. (2010). "Technology policy and global warming: Why new policy models are needed (or why putting new wine in old bottles won't work)." Research Policy 39(8): 1011-1023.
Schot, J. and Steinmueller, W. E. (2018). "Three frames for innovation policy: R&D, systems of innovation and transformative change." Research Policy 47(9): 1554-1567.
Wanzenböck, I., Wesseling, J. H., Frenken, K., Hekkert, M. P. and Weber, K. M. (2020). "A framework for mission-oriented innovation policy: Alternative pathways through the problem–solution space." Science and Public Policy 47(4): 474-489.
Weber, K. M. and Rohracher, H. (2012). "Legitimizing research, technology and innovation policies for transformative change: Combining insights from innovation systems and multi-level perspective in a comprehensive 'failures' framework." Research Policy 41(6): 1037-1047.

10:45
Values and the Knowledge-Governance Interface: The Co-production of Digital Sequence Information Governance at the Convention on Biological Diversity

ABSTRACT. International boundary organizations facilitate exchanges between knowledge and political decision-making, often to tackle wicked problems such as climate change and biodiversity loss. By gathering stakeholder knowledge, needs, and values, and providing a forum for decisions to be negotiated, such organizations have a key role in the co-production of knowledge and social order. While the politics of knowledge production and governance have been studied in the context of understanding and responding to climate change and biodiversity loss, the governance of emerging technologies is an area that remains understudied in the international context. This is important because international organizations are increasingly tasked with addressing concerns around emerging technologies such as artificial intelligence and biotechnologies.

Lessons gleaned from scholarship on the politics of environmental issues, or a glance at the news during a Conference of Parties (COP), demonstrate that such interactions do not unfold seamlessly. Rather, international negotiations are shaped by historical power imbalances and political dynamics that lead to divergence and disagreement. These factors are deeply connected to challenges of knowledge production and use, often reflecting North-South divides and associated capacity disparities. In this way, international technology governance diverges from perspectives on innovation directionality that presuppose agreed pathways toward a common good and ignore the inherent complexity and contestation of sociotechnical objects of governance. Instead, emerging technology governance exhibits characteristics of wicked problems, where problem definition, pathways to solving such problems, and determining whether they have been adequately addressed are each subject to differing stakeholder needs and values. Therefore, addressing planetary crises and technology governance as 'matters of concern' requires academic inquiry into the power-laden clashes over knowledge and values that might hinder agreement on innovation policy pathways. This entails theory-driven empirical enquiry, focusing on the underlying causes of such conflicts and exploring potential pathways toward resolution.

This paper proposes that the Knowledge-Governance Interface (KGI) is a crucial site for investigating how divergence in technology governance is navigated in international boundary organizations. It therefore examines a particular instance of divergence: the Convention on Biological Diversity (CBD) negotiations on benefit sharing related to biological Digital Sequence Information (DSI). Positioning the CBD as a boundary organization provides a structured lens for examining the exchange between knowledge and political decision-making during the CBD's attempt to negotiate a global benefit-sharing regime based on digital, rather than physical, genetic resources.

The notion of fair and equitable distribution of benefits deriving from digitally enabled bioscience raises value-laden questions: how should the benefits of innovation be defined, who should pay, and to whom should benefits be distributed? The negotiations broach diverse normative dimensions, including indigenous data sovereignty, international capacity building, and reflection on the purpose of biological Research and Innovation (R&I). This paper is grounded in the co-production perspective that knowledge is inseparable from its underlying values, and exploring a boundary organization's KGI can bring this dynamic into focus. The focus on divergence is especially pertinent because such processes are explicitly subject to negotiation, and the issue of benefit sharing has raised questions about stifling innovation, threatening international agreements like the Kunming-Montreal Global Biodiversity Framework and the upcoming World Health Organization (WHO) Pandemic Treaty. Adopting a structured framework grounded in this co-production perspective, the paper investigates the CBD process on DSI, addressing the following question:

• What is the role of the KGI in producing, mediating and overcoming knowledge/value conflicts in international Boundary Organizations as they govern emerging technologies?

The study offers empirically derived insights into the KGI's formal and informal features, identifying knowledge/value conflicts, highlighting strategies actors employ to navigate them, and pointing to how these are modified by power relations. It also offers recommendations for the process, drawing on insights and critical perspectives in science policy. It does so by operationalizing theoretical concepts from the literature on KGIs and boundary organizations into an analytical framework of six categories, including Membership, Governance, and Boundary Object, as well as categories focusing on the interplay between knowledge and decision-making and on the impact on the broader Knowledge-Control Regime. This framework was used to make sense of participant observations at five negotiation sites, chiefly the 15th and 16th Conferences of the Parties to the CBD (December 2022 and October 2024) and the DSI Open-Ended Working Group (November 2023), combined with insights from 35 semi-structured interviews with a purposive yet geographically representative sample of stakeholders, rightsholders, and government representatives involved in the CBD KGI. Structured notes and interview transcripts were deductively coded to probe the dynamics of the KGI, producing reflections on its role and suggesting that it serves as a crucial focal point for attention as we strive to deepen our understanding of co-production in technology governance processes.

The empirical work provides insights into how informality functions to facilitate progress while simultaneously risking the exclusion of certain actors and perspectives. It also underscores that DSI is intentionally left undefined as a 'strategic' boundary object to enable discussions about outcomes without becoming entangled in technical intricacies. Value clashes are likewise highlighted, contrasting natural scientists' advocacy for open access to DSI, based on their political and epistemological legitimacy, with states' and Indigenous Peoples and Local Communities' assertions of sovereignty over genetic resources. Additionally, the paper reflects on how the norms and ideals of science intersect with questions of fairness and equity, examining the diverse roles and strategies scientists employ to advance their positions.

The analysis extends to the broader bioscience regime, highlighting influences beyond national legislation, including anticipatory institutional responses such as shifts in R&I policy and practice, particularly in database metadata policies. The study identifies dilemmas rather than straightforward problems in international technology governance, concluding that anticipatory and reflexive processes can assist actors to better understand and navigate challenges arising from knowledge and value divergence. This study's insights offer valuable guidance for technology governance and science diplomacy, benefiting practitioners and scholars of emerging technology governance in international organizations.

11:00
Organising mission-oriented innovation policy around systems of use innovation and platforms

ABSTRACT. This paper explores how systems of use innovation, mission-oriented innovation policy (MOIP), and platform organization theory can address societal challenges through effective policy implementation. The specific focus is the green transition, examining the steel and aluminum industries, which together contribute approximately 10% of global CO₂ emissions. The research addresses the following key questions:

1. How can effective mission-oriented innovation policies (MOIP) be designed and implemented?

2. In what ways can bottom-up systems of use innovation contribute to industrial transition processes?

3. How can a model of effective mission-oriented innovation policy be developed by integrating elements of platform theory and systems of use innovation theory?

By addressing these questions, the study aims to contribute to a deeper understanding of policy design and delivery for sustainable industrial transformation. The research methodology encompasses empirical data collection through personal interviews utilizing semi-structured schedules, complemented by data extraction from industry documentation and video presentations. Additionally, an analytical review of pertinent research and policy documents is conducted. Concepts derived from research literature inform and enhance the analysis.

Mission-oriented policymaking typically needs to unify various governmental policy actors to address critical societal challenges. A prominent example is the global push for zero CO2 emission industries, which inevitably involves efforts by multiple government ministries and agencies. However, such efforts frequently lack a nuanced understanding of the needs and incentives of end users and innovators—the key actors whose behavior these policies aim to influence. Without integrating insights from these stakeholders, the complex interaction effects of policy changes remain poorly understood, undermining the overall effectiveness of mission-oriented strategies.

The present literature analysis explores the current landscape of mission-oriented innovation policymaking and its limitations, particularly in addressing the systems of use for innovation, an essential framework for achieving successful transitions. The 'green steel' initiative serves as a case study to illustrate the gaps in current approaches, demonstrating how end users and systems of use innovation are often inadequately considered. Additionally, the paper examines the adversarial dynamics that can arise between governmental regulators and industries affected by mission-oriented policies. Differing timelines, priorities, and incentives among stakeholders often hinder collaborative efforts, despite the shared objective of societal progress. While governments typically focus mission-oriented initiatives on grand challenges, this paper raises the question of whether a similar approach can effectively address smaller, yet impactful, innovation challenges by fostering more integrated policymaking across silos.

In the context of climate change, arguably the defining challenge of our time, these issues are particularly pressing. Unchecked climate change could cost the global economy USD 178 trillion over the next 50 years, equating to a 7.6% reduction in global GDP by 2070. Transitioning to a sustainable economy requires urgent action, including a decisive shift away from fossil fuels. This paper analyses MOIP and systems of use innovation alongside the platform model, as interconnected tools to drive the green transition. While MOIP embodies top-down state intervention to steer societal transformations, systems of use innovation offers a complementary bottom-up approach, fostering solutions at the individual, enterprise, and value network levels. The platform model is proposed as a potential framework for organizing mission-oriented initiatives. Through the integration of these perspectives, the paper offers a comprehensive analysis of mission-oriented innovation policy, systems of use, and the platform model as vital components for addressing the green transition challenge. The findings aim to guide policymakers in designing more effective, inclusive, and sustainable strategies for achieving societal transitions.

The analysis uncovered several key findings. It highlights systems-of-use innovation as a critical concept in industrial decarbonization, representing the primary process that must transition to carbon-free operation. The system-of-use owner plays a pivotal role in steering the core process transition. Beyond core industrial processes, such as steel production, the goal of the green transition is to ensure that the entire value network is carbon-free. Moreover, downstream activities, including processing steel into finished products, must also eliminate carbon emissions. A successful green transition in systems of use requires diverse inputs, such as new technologies and processes, fossil-free electricity, hydrogen and iron ore pellets, upgraded infrastructure, enhanced knowledge and skills, adaptive regulations, financing, and captive markets. No single government body or financial institution can address all these needs, as responsibilities are distributed across numerous public and private sector organizations. This often leads to significant coordination challenges.

Platform organization theory provides valuable insights into overcoming fragmentation and coordination issues, particularly within the framework of mission-oriented innovation policy. A platform's core functions can be defined, and, crucially, they can be supplemented by a variety of optional elements essential for transitioning systems of use to carbon-free operations. In this way, platforms can bring together critical resources, regulatory powers, and expertise, enabling the multi-stakeholder collaboration essential for addressing complex challenges like decarbonising a particular industry. Platforms create shared spaces for interaction, communication, coordination, and goal alignment among diverse actors such as government agencies, research institutions, private companies, and non-profits. By leveraging network effects, platforms accelerate the diffusion of innovations. Their strength in data collection and analysis provides insights that inform policy decisions and innovation strategies, ensuring adaptability to stakeholder needs.

Despite the proliferation of policy-related platforms, a concrete link to system-of-use innovation and associated problem-solving is often absent. Addressing this gap could significantly enhance the effectiveness of platforms in supporting the green transition. By integrating platform organization principles, stakeholders can overcome coordination challenges and drive the systemic innovations necessary for a sustainable, carbon-free future.

10:30-12:00 Session 16F: From Metrics to Policy: The Roles of Open Source Software
Location: Room 236
10:30
GitHub Innovation Graph: Metrics and Data on Open Source Software Development

ABSTRACT. This presentation is submitted as part of the thematic panel titled: "From Metrics to Policy: The Role of Open-Source Software in Science and Innovation" organized by Gizem Korkmaz with Christina Freyman as session chair.

In September 2023, GitHub announced the launch of the GitHub Innovation Graph, an open data and insights platform on the global and local impact of software developers. In the past, measures of innovation have focused largely on resources like patents and research papers, while policymakers and researchers struggled to find reliable data on global trends in software development. GitHub created the Innovation Graph as a solution.

The Innovation Graph includes longitudinal metrics on software development for economies around the world. This open data initiative was launched with a dedicated webpage and repository and provides quarterly data on eight metrics dating back to 2020: Git pushes, developers, organizations, repositories, programming languages, licenses, topics, and economy collaborators. The platform offers several data visualizations, the repository outlines GitHub's methodology, and the data for each metric are available for download under a CC0-1.0 license.
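
As an illustration of how such quarterly, per-economy metrics can be consumed, here is a minimal sketch that loads a downloaded CSV and plots one economy's trend. The file name and column names (economy, year, quarter, git_pushes) are assumptions for illustration; the actual schema is documented in the Innovation Graph repository.

    import pandas as pd
    import matplotlib.pyplot as plt

    # Assumed CSV export with columns: economy, year, quarter, git_pushes
    df = pd.read_csv("git_pushes.csv")

    # Build a sortable period label and filter to one economy (here: Kenya, "KE")
    df["period"] = df["year"].astype(str) + "Q" + df["quarter"].astype(str)
    ke = df[df["economy"] == "KE"].sort_values(["year", "quarter"])

    ke.plot(x="period", y="git_pushes", marker="o", legend=False)
    plt.title("Quarterly Git pushes, Kenya (Innovation Graph sample)")
    plt.ylabel("Git pushes")
    plt.tight_layout()
    plt.show()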

GitHub's Innovation Graph will be useful for researchers, policymakers, and developers alike. In research commissioned by GitHub to help design the platform, the consultancy Tattle found that researchers in the international development, public policy, and economics fields were interested in using GitHub data but faced many barriers in obtaining and using it. The Innovation Graph aims to lower those barriers. Researchers in other fields will also benefit from convenient, aggregated data that previously may have required third-party data providers, if it was available at all.

Promoting digital transformation and well-paid jobs is a key goal for many policymakers. GitHub was encouraged to see research indicating that open source contributions on GitHub are associated with more startups, increased innovation, and tens of billions of euros in GDP. More readily accessible data should contribute to more (and more compelling) research, and ultimately to policies that foster developer opportunity, as well as greater opportunity for people to become developers in the first place.

Developers will be able to see and explore a broader context for their contributions, for example, the ways in which developers collaborate across the global economy, or how a particular language or topic they may be interested in is trending in their local economy or around the world.

GitHub released the Innovation Graph as a data resource for community reuse and is excited to see how policymakers, researchers, and companies explore data trends, use the data to inform research, and make compelling visualizations, and how developers relate their contributions to broader trends.

10:45
Open-Source Metrics: Attributing Credit, Measuring Impact, and Shaping Policy

ABSTRACT. This presentation is submitted as part of the thematic panel titled: "From Metrics to Policy: The Role of Open-Source Software in Science and Innovation" organized by Gizem Korkmaz with Christina Freyman as session chair.

Open-source software (OSS) has become an essential utility in knowledge production and innovation activity in both academic and business sectors for users around the globe. OSS is developed by a variety of entities and is considered a “unique scholarly activity due to the specificity and complexity of scientific computational tasks and the necessity of cooperation and transparency for research methodology.” OSS is produced and distributed with an open license allowing users and unaffiliated developers to inspect, modify, spin off, or submit improvements, and OSS has both a wide range of uses and producers and a relatively low barrier to entry for developers to share work. Moreover, OSS has special benefits for scientific discovery and innovation over commercially available software, due to the flexibility of the solutions that it provides and its transparency that facilitates reproducibility. Influential contributors to OSS can contribute heavily to the priorities and practices of scientific research when their work is widely used or built upon by other researchers. In this context, studying the global distribution, collaboration, and impact of the contributors is important to understanding the landscape of innovation in scientific research.

While the developers of OSS are located in a multitude of countries, many questions remain about who these contributors are, which countries, sectors, and organizations contribute most, and how to measure their impact. As software development faces issues with accurate crediting and citation in academic spaces, this work proposes new measures for capturing the contributions and impact of developers and countries to OSS.

In bibliometrics, credit for a publication is typically attributed by giving each author an equal proportion, known as fractional counting (i.e., the "1/n" rule (de Mesnard, 2017)), in which each co-author of a work (and the respective country) gets 1/n of the credit, where n is the total number of authors. The availability of data on OSS development (i.e., lines of code, contributors' affiliations and locations) provides an opportunity to attribute credit to authors more accurately.

In this paper, we use data collected on Python and R packages from their respective package managers (PyPI and CRAN) and link these to their repositories on GitHub, the largest source-code hosting platform. We leverage fractional-counting methods from bibliometrics and methods used in the National Science Board's Science & Engineering Indicators report, which publishes trends and international comparisons for publication output (Science-Metrix 2021; White 2019), to measure the contribution of countries to OSS. We measure the exact contribution of each developer (author) using weighted counting based on the lines of code added by each developer, to identify top contributors to OSS. We find that for both Python and R, developers from a small group of top countries account for a considerable share of code additions. Contributions from the top 10 countries, which include the United States, Germany, the United Kingdom, France, and China, comprise 76.1% of the total for R repositories and 66.6% for Python repositories.
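
To make the two counting schemes concrete, the following is a minimal sketch (with made-up data) of fractional counting, where each contributor's country receives 1/n credit per repository, versus weighted counting, where credit is proportional to lines of code added. The data structure is hypothetical, not the paper's actual pipeline.

    from collections import defaultdict

    # Hypothetical repository records: (contributor country, lines of code added)
    repos = [
        {"name": "pkg-a", "contribs": [("US", 900), ("DE", 100)]},
        {"name": "pkg-b", "contribs": [("US", 50), ("GB", 50), ("FR", 400)]},
    ]

    fractional = defaultdict(float)  # 1/n rule: equal credit per contributor
    weighted = defaultdict(float)    # credit proportional to lines of code

    for repo in repos:
        n = len(repo["contribs"])
        total_loc = sum(loc for _, loc in repo["contribs"])
        for country, loc in repo["contribs"]:
            fractional[country] += 1 / n
            weighted[country] += loc / total_loc

    print(dict(fractional))  # approx. {'US': 0.83, 'DE': 0.5, 'GB': 0.33, 'FR': 0.33}
    print(dict(weighted))    # {'US': 1.0, 'DE': 0.1, 'GB': 0.1, 'FR': 0.8}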

We identify dependencies between OSS packages as a metric for studying the impact and influence of particular packages and their countries of origin. A package that lists another package as a dependency relies on or reuses that package's code in order to run, so widely depended-upon packages exert outsized influence on the priorities and practices of the research that builds on them. We find that packages attributed to the United States are most frequently reused by packages from Germany, Spain, Italy, Australia, and the United Kingdom, based on the total dependency fractions. In parallel, the United States mostly uses packages from Germany, France, and Denmark.

Finally, we use the reverse dependency fractions between each unique pair of countries to develop a fractional dependency network. In this network, a directed link from country i to country j indicates that i depends on packages from j, and the weight of the edge corresponds to the sum of i's dependency fractions on j.
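
The following is a minimal sketch of such a network using networkx, with invented weights; edge direction and weighting follow the description above (an edge i -> j means country i depends on packages from country j).

    import networkx as nx

    G = nx.DiGraph()

    # Hypothetical aggregated dependency fractions between countries
    edges = [
        ("DE", "US", 12.4),
        ("GB", "US", 9.1),
        ("US", "DE", 4.2),
        ("US", "FR", 2.7),
    ]
    G.add_weighted_edges_from(edges)

    # Weighted in-degree: how much other countries rely on each country's packages
    reliance = dict(G.in_degree(weight="weight"))
    print(sorted(reliance.items(), key=lambda kv: -kv[1]))
    # [('US', 21.5), ('DE', 4.2), ('FR', 2.7), ('GB', 0)]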

In our analysis, we identify two distinct categories within the ecosystem of software packages. The first consists of large flagship infrastructure OSS projects that prevail in terms of contributors and impact, serving as fundamental components extensively utilized across various applications; these packages are typically maintained by dedicated organizations. The second encompasses smaller packages that, while more numerous, are predominantly managed by academic researchers; these often originate in research laboratories, where they are developed and disseminated to promote transparency in research methodologies. While developers contributing to OSS are located around the world, the scale of the large flagship projects, most of which heavily feature US-based developers, means that the United States currently exhibits the strongest representation as a contributor to OSS.

References: Louis de Mesnard. 2017. Attributing credit to coauthors in academic publishing: The 1/n rule, parallelization, and team bonuses. European Journal of Operational Research, 260(2):778–788.

Science-Metrix. 2021. Bibliometrics Indicators for the Science and Engineering Indicators 2022. Technical Doc. https://www.science-metrix.com/wp-content/uploads/2021/10/Technical_Documentation_Bibliometrics_SEI_2022_2021-09-14.pdf

Karen White. 2019. Publications output: U.S. trends and international comparisons. Science & Engineering Indicators 2020. NSB-2020-6. National Science Foundation.

11:00
From Metrics to Policy: The Role of Open-Source Software in Science and Innovation

ABSTRACT. Organizer: Gizem Korkmaz, Westat
Chair: Christina Freyman, National Center for Science and Engineering Statistics, National Science Foundation

Abstract: Open-Source Software (OSS) plays a pivotal role in shaping modern science, technology, and innovation ecosystems. The rapid growth and relevance of OSS over the last two decades calls for an in-depth analysis of its current role and position and of its potential for the global economy. As a freely accessible and modifiable resource, OSS fosters global collaboration, drives innovation, and supports productivity in diverse sectors. However, understanding its development, contributions, and impacts remains a challenge for researchers, policymakers, and practitioners alike.

The panel will provide a comprehensive overview of OSS, highlighting its importance as a driver of innovation, competition, and economic growth in the age of AI. The panel draws on diverse experience and interests in science policy, national accounts, and the OSS sector, and will discuss scholarly efforts to define and measure OSS, a fundamental intangible asset. Topics of discussion include the measurement of the supply-side costs of creating this software (leveraging methodology consistent with the U.S. national accounts), including the value of OSS development in the U.S. Federal Government, and the demand-side (usage) value created by OSS. Additionally, the session will explore how bibliometric methods (which are used to develop the Science and Engineering Indicators) can be leveraged to answer questions about who the OSS contributors are, which countries, sectors, and organizations contribute most, and how they collaborate with each other. Finally, the session will showcase the GitHub Innovation Graph as a reliable data source on software development for economies around the world, providing useful metrics for researchers, policymakers, and developers. This panel brings together four cutting-edge studies that address critical aspects of OSS measurement and its implications for science and innovation policy.

The first paper titled Open-source Software Indicators of Science and Engineering Activity presents updated methodologies and data on OSS contributions across sectors and countries, which will be featured in the National Science Board’s Science and Engineering Indicators. It highlights global collaboration patterns and the increasing role of OSS as a tool for innovation and policy development.

The second paper explores bibliometric approaches for attributing credit and measuring impact in OSS. By analyzing contributions using fractional counting methods and OSS dependency networks, it reveals the global distribution of developers, highlights the contributions of leading countries, and underscores the influence of flagship OSS projects in scientific innovation.

The third paper introduces GitHub's Innovation Graph, an open data platform offering longitudinal metrics on global OSS activity. This initiative provides valuable insights for researchers and policymakers, enabling them to track trends in OSS development, collaboration, and its broader implications for economies worldwide.

The fourth paper delves into the economic measurement of OSS using national accounting methods. It provides a novel framework for quantifying OSS investment and stock as intangible assets, offering estimates of the U.S. economy's OSS-related productivity growth and demonstrating its significance as a driver of innovation and economic development.

Together, these papers showcase the transformative potential of OSS metrics to inform science and innovation policy, enhance economic measurement, and address global challenges. This panel offers actionable insights and robust methodologies for understanding OSS’s role in fostering equitable, sustainable, and innovative ecosystems worldwide.

Paper 1: Open-source Software Indicators of Science and Engineering Activity Presenter: Carol Robbins, National Center for Science and Engineering Statistics, National Science Foundation

Paper 2: GitHub Innovation Graph: Metrics and Data on Open Source Software Development Presenter: Kevin Xu, GitHub

Paper 3: Open-Source Metrics: Attributing Credit, Measuring Impact, and Shaping Policy Presenter: Clara Boothby, National Center for Science and Engineering Statistics, National Science Foundation

Paper 4: From GitHub to GDP: A framework for measuring open source software innovation Presenter: Gizem Korkmaz, Westat

11:15
From GitHub to GDP: A framework for measuring open source software innovation

ABSTRACT. This presentation is submitted as part of the thematic panel titled: "From Metrics to Policy: The Role of Open-Source Software in Science and Innovation" organized by Gizem Korkmaz with Christina Freyman as session chair.

Open source software (OSS) is developed, maintained, and used both within the business sector and outside of it, through contributions from independent developers as well as people at universities, government research institutions, and nonprofits. Because OSS can be studied, modified, and distributed freely, typically with only minor restrictions (St. Laurent, 2004), this software can undergo fairly rapid innovation and be repurposed across various industries (Raymond, 1999). The Open Source Initiative (OSI) certifies licenses that comply with the principles of open source; any software with an OSI-approved license is considered open source. Notable examples of OSS include the Linux operating system, Apache server software, and the programming languages R and Python.

Many OSS projects create long-term tools that are products of public spending. Often, these tools have been developed outside of the business sector and subsequently used within it. While limited, existing estimates of publicly funded OSS suggest its magnitude is significant. For example, by 2017, Apache was estimated to hold the largest market share of active websites (44.5%). The Apache server, developed with federal and state funds at the National Center for Supercomputing Applications at the University of Illinois, is estimated to be equivalent to between 1.3% and 8.7% of the stock of prepackaged software currently accounted for in U.S. private fixed investment (Greenstein and Nagle, 2014). The scale and use of these modifiable software tools highlight an aspect of technology flow that needs to be captured in market measures.

Better measurement supports better policy, and as an intangible asset created and used across the economy, OSS developed by the relevant sectors should be fully accounted for in gross domestic product (GDP). Economic measurement that is integrated with the overall accounting of goods and services produced in the economy provides the basis for understanding the impact of OSS on sector-level productivity as well as overall productivity. Understanding the role that each sector plays in developing, funding, and promoting OSS can help inform public policy that supports innovation and economic growth. While the potential for software innovation to support economic and productivity growth and to transform various sectors of the economy is indisputable, measurement of innovation in software has been limited, particularly for OSS. Despite its widespread use, there is no standardized methodology for measuring the scope and impact of this fundamental intangible asset. In this paper, we provide an approach to measure it.

This study presents a framework to measure the value of OSS using data collected from GitHub, the largest software development platform in the world, with over 100 million developers. The data include over 7.6 million repositories where software is developed, stored, and managed. We collect information about contributors and development activity, such as code changes and license details. By adopting a cost estimation model from software engineering, we develop a methodology to generate estimates of investment in OSS that are consistent with the U.S. national accounting methods used for measuring software investment. We generate annual estimates of current and inflation-adjusted investment as well as the net stock of OSS for the 2009–2019 period. Our estimates show that U.S. investment in 2019 was $37.8 billion, with a current-cost net stock of $74.3 billion.
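As a rough illustration of how a software-engineering cost model can turn observed development activity into dollar estimates, the Python sketch below applies a COCOMO II-style formula (effort in person-months = A × KLOC^E) to hypothetical repository sizes and prices the effort at an assumed wage. The abstract does not specify the exact model or parameters used; the coefficients, wage, and repository sizes here are illustrative assumptions only.

    # COCOMO II-style nominal coefficients (no cost drivers applied); illustrative
    A, E = 2.94, 1.0997
    MONTHLY_WAGE = 12_000  # hypothetical fully loaded cost per person-month, USD

    def oss_investment_usd(kloc: float) -> float:
        """Estimate the development cost (USD) implied by `kloc` thousand
        lines of code written in a given year."""
        person_months = A * kloc ** E
        return person_months * MONTHLY_WAGE

    # Hypothetical repository sizes (thousands of lines of code added)
    repos_kloc = {"repo_a": 45.0, "repo_b": 3.2, "repo_c": 310.0}
    total = sum(oss_investment_usd(k) for k in repos_kloc.values())
    print(f"Estimated annual OSS investment: ${total:,.0f}")

Summing such per-repository cost estimates over all repositories active in a year yields an annual investment series analogous to the own-account software estimates in the national accounts.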

This study makes several important contributions. First, it fills a measurement gap by providing a novel approach to measuring investment in OSS, thereby contributing to the economic measurement literature with a methodology consistent with those used in the U.S. national accounts to measure software investment. We adopt a cost model from software engineering and apply it to GitHub data. More specifically, our methodology draws on the current economic measurement of own-account software, that is, software created using internal resources rather than purchased or outsourced. Building on this development-cost basis, we extend the measure to treat OSS as a useful asset used in production.

The paper provides estimates of U.S. annual OSS investment both at the prices prevailing when the investment took place (nominal) and adjusted for inflation and quality changes (real), which allows for comparisons across time. We also provide the corresponding net stock estimates (i.e., the cumulative value of the asset) for the 2009–2019 period. Although the totality of OSS is unknown, we believe our estimates account for a significant portion of OSS development and contribute to a growing literature on measuring OSS innovation, which is poorly captured by traditional approaches. These OSS measures complement existing science and technology indicators on peer-reviewed publications and patents, which are calculated from databases covering scientific articles and patent documents. In addition, with these estimates, we aim to contribute to the understanding of productivity and economic growth, both within and outside the business sector, and to encourage further research into the importance and contribution of OSS in the digital economy.
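To make the relationship between annual investment and net stock concrete, the sketch below implements a simple perpetual-inventory calculation, a standard national-accounting device in which each year’s stock equals the depreciated prior stock plus that year’s real investment. The geometric depreciation rate and the investment series are illustrative assumptions; the abstract does not state the study’s actual depreciation treatment.

    # Hypothetical annual geometric depreciation rate for software
    DELTA = 0.33

    def net_stock(real_investment_by_year: dict[int, float]) -> dict[int, float]:
        """Perpetual inventory: K_t = (1 - DELTA) * K_{t-1} + I_t,
        starting from a zero initial stock."""
        stock, k = {}, 0.0
        for year in sorted(real_investment_by_year):
            k = (1 - DELTA) * k + real_investment_by_year[year]
            stock[year] = k
        return stock

    # Illustrative series (billions of constant dollars, not the paper's data)
    investment = {2009: 10.0, 2010: 12.0, 2011: 14.5}
    print(net_stock(investment))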