ATLC25: 2025 ATLANTA CONFERENCE ON SCIENCE AND INNOVATION POLICY
PROGRAM FOR FRIDAY, MAY 16TH
Days:
previous day
all days

View: session overviewtalk overview

08:30-10:00 Session 13A: Transformation in the Lab: AI, Automation, and Digitalization
Chair:
Location: Room 236
08:30
Bridging or Widening the Gap? The Role of AI in Shaping Global Research Performance

ABSTRACT. ***Submitted as part of the thematic panel “Transformation in the Lab: AI, automation, and digitalization.”***

The rapid development of generative AI (GenAI) technologies has the potential to transform scientific research practices, with enhanced capabilities for data generation, processing, and analysis. However, there is limited understanding of how GenAI adoption will influence existing disparities in scientific productivity between the Global North and the Global South, regions historically characterised by significant inequalities in access to research resources, funding, infrastructure, and technological tools. This study aims to investigate the impact of GenAI adoption on scientific practices, comparing the Global North and Global South and assessing whether GenAI adoption is reducing or widening the gap in research performance between these regions.

GenAI has introduced new efficiencies and methodologies that allow for more rapid and sophisticated knowledge production. It holds the potential to overcome manpower, infrastructural and funding limitations by enabling access to cutting-edge data analysis capabilities and facilitating more efficient workflows, which is more beneficial to Global South where resources may be more limited. On the other hand, the increasing use of GenAI also raises critical concerns about accessibility; advanced AI tools often require significant computational resources, stable internet infrastructure, and institutional support, factors that may be readily available in the Global North but are scarcer in many Global South contexts. This disparity could lead to unequal benefits from GenAI and affect researchers in different regions differently.

The primary research questions of this study are: 1) How has the adoption of AI and GenAI tools influenced the research practice globally? 2) Has GenAI adoption widened or narrowed the research performance gap between Global North and Global South? To answer these questions, we start by identifying publications involving AI and GenAI based on targeted keywords, and mapping research activities by institution, country, and region. Research practice will be examined across four dimensions: research output (number of publications), research impact (citation counts and journal reputation), research topics (disciplines and thematic focus), and collaboration pattern (number of coauthors). Subsequently, we compare these research practices across regions, with a particular focus on differences between the Global North and Global South.

Methodologically, we conduct a comprehensive bibliometric analysis using a dataset of 1.6 million publication records during the period from 2021 to 2024. We divide this dataset into two phases: 2021-2022, capturing early AI-related research through targeted keywords; and 2023-2024, incorporating both AI and GenAI-related keywords to reflect the post-2023 GenAI adoption surge. Data is sourced from Web of Science’s Science Citation Index Expanded (SCIE) and Social Sciences Citation Index (SSCI) databases. These databases provide extensive coverage across disciplines, enabling a robust analysis of publication patterns on a global scale. To classify countries into the Global North and South, we follow the OECD definitions, using the affiliation of the corresponding author as the primary indicator of regional classification and the first author’s affiliation as a robustness check. The timing of this research is particularly relevant, as the rapid expansion of GenAI use in academia since 2023 provides an opportunity to empirically investigate these questions during the early stages of GenAI diffusion across the global research landscape. This study builds upon existing literature on the global digital divide in research and the impact of AI on scientific productivity. Our contributions are twofold. First, by tracking the GenAI adoption and quantifying regional disparities, we contribute to ongoing discussions about technological equity and the future of global scientific practice, specifically focusing on the role of AI in potentially reinforcing or reducing existing inequalities. Second, by examining multiple facets of research output, our study offers a comprehensive understanding of how AI adoption affects various aspects of scientific work.

The findings from this study have important policy implications. If GenAI adoption indeed widens the productivity gap, targeted interventions may be necessary to support AI access and infrastructure development in the Global South. By identifying both the opportunities and challenges associated with GenAI in global research, this study aims to inform strategies that promote a more equitable distribution of technological resources and benefits in scientific practice worldwide.

08:45
Balancing Bytes and Beakers: Skill change in the digitalisation of industrial science

ABSTRACT. Submission as part of the proposed panel: TRANSFORMATIONS IN THE LAB: IMPLICATIONS OF AI, AUTOMATION, AND DIGITALIZATION IN SCIENCE

The purpose of this paper is to investigate skill change in industrial science in the context of digitalisation and automation. Industrial scientists mostly conduct applied research to address their firm’s market needs and financial goals (Aghion et al., 2008; Perkmann et al., 2019) such as developing new products for commercialisation (Agarwal & Ohyama, 2013) or solving particular problems faced by their companies (Shapin, 2008; Perkmann et al., 2019). The skills required by industrial scientists are increasingly influenced by the digital transformation of scientific practice, this includes the use of advanced laboratory automation, such as robotics, which is tasked with conducting large-scale, everyday manual tasks such as pipetting and assaying, while software performs virtual experimentation, data analytics and in silico or computer modelling. These developments include the increasing use of artificial intelligence (AI) (Lamb and Davidson, 2005; Olsen, 2012; Riberio et al, 2023).

We draw on a study conducted in a large UK firm to shed light on the transformations in the ‘texture of work’ of scientists. We collected data through 57 semi-structured interviews with scientists and managers between 2019-2021 as part of a larger project focused on the adoption of new technologies by scientists and associated managerial strategies in the context of organisational and technological change. The firm employs over 300 scientists working on the design, formulation and testing of fast-moving consumer goods for the hygiene and personal care markets. The company has made a major investment in automation and digitalisation of R&D with the aim of “better, cheaper, faster” in silico first new product development (i.e. developing new products primarily by means of computer modelling or computer simulation rather than physical experimentation). The adoption of digitalisation is posing skills challenges for the company as new skill needs have emerged not least in data analytics and automation engineering. The company’s response has been the recruitment of new staff and the upskilling of existing staff through formal training programmes and on-line training resources.

We critically engage with labour process theory (LPT) and contribute to the upskilling-deskilling debate in management and the sociology of labour literature (Omidi et al. 2023). The traditional deskilling hypothesis suggests that new technology leads to the breaking down of complex skilled work into simple unskilled tasks that reduce the autonomy of workers. We find that the traditional deskilling hypothesis is limited and that, faced by new digital technologies and automation, scientists simultaneously experience deskilling and upskilling. We also note that automation and the growing importance of multi-disciplinary teams are impacting the autonomy of individual scientists. Further, we observe that self-guided and experiential learning plays an important role in digital skills development.

09:00
Searching for theory? Researchers’ perspectives on artificial intelligence and machine learning in manufacturing and materials science research

ABSTRACT. For proposed panel: "Transformations in the Lab: Implications of AI, automation, and digitalization in science"

Artificial intelligence (AI), including machine learning (ML), has been heralded as transformative for scientific research and development. Amidst a long-run decrease in ratios of economic growth to scientific investment, and concerns about quality and replicability in science, proponents argue that AI tools will accelerate scientific discovery, technological development, economic growth, and the development of solutions to global challenges.

In science, AI technologies are increasingly being applied to experimentation, data collection and analysis, and automated lab operations, as well as to scientific writing and proposal development. But will such changes increase the rate of scientific progress? AI-enabled automation promises to increase laboratory throughput because smart machines can perform production or processing tasks more quickly than humans. However, lab automation can amplify and diversify other mundane knowledge tasks, reducing anticipated productivity benefits. Moreover, science is not a simple commodity good. Increasing the rate at which experiments are performed or papers are written does not inherently constitute an increase in scientific progress. A key aim of science is to develop new capacities for prediction or intervention, typically achieved by iterative trial and modification of theories or technologies. Theory is particularly useful to science-driven technological development because it permits “offline trial” of prospective interventions as a faster and cheaper alternative to live trial. However, it is unclear how AI will enable the pivotal aspect of theory development in science to be accomplished more rapidly or effectively.

One particularly salient area of research and development impacted by AI is manufacturing and materials science (MMS). Manufacturing researchers have applied ML-based tools for decades, but new methods and increasing computational capacity have led to a boom in recent years. Accordingly, MMS provides an excellent domain in which to investigate the evolving implications of AI in research and development. This paper reports results from 32 in-depth, semi-structured interviews with MMS researchers on their experiences with AI in engineering research. These researchers are all faculty, doctoral students, or recent doctoral graduates of the ten U.S. universities most productive of manufacturing AI journal and conference papers over the last five years. All are currently pursuing or have recently completed manufacturing research projects using AI. Interviews focused on participants’ experiences with AI in MMS research, including effects on knowledge production and dissemination, skill and resource requirements for research, career development.

Participants primarily reported using AI for data analysis and for construction of predictive tools. They were cautiously optimistic about effects of AI in MMS research, stating that it permitted them to work on problems which could not be practically addressed with other analytical approaches; to investigate phenomena affected by large numbers of variables; or to analyze data more quickly and efficiently than they could otherwise. Participants suggested that AI is most useful as a complement to modeling and analysis approaches based on longstanding physical theory—either in discerning data features for further investigation, constructing computationally efficient approximations to well-established but computationally expensive theory-based models, or empirically “filling in the gaps” in areas where current theoretical understanding is weak. A few respondents suggested that AI’s scope of application in materials science would narrow as explicit theory advanced, while others worried that overuse of AI could stymie development of fundamental theoretical understanding over the long run. Importantly, several participants noted that AI methods altered the form of research communication, transforming papers and articles into no more than an “extended abstract” for datasets and code.

Participants noted that AI, even when it yields computationally efficient models, requires large quantities of data and computation time to develop those models. Many participants repeated the research adage of “garbage in, garbage out,” indicating that ML tools have limited power to extrapolate beyond the datasets on which they are trained; and that large, high-quality, datasets, particularly drawn from real, proprietary production processes, are hard to acquire. Participants stated that use of AI requires additional skills compared to prior forms of research, but that it does not remove the need for any prior knowledge or skills. Although AI permitted them to perform projects which they otherwise could not, they did not indicate that AI or ML allowed them to conduct research more rapidly or reduced the amount of labor involved doing so (though some anticipated it might in the future). Perhaps most interestingly, some participants emphasized the vast disparity in computational resources available to universities and to private industry. Some argued that academic researchers had to find niches to remain relevant in an age of large-scale private sector AI. Others suggested that universities needed to do more (perhaps collaboratively) to provide large-scale computational resources on par with industry to their researchers.

These interviews offer an insightful complement and contrast to high-level policy narratives about AI, and, indeed, public-facing statements by science advocates about AI’s effects. Participants are, generally, most excited about AI as a modeling tool. Conventional theory permits trial of modified technologies (e.g., alloys) without real-world implementation and evaluation, speeding the search process for useful alternatives. ML extends this utility. ML can provide a more computationally efficient surrogate for preexisting theoretical models. In some cases, ML can substitute for explicit theory when no such theory is available. Accordingly, it seems plausible that AI will help to increase the rate at which technological opportunities can be identified and exploited, at least within existing paradigms. However, most interviewees were more skeptical that AI will increase the rate of progress in scientific theory itself. It is yet unclear whether AI can assist with theory development in poorly theorized areas—or whether it may hinder or replace explicit theory development. This study illustrates a need to more finely parse hopes for AI-driven accelerations in scientific progress, suggesting a promising case for technological acceleration within well-defined paradigms but a much more ambiguous picture for novel scientific theory.

09:15
Rise of Generative Artificial Intelligence in Science

ABSTRACT.  

The rapid advance of Generative Artificial Intelligence (GenAI) has garnered significant attention within the scientific community, heralding a potential paradigm shift in research methodologies and scholarly publishing (Charness et al. 2023; Lund et al. 2023). There is already a substantial uptake of GenAI tools among researchers, with many leveraging these technologies to brainstorm ideas and conduct research (Van Noorden & Perkel 2023).

However, the deployment of GenAI remains fraught with ethical and epistemic challenges. Generative models are prone to produce erroneous or fictitious outputs (“hallucinations”) (Jin et al., 2023). Moreover, the non-transparent nature of proprietary generative AI systems, exemplified by ChatGPT’s closed-source architecture, raises fundamental questions regarding intellectual property rights and ethical responsibilities in scientific research (Liverpool 2023).

The integration of GenAI into scientific research has sparked debate, raising fundamental questions about its potential benefits and drawbacks. This paper addresses three critical research questions to better understand the diffusion and impact of GenAI in scientific domains. The first question explores how GenAI is diffusing into the sciences. Understanding GenAI’s adoption within and across scientific fields offers insights into its scientific shaping role.

The second question is how GenAI influences teams in scientific research. Using GenAI tools like ChatGPT to assist with writing and other tasks has been associated with improved productivity (Noy & Zhang 2023). Conceivably, this might lead to reduced team sizes. Concerns have been raised about the potential of GenAI to replace jobs, including those traditionally held by human researchers (Kim 2023). However, what aspects of scientific authorship could be replaced by GenAI remains largely unexplored. Moreover, as scientific research becomes increasingly specialized and collaborative, with large teams often required to tackle complex problems across various disciplines (Venturini et al. 2024), it is also conceivable that GenAI might lead to expanded team sizes. In short, the impact of GenAI on team size and composition is still unclear. Understanding whether GenAI might reduce the need for large, diverse teams by automating certain roles, or conversely, necessitate even larger collaborations, is important for anticipating future research dynamics and human resource implications.

The third question is about the potential of GenAI to influence international collaborations. International collaboration is often associated with high-quality research outcomes and increased citation rates (Wang et al., 2024). However, geopolitical standoffs between major research performers pose new challenges to global scientific collaboration (Jia et al, 2024). In this context, examining how GenAI (which has risen contemporaneously with recent global tensions) might either bridge or exacerbate these divides is particularly timely. The intersection of GenAI with international collaboration offers a rich avenue for understanding the broader implications of GenAI in a rapidly changing global landscape.

To address these questions, this paper presents an exploratory bibliometric analysis of the rise of GenAI in scientific research. Using OpenAlex, which provides comprehensive scientific publication metadata (Priem et al. 2022), we analyze over 13,660 GenAI publications and 517,931 other AI publications to investigate the characteristics of GenAI compared to other AI technologies. We profile growth patterns, the diffusion of GenAI publications across fields of study and the geographical diffusion of scientific research on GenAI. We also investigate team size and international collaborations to explore whether GenAI, as an emerging scientific research area, shows different collaboration patterns compared to other AI technologies.

Our initial exploratory analysis reveals that the application of GenAI in scientific research has expanded well beyond its origins in computer science. While early developments in GenAI were predominantly concentrated within the computer science field, we now observe a broader diffusion of these technologies across a diverse range of scientific disciplines. This cross-disciplinary adoption suggests that GenAI is emerging as a general-purpose tool for enhancing research methodologies, accelerating discovery, and addressing scientific challenges in a variety of fields. Additionally, we find that the US has more rapidly adopted GenAI in science fields compared with China (through to 2023). China is highly productive in papers that use other AI methods, reflecting its high investment in established AI technologies. In contrast, the US, now with a lower publication output than China in other AI papers, has demonstrated a rapid shift in focus towards using GenAI in science. This may reflect the flexibility and dynamism of the research and innovation system in the US, supported by national AI initiatives and partnerships between government, academia, and industry aimed at maintaining global leadership in this critical area. While China also has long-term strategies for AI research and innovation leadership, the lag in China’s GenAI research output might indicate that its research institutions are still building expertise in this area.

Early evidence suggests that research teams focusing on GenAI tend to be relatively smaller compared to those working on other forms of AI. This reduction in team size may, in part, reflect the increasing productivity enabled by advances in GenAI tools and techniques. The ability to achieve significant outcomes with fewer collaborators may indicate that individual researchers or smaller teams are now able to manage and innovate more effectively, leveraging these powerful tools to generate substantial results with reduced effort and coordination. Despite the trend toward smaller team sizes, researchers continue to actively seek international collaboration. Even in the face of rising geopolitical tensions, the level of international cooperation in GenAI research remains on par with other AI fields, suggesting that the scientific community recognizes the value of global collaboration in advancing GenAI technologies.

GenAI is still at an early stage in its evolution, with its full implications for science still to unfold. Our study offers an exploratory and formative assessment of the current positioning of GenAI in science and hints at some of the patterns that appear to be emerging. Further opportunities and promising pathways for research on GenAI in science will be highlighted, along with consideration of implications of our findings for science and science policy.

(References omitted in this abstract)

08:30-10:00 Session 13B: Equity & Inclusion in Innovation
Chair:
Location: Room 233
08:30
Drug accessibility in the European Union: evidence from Supplementary Protection Certificates

ABSTRACT. This paper studies how intellectual property rights create incentives for pharmaceutical companies to bring new drugs to market. It focuses on a specific regulation governing the protection of pharmaceutical products in the European Union (EU). Here novel drugs are compensated with extra years of patent protection through a Supplementary Protection Certificate (SPC) if their development times are longer than five years. This type of extension exists also in the US where a firm has to approach the Food and Drug Administration (FDA) to consider their patents for a Patent Term Extension (PTE). Conversely, in the EU, the firm has to approach every EU member state’s regulatory body for the same task.

This paper exploits the heterogeneity of the SPC decision across EU countries, investigates the pharmaceutical firms’ decision to apply for SPCs in different EU markets, and, ultimately, whether SPCs generate greater accessibility of novel drugs across EU member states.

The invention and development of novel drugs differ from other inventions in a few particular ways — drugs require hefty fixed costs, involve clinical trials, and consequently take longer to develop (Scherer, 2010; Lakdawala, 2018). A novel drug can only come to the market when the product has been deemed safe for human consumption by the regulatory bodies of the respective countries. However, clinical trials combined with regulatory approval are time-consuming and have been increasing over time. For instance, the mean duration of drug development has climbed to about 10 years (DiMasi, 2014; Kyle, 2017). However, the patent clock starts as soon as the patent has been filed for the respective novel drug. It therefore leaves approximately 10 years or less of effective protection for a novel drug, in contemporary times. The longer a drug takes to reach the market, the shorter the patent protection would be and this is a disincentive to invest in novel drugs. Here is where SPCs come into play.

SPCs were introduced in 1992 and it came into force in January 1993 in the EU. New member states introduced SPC as and when they ascended into the EU. However, SPCs are not uniform across all member states. Patents covering drugs may be granted SPC in one member state while rejected in another (Mejer, 2017).

While SPC protects a drug by extending the patent(s) life involved in the drug, they are not the only way to achieve exclusivity. The longest exclusivity a drug receives is still provided using a patent (20 years), while the lowest duration of protection a drug can receive is through Market Protection (MP) and Data Protection (DP) (a total of 10 years). The exclusivity that the drug receives from the European Medicines Agency (EMA) is independent of the patent protection the drug receives through its underlying patents. If a drug receives protection that is lower than 15 years but more than or equal to 10 years, it is in between this, that an SPC becomes a viable option for a firm. On average, the effective protection period dropped from 15 to 13 between 1996 and 2016 (Copenhagen Economics, 2018). This duration implies that most pharmaceutical patents have to be protected by SPCs in order for them to receive protection that is more than 10 years.

Even though SPCs have existed in the EU for some time, we do not have a clear understanding of what the SPCs have incentivized, if any, in the EU. The only authoritative studies that we find in this regard are by Kyle (2017), Mejer (2017), and Copenhagen Economics (2018). One theme that emerges from all the studies is the following: there exists substantial heterogeneity among EU member states in the granting and refusal of SPCs. For example, Mejer (2017) studies 740 drugs that were authorized between 2004 and 2014 and the author finds 26 percent of the SPC applications associated with the drugs to be granted in one EU member state while being either rejected or withdrawn in others. This heterogeneity among EU member states seems puzzling. Why would a firm withdraw its SPC application in some geographical areas and not in others? Our paper is an attempt to disentangle this heterogeneity and consequently study the accessibility of novel drugs.

We rely on SPC refusal decisions from different EU member states’ regulatory bodies to identify outcome changes before and after the SPC refusals. We consider the decision to file a patent and SPC exogenous prior to the decision of a state to refuse or reject SPC applications exogenous. While EU member states have implemented SPCs at different points in time, such laws have generally come into force as a package of legislation during states’ ascension into the EU. This packaging confounds and it is difficult to disentangle legislations’ individual effects empirically. Whether the decision of a firm to apply for SPC is based on the availability of SPC in an EU member state, or because of the other laws that came because of their ascension is unknown. Our strategy of using SPC refusals within a state absorbs all the variations for that particular state, allaying the confounders.

An example to illustrate our strategy is the following: EPO patent 174726 is associated with SPCs in five EU member states Austria, Belgium, France, Switzerland, and the United Kingdom. The SPC was filed in the following order in different states by date: Belgium, the United Kingdom, Austria, Switzerland, and France. In some countries, such as Austria, France, and the United Kingdom, three SPCs were filed while in Switzerland, only one was filed. In France, all the related SPC applications were refused, while in Austria and the United Kingdom, one application was granted while the others were refused. We assume that firms cannot preempt these refusals prior to filing an SPC application in the respective country, and thus such SPC refusal decisions are considered shocks to a firm-country pair. Using this variation, we estimate the changes in the intensive and extensive margins of various measures of innovation.

08:45
Policy Implications of Skill Changes under Digital Automation: A Processual Approach with the Case of the Platform Economy

ABSTRACT. Industry 4.0 and 5.0, initiated by advanced information and communication technologies, platform algorithms, and other smart technologies, present significant challenges to skill formation and practice in the workplace. Such challenges are on the one hand comprehensive, as it is widespread across all types of tasks, work organizations, jobs, and sectors. On the other hand, they are complex, as smart technologies not simply replace old or generate new skills, but require subtle and nuanced human-machine interactions from different perspectives and in various degrees. Such comprehensiveness and complexity necessitate policy interventions to guide, protect, and encourage frontline service workers and their skill formation in new forms of work. However, the existing policy paradigm has difficulty in addressing these issues because it tends to hold linear and unrealistic assumptions of skill upgrading and overlook the situations, needs, and values of frontline service workers.

Informed by this requirement, this research proposes a processual approach to understanding skill changes driven by Industry 4.0 and 5.0 and establishing a robust foundation of skill-related policies. Following an inductive theorization strategy, the processual approach conceptualizes work as a series of events in which workers and/or technologies make judgments and take actions to move the process forward. With this generic conceptualization, the approach investigates whether and how (1) technologies trigger radical changes in the types, sequences, and numbers of events in work processes; (2) technologies engage with, shift, interrupt, and/or restrict the judgments or actions made by workers in each event; and (3) technologies transform the relations between judgments and actions in each event.

The processual approach has three advantages. Methodologically, it adopts generic conceptual tools applied inductively, avoids pre-assigned and hierarchical categorization of skills, and requires researchers to base their analysis on solid case-by-case empirical investigations, thus addressing the complexity of challenges imposed by the current automation. At the explanatory level, its generic conceptual framework makes it applicable to a wide spectrum of work, especially service work, thus addressing the comprehensiveness of such challenges. Also at the explanatory level, it adopts a symmetric view of the role of technological and social/organizational factors in skill changes, thus offering explanatory tools to seriously investigate the agencies and affordances of smart and powerful technologies.

This research applies the processual approach to the case of taxi-driving and ride-hailing, representative of the service automation initiated by the platform economy and supported by algorithms and AI. Rather than monotone replacement and generation of old and new skills, the skill changes from taxi-driving to ride-hailing emphasize repositioning and refocusing. For repositioning, workers’ spatiotemporal skills are marginalized but still relevant, emotional and communicative skills become centralized, and digital skills of speculating and anticipating algorithm judgments emerge. More importantly, regarding refocusing, the focus of drivers’ skills shifts from maintaining regular, suitable, and profitable work processes to addressing extra and unpredictable events generated by algorithm judgments. The repositioned and refocused skills have limited utility and transferability in enhancing ride-hailing drivers’ performance and job mobility. They are largely performed fragmentedly and passively due to the opacity and constant update of algorithms which put drivers in extensive information asymmetry.

The processual approach offers a generic but also empirical-based way to understand the changes in work and skills under the impact of Industry 4.0 and 5.0, thus offering a robust foundation of skill-related policies. The approach and the case study underscore the necessity for policy activities that first establish skill standards based on sector-by-sector, in-depth analysis of work practices and organizations with platform firms’ and works’ participation, and enforce such standards by having platform firms incorporate them in algorithmic rules. Second, it is important for policies to enhance the transparency of algorithms in the workplace by (1) having platform firms publicize principles of algorithmic rules and reducing frequencies of algorithm updates, and (2) conducting broadly defined—not-necessarily-technical—algorithm audits with workers’ participation. Third and in the long run, it is essential to promote social recognition of emotional, communicative, and digital skills in service work. No matter how repetitive and routinized, they are central to the everyday work practices of human-machine interaction required by Industry 5.0 and fundamental to the operation of any large-scale smart technological system.

The approach and findings of this research have broader public policy implications for Industry 4.0 and 5.0. They highlight the importance of deliberative policy processes and measures grounded in thorough investigations of marginalized targeted populations affected by emerging technologies. Such policy processes and measures should balance the goals of promoting technological and industrial advancement with ensuring social fairness and inclusiveness, and facilitate skill formations that ultimately benefit the operation of large-scale smart technological systems.

The data in this research is collected from three sources in the context of China. First, multiple rounds of participant observation and semi-structured interviews were conducted in Xi’an, China, from 2018 to 2023, involving over 250 conventional taxi drivers and ride-hailing drivers working for Didi, focusing on their everyday work practices, skill formation, and skill performance. During this period, the ride-hailing giant Didi has assumed a dominant position in the Chinese market and has obtained stable technological and business arrangements. Second, semi-structured interviews with 30 operation analysts and algorithm engineers were conducted in 2024, focusing on the principles and practices of algorithm design and operation. Third, documentary analyses are performed on online and media articles about ride-hailing drivers’ skills and city ride-hailing policies.

09:00
Gender bias in grant allocation shows a decline over time

ABSTRACT. Research question The issue of gender bias in research grant allocation remains on the agenda, as research findings differ over time, between qualitative and quantitative studies, and between small and large studies, and furthermore depend on the design of the studies and on covariates included in the analysis.

Data and methods In a recent project, we conducted eight case studies covering nine different funding instruments in six countries. Some studies are on the funding instrument level and others on the disciplinary panel level. This is an important difference, as grant evaluation and application ranking generally happen at the level of (disciplinary) panels, and the more aggregated studies at the research council level or the instrument suffer from disciplinary heterogeneity. Most of the cases studied are individual career grants, and some others are thematic grant programs. Not all gender differences can be called bias. If differences in grant success are based on differences in merit, in academic performance, these can be seen as legitimate. In that case there is no direct gender bias in grant allocation. Of course, the merit variables can be biased themselves, caused by processes external to the grant allocation process. In this project we focus on the question of direct gender bias, and only in a few cases the existence of indirect bias has been tested for. The correlational studies are complemented with other approaches that are better suited to identify causal relations, such as experiments, mediation analysis, and longitudinal studies. - Study 1 implemented a randomized controlled field experiment in a Spanish Regional Funding Organization. The causal analysis revealed no significant gender effect in grant evaluation, nor was there an interaction effect between the gender of the applicant and the gender of the reviewer. - Study 2 is a correlational analysis of gender bias in the Swedish Research Council SRC. - Study 3 is a correlational study of recent funding instruments of the SRC, the Science Foundation of Ireland (SFI) and the Austrian FWF. - Study 4 analyses the same SRC funding instrument, but now at the panel level, reducing heterogeneity which exists at the instrument level. - Study 5 is a field experiment comparing peer review models of a German funding organization (AvH), among others studying gender disparities in the models. - Study 6 tested whether gender bias occurred in the scores and the grant decisions of a funding instrument of Dutch NWO around 2003. The (non)significant effect of gender was not mediated by performance, suggesting also the absence of indirect bias. Looking predictive validity (do the granted applicants outperform the others in the later career) suggested that with hindsight several very good female applicants should have been funded. So, the lack of predictive validity did have a gender effect. - Study 7 examines the German Emmy Noether Fellowship, showing among others that gender had no significant effect on the grant decision, but age clearly had. - Study 8 replicates the Wenneras and Wold (1997) study claiming that women, to get a similar competence score needed to have an additional three Science or Nature papers. The replication still finds a significant gender effect on the competence score, but it is an order of magnitude smaller than in the original study. Analyzing gender bias in the decisions we found a non-significant advantage for men in getting grants.

Main findings The first finding is that gender bias in review scores not necessarily results in biased grant decisions. Grant decisions show more balanced pattern and less gender bias then the review scores, which is in line with other major studies done in the 2010s. This implies that at the decision-making level, bias in review scores seems to be (partly) corrected. A main question of our project was whether there has been a change in gender bias over time. To increase the empirical base, the results of some other case studies were added to the analysis. What was found is that over a period of several decades, gender bias in favor of men decline, and in the recent period there may even be a small advantage for women. In several cases we also tested for indirect gender bias, by using academic performance variables as mediators. However, generally we did not find such mediation effects. Further research should look for explanations. The found patterns may be the effect of gender equality policies at the funder organizations, but also the effect of the prominent position of gender issues on the public agenda.

Conclusion and discussion The findings suggest that direct gender bias in grant allocation is declining and maybe even disappearing. Initial evidence was found that indirect bias does not seem to be present. These positive trends should be monitored, as it is not guaranteed that these cannot be reversed. The research raises several methodological questions. (i) Some of the funding instruments – especially the thematic grants – have teams of applicants where the ‘gender’ of the applicant is difficult to determine. Different ‘gender-mixes’ occur and one needs to take that into account. (ii) Several of the analyses are probably suffering from heterogeneity, especially where the analyses are done at the level of a funding scheme that includes all fields. More panel level studies are needed (iii) The cases used in the analysis together have an N of about 8000, but several cases are relatively small implying that one can only detect large gender effects, and small gender effects may have been missed. Large-scale studies remain therefore relevant, especially if multi-level designs can be applied to include panel characteristics in the analyses. Data requirements are increasing: bigger and richer data are required and do exist, especially the research funding organizations may have a task here to make those data accessible to the scientific community. (v) Finally, it is needed to extend the set of variables measuring merit, and adequately define and operationalize those criteria that implicitly or explicitly play a role in grant evaluation.

08:30-10:00 Session 13C: Global STI performance
Location: Room 225
08:30
Too Poor To Make a Difference in Science

ABSTRACT. Introduction The pursuit of improved living standards and economic opportunities in low-income countries has been a longstanding focus of scholarly debate (Peters et al., 2008). Central to this discourse is the recognition that technological innovation and active engagement in scientific knowledge production are vital for achieving meaningful and sustainable development (Yunus, 1998; Whitworth et al., 2008). Yet, empirical evidence consistently demonstrates that high-income countries dominate the global science network, while low-income countries remain increasingly marginalized (Schott, 1998; Ribeiro et al., 2018). Research productivity, which reflects a researcher’s capacity to generate knowledge, is widely regarded as a critical driver of international collaboration (Abramo et al., 2017). However, despite evidence showing that researchers in low-income countries are not consistently less productive than their high-income counterparts (Lee and Bozeman, 2005), it appears that a country’s income level plays an even more significant role in shaping its integration into global scientific networks.

Model To address this phenomenon, we construct a theoretical model of country reputation in the global science network using a Cobb-Douglas function: Ri = φi^(α(Qi)) Qi^β ui^ν

where Ri represents a country's research reputation in the scientific network, reflected by the number of collaborations it can attract. φi denotes researcher productivity, while Qi is the quality of the science system, which reflects a country's capacity to invest in human and physical capital and support scientific social activities, and can be proxied by the country's income level. ui captures other unobserved idiosyncrasies, and α, β, and ν are the input elasticities of the determinants. The reputation function illustrates that the quality of the science system Qi and researcher productivity φi are substitutable in building country reputation Ri. Meanwhile, it also indicates that the impact of researcher productivity φi on reputation is moderated by the quality of the science system Qi. Being part of the exponent of researcher productivity φi, the quality of the science system Qi can enhance or constrain the contribution of researcher productivity φi in attracting international collaboration and building country reputation Ri. Therefore, the substitutability between the quality of the science system and researcher productivity is moderated by the quality of the science system. Consequently, we hypothesize that in low-income countries, researcher productivity does not sufficiently compensate for deficiencies in the quality of the scientific system. This means that countries with a low-quality science system face barriers to participating in international collaborations, regardless of their research productivity.

Data and Method We use journal publication information from Scopus and GDP per capita data from the World Bank. Between 2000 and 2022, we identified 1,965,642 publications from 1,671,837 authors of 184 countries in the field of Business and Economics. To identify potential structural breaks in the substitutability between the quality of the science system and research productivity in shaping a country's reputation in science, we utilize the threshold approach by Hansen (1999). The regression model obtained by log-transforming the Cobb-Douglas equation is: DCit+1 = β0 + β1 GDPpcit + β2·I(GDPpcit ≤ γ) CITit + β3·I(GDPpcit > γ) CITit + β Cit + μi + εit

We use degree centrality (DC) to proxy country reputation in the scientific network, with a one-year lead. GDP per capita (GDPpc) is the indicator for the quality of the science system. Aggregated number of citations (CIT) measures research productivity at the country level. Following Hansen (1999), we include an indicator function I. I equals one if GDPpc ≤ γ (income threshold) and zero otherwise. By incorporating another indicator function I(GDPpc > γ), we differentiate the slope coefficients for CIT. β2 applies to countries with GDPpc below γ, while β3 applies to those above this threshold. C represents the size of the science system, calculated by the number of researchers. Additionally, we include a fixed effect μi. The analysis is implemented at the country level.

Results As a result, countries with higher GDP per capita tend to attract more collaboration partners. However, only countries with a GDP per capita higher than the estimated threshold of USD 713 show a positive correlation between the citations of their researchers and the number of international collaboration partners. For countries with a GDP per capita below this threshold, the citation count of their researchers does not affect the number of international collaborators. In essence, while the quality of a science system is generally positively related to a country's scientific reputation, the relationship between researcher productivity and scientific reputation is more complex. When a country's income level exceeds USD 713, the productivity of its researchers becomes positively linked to its scientific reputation. However, this positive effect diminishes beyond a certain productivity level. Conversely, countries with an income level below this threshold struggle to establish their scientific reputation, regardless of their researchers' productivity. In other words, their research productivity cannot compensate for the low quality of their science system.

References: Abramo, G., D’Angelo, A. C., and Murgia, G. (2017). The relationship among research productivity, research collaboration, and their determinants. Journal of Informetrics, 11(4):1016–1030. Hansen, B. E. (1999). Threshold effects in non-dynamic panels: Estimation, testing, and inference. Journal of Econometrics, 93(2):345–368. Lee, S. and Bozeman, B. (2005). The impact of research collaboration on scientific productivity. Social Studies of Science, 35(5):673–702. Peters, D. H., Garg, A., Bloom, G., Walker, D. G., Brieger, W. R., and Hafizur Rahman, M. (2008). Poverty and access to health care in developing countries. Annals of the New York Academy of Sciences, 1136(1):161–171. Ribeiro, L. C., Rapini, M. S., Silva, L. A., and Albuquerque, E. M. (2018). Growth patterns of the network of international collaboration in science. Scientometrics, 114:159–179. Schott, T. (1998). Ties between center and periphery in the scientific world-system: Accumulation of rewards, dominance and self-reliance in the center. Journal of World-Systems Research, pages 112–144. Whitworth, J. A., Kokwaro, G., Kinyanjui, S., Snewin, V. A., Tanner, M., Walport, M., and Sewankambo, N. (2008). Strengthening capacity for health research in Africa. The Lancet, 372(9649):1590–1593. Yunus, M. (1998). Alleviating poverty through technology. Science, 282(5388):409–410.

08:45
Co-evolution of the global research collaboration network and the performance of nations in science and technology

ABSTRACT. Despite extensive research on the relationship between international research collaboration (IRC) and research performance in science and technology (S&T), existing research has mostly examined single or comparative case studies, relatively small samples composed of developed countries, and uni-directional relations between empirical indicators. Although large scale network studies of IRC are becoming more common, 1) drivers of IRC network formation and 2) effects of the IRC network on policy-relevant performance outputs tend to be analyzed separately. Large scale analysis of the reciprocal dynamic relationship between IRC and national performance has yet to be conducted.

This research tests network effects on performance and vice versa simultaneously using a longitudinal co-evolution model on three decades of global S&T network and performance data. We employ the stochastic actor oriented model (SAOM) framework, also known as Siena models, to analyze data on 166 countries from 1993 to 2022. Yearly IRC networks are constructed from Web of Science's XML database. Corresponding national S&T performance data is gathered from Elsevier's fractional field-weighted citation index (FWCI), which disentangles national from internationally attributed citation impact. The models also account for geographic distance, national wealth, population metrics, political governance, and endogenous network processes.

The preliminary results support the hypotheses with positive and significant estimates for both effects. However, geographic distance appears to play a critical role in the transmission of the social effect of performance on the IRC network. Indeed, not controlling for geographic distance renders this effect insignificant in the face of the endogenous network dynamic of preferential attachment. Further analysis will be conducted incorporating different sensitivity tests in addition to tests for disciplinary and temporal heterogeneity.

09:00
Collaboration Patterns and Research Impact: A Comparative Analysis of Nanoparticle Science in Public Research Institutes

ABSTRACT. 1.Introduction and Background Public Research Institutes (PRIs) in South Korea were established in the 1960s and 1970s to drive industrial growth and economic development. Following significant success in the development of industry and economy, academics, policymakers, and the public started to call for the PRIs to perform more leading and creative research beyond the adoption of advanced technologies invented elsewhere (Zastrow, 2020; Kim, 2010). This has been often argued based on the fact that Korean PRIs have yet to achieve notable global recognition, such as a Nobel Prize in sciences, despite the substantial investment in science and technology (e.g. reflected in one of the highest Gross Domestic Expenditures on Research and Development (GERD) globally) (Choi & Kim, 2019). Policymakers have sought to bridge this gap by promoting interdisciplinary and collaborative research as strategies to enhance novelty and impact (Jung, et al., 2021; Carayol & Thi, 2005). However, these efforts have been hindered by the absence of specific guidelines or metrics for fostering and evaluating effective collaborations. This study investigates the collaboration patterns of nanoparticle research groups to identify structural, contextual, and relational factors that drive high-impact research. The primary focus is on Center for Nanoparticle Research group at IBS, leaded by Prof. Taeghwan Hyeon, who is widely regarded as a leader in nanoparticle science and frequently mentioned as a potential Nobel Prize recipient (Clarivate, 2020). Comparative cases include Aleksey Ekimov’s Nobel-winning foundational work on quantum dots at the Vavilov State Optical Institute in Russia and Stefan Hell’s Nobel-recognized research on super-resolution microscopy at the Max Planck Society (MPS) in Germany. These groups, selected for their alignment in research focus, institutional context as a public research institute, and global influence, provide an opportunity to analyze how collaboration practices vary across high-performing public research institutes.

2.Theoretical Framework and Research Objectives The study applies network analysis to investigate how the structure, diversity, and connectivity of research collaboration networks influence knowledge creation, dissemination, and scientific breakthroughs (Uzzi & Spiro, 2005). Research networks have been widely acknowledged as key drivers of innovation, with studies emphasizing that specific structural features—such as centrality (influence within a network), density (frequency of connections), and bridging ties (connections between otherwise disparate groups)—can significantly impact the flow of information and the ability to generate novel ideas (Granovetter, 1973; Burt, 2004). Using network analysis, this study examines how the structural features of research collaboration networks influence knowledge creation and innovation, with a focus on three prominent nanoparticle research groups: IBS, Vavilov, and MPS. While IBS has yet to produce a Nobel laureate, Vavilov and MPS have achieved Nobel-recognized breakthroughs, offering an opportunity to identify distinguishing collaboration patterns and strategies. The objectives are to: • Analyze and compare the intra- and inter-organizational collaboration patterns of these groups. • Identify collaboration types (international, interdisciplinary, industry-academic) associated with impactful publications and innovation. • Investigate how network attributes like centrality, density, and diversity shape research outputs. • Provide actionable policy recommendations for Korean PRIs to enhance their collaboration strategies.

3.Data and Analytical Approach This study selects three prominent research groups in nanoparticle science to examine the role of collaboration strategies within public research institutions. The chosen cases represent diverse outcomes within similar institutional settings, providing a robust framework for comparative analysis. Hyeon’s Center for nanoparticle research group at IBS represents Korea’s aspirations for global recognition and provides a benchmark for evaluating current collaboration practices. Ekimov’s Nobel-winning research on quantum dots at Vavilov demonstrates the impact of long-term, cross-disciplinary collaborations on achieving groundbreaking discoveries. Hell’s work at MPS highlights how strategic international partnerships and access to advanced facilities drive innovation. Bibliometric data of each research group was collected from Scopus and the Web of Science to enable comparison of organizational-level research collaboration. Specific metrics include co-authorship frequency, field-normalized citation impact, and collaboration types (international, interdisciplinary, industry-academic). Social Network Analysis (SNA) will calculate structural properties such as network centrality, while international connectivity and diversity metrics will measure the scope and composition of collaborations. Institutional policies and historical contexts will also be reviewed to understand their influence on research collaboration patterns. By integrating quantitative metrics with qualitative insights, this approach will offer a comprehensive understanding of how collaboration structures drive high-impact research in public research institutes.

4.Expected Contributions and Potential Impact This research is expected to provide actionable insights for improving the effectiveness of research collaborations at Korean PRIs. By identifying specific collaboration patterns and network characteristics associated with high-impact research, the findings can inform the design of policies and strategies to enhance global scientific recognition. Additionally, this study will offer broader implications for public research institutes worldwide by highlighting useful practices in fostering innovation and research novelty. The results aim to bridge the gap between Korea’s high investment in science and its pursuit of Nobel-level achievements, contributing to its transformation into a global research leader. By expanding understanding of collaboration networks across diverse institutional contexts, this research aspires to serve as a blueprint for building impactful and innovative research ecosystems.

5.Bibliography Burt, R. S., 2004. Structural Holes and Good Ideas. American Journal of Sociology, 110(2), pp. 349-399. Carayol, N. & Thi, T. U. N., 2005. Why do academic scientists engage in interdisciplinary research?. Research Evaluation, 14(1),pp.70-79. Choi, S.-M. & Kim, J.-H., 2019. Parliament blames "lack of 'effort' in science and technology" for absence of Nobel Prize(In Korean). [Online]Available at:https://www.news1.kr/articles/?3741395[Accessed 20 8 2021]. Clarivate, 2020. Citation Laureates 2020: The giants of research. [Online] Available at: https://clarivate.com/academia-government/blog/citation-laureates-2020-the-giants-of-research/[Accessed 20 11 2024]. Granovetter, M. S., 1973. The Strength of Weak Ties. American Journal of Sociology, 78(6),pp.1360-1380. Jung, Y., Kim, E. & Kim, W., 2021. The scientific and technological interdisciplinary research of government research institutes: network analysis of the innovation cluster in South Korea. Policy Studies,42(2),pp.132-151. Kim, S.-R., 2010. Hwang Chang-gyu, Head of R&D Strategic Planning Team said "PRIs Should Turn into Industry-led 'first-mover'"(In Korean). [Online]Available at:http://www.dt.co.kr/contents.html?article_no=2010042202010351614002[Accessed 25 1 2022]. Uzzi, B. & Spiro, J., 2005. Collaboration and Creativity: The Small World Problem. American Journal of Sociology,111(2),pp.47-504. Zastrow, M., 2020. Boosting South Korea’s basic research.[Online] Available at:https://www.nature.com/articles/d41586-020-01464-9[Accessed 19 6 2021]

08:30-10:00 Session 13D: Transition Policy
Chair:
Location: Room 331
08:30
How to implement mission-oriented innovation policy – The case of the German Energy Research Program

ABSTRACT. Research questions The urgency of transforming Germany’s energy system has increased due to the Russian war against Ukraine, tightened emission reduction targets, and higher renewable energy capacity objectives. Therefore, the German Federal Ministry for Economic Affairs and Climate Action initiated a reconceptualization process for its Energy Research Program (ERP) in 2023 to address the urgency of the transformation with a more mission-oriented policy approach. The ERP consists of various measures for direct R&D project funding, project funding for living labs, and institutional funding for energy-related public research facilities. Taking this practitioner’s perspective on mission-oriented innovation policy, this study analyzes the implementation process via a document analysis and interviews, covering the relevant actors for the ERP and the stakeholders linked to ERP activities, such as firms and research facilities (cf. Haddad, Nakić, Bergek, & Hellsmark, 2022; Schmidt, 2018). To understand this process for the case of the project funding activities of the German ERP, this study formulated the following research questions: • How can missions be implemented according to the goals of concrete policies? • How can project funding instruments be operationalized and monitored based on these mission-oriented policy goals? These research questions are relevant for practical discussions on mission-oriented innovation policymaking and implementation. From a practical viewpoint, it needs to be discussed how the relevant knowledge of the different energy fields and the relevant policy actors, both within the ministry and in implementing funding calls, can be included. Thus, there is a need for a governance concept to implement a successful mission-oriented innovation policy (Ghazinoory, Ranjbar, & Saheb, 2024). Mission orientation represents a paradigm shift in innovation policymaking, which Daimer et al. (2012) describe as a “normative turn.” Market- and system-failure rationales focus on funding innovation activities such as basic research with external effects, or on supporting learning capabilities and interface management to guarantee knowledge flows and interactions (Aghion, David, & Foray, 2009; Daimer et al., 2012; Schmidt, 2018). Wittmann et al. (2024) offer a comprehensive guide to formulating effective missions within policy frameworks, highlighting the importance of precise mission formulation and emphasizing that the success of mission-oriented policies hinges on clear, realistic, and context-specific goals. Though they provide a typology of instruments for different types of missions, depending on the breadth and duration of the negotiation process (Wittmann et al., 2024, p. 26), the operationalization, and hence implementation, of the missions remains somewhat unclear, as their paper focuses on mission formulation and goal derivation, while the governance aspect is less thoroughly addressed. In general, as Ghazinoory et al. (2024) show in their literature review, research activities increasingly focus on governance for new mission-oriented or transformative STI policies.
Though there are some insights on how specific types of innovation require specific governance modes and on what roles the state can exert with its innovation policy (Borrás & Edler, 2020; Ghazinoory et al., 2024; Reichardt, Rogge, & Negro, 2017), mission-orientation and transformation research requires more insights into the governance challenges from practitioners’ perspectives (Haddad et al., 2022). According to Hufty (2011), governance is a formal and informal, vertical and horizontal process of interface and decision-making among the actors involved in a joint problem, which can result in “the creation, reinforcement, or reproduction of social norms and institutions”. Governance in this sense is neither prescriptive nor normative, nor does it presuppose vertical authority and regulatory power. This understanding is commonly used to analyze the governance of innovation policy (e.g., Fagerberg & Hutschenreiter, 2020). A program represents an aligned bundle of instruments pursuing a specified objective. However, aligning instruments or programs with government missions and other related policies is just one of the tasks that mission-oriented innovation policies require. To account for the interviewees' activities, we used the dynamic capabilities approach for public sector agencies in innovation policy by Spano et al. (2023) and Kattel (2024).

Methodology The suitable research design for this purpose is a qualitative case study (Yin, 2018, p. 123) grounded in in-depth interviews and a document analysis, in line with established methodological frameworks (Miles et al., 2014; Saldaña, 2013). Our document analysis examined public and non-public documents, following Bowen (2009). We used different types of documents, such as public statements by the BMWK.

Findings Two-thirds of our interviews have already been conducted and coded. However, the process could not be completed because the ministry has not yet implemented the governance framework according to the 8th ERP. The delays resulted from external shocks, such as the shift away from Russian natural gas and the political disruption of the then-governing coalition. As of now, the mission orientation demands breaking up the technological silos in the operating agencies and aligning processes with the missions in the ERP. This ensures a consistent derivation of program goals, subgoals, goals of the measures, and KPIs. In turn, this consistency in goal derivation enables the program to be monitored, steered, and controlled along the missions. Generally, the ministerial actors faced challenges in realigning the traditionally technologically categorized funding calls to the new missions, since the entire research community is used to the former structure. Thus, their structures and the project operator’s structures follow the “old” scheme, and the first funding call mirrored this structure and is not aligned with the missions in the ERP. Nevertheless, there is political support for a mission-oriented ERP, and one cornerstone of this character, the board, was installed in October 2024, comprising scientists, representatives from industry associations, and other stakeholder groups. This board provides reflexivity through its feedback mechanism and the external coordination of the ERP with, for example, industry. Moreover, there has been uncertainty about monitoring, steering, and controlling the program with suitable KPIs, and the process of selecting KPIs that account for the character of a mission-oriented learning ERP is still incomplete. Additionally, there are pilot processes within the operating agency of the ministry to reshape processes and break technological silos. They will be concluded in spring 2025; therefore, the final analysis can incorporate these findings for the conference in May 2025.

08:45
Can traditional STI instruments pursue transitional goals? The Swedish Strategic Innovation Programmes

ABSTRACT. There is international consensus that it is urgent to address the ‘societal challenges’ – the climate, but also wider issues of resources, the environment and sustainability – that involve ‘wicked’ problems whose solutions will be sociotechnical rather than purely technical in character. The science, technology and innovation (STI) research community has been instrumental in identifying these challenges, whose resolution requires ‘systems innovation’ and implementation across much more of society than is traditionally involved in STI. Policy responses are caught in an ‘STI trap,’ largely confined to the STI policy community, its organisations and instruments, and failing adequately to coordinate with and activate other important parts of society. Despite some governance rigidities, Sweden’s well-performing innovation system, which has also been innovative in policy terms, provides a natural laboratory for exploring this. We build on our history of post-war STI policy in Sweden and our analysis of an attempt to make STI policy more transitional through ‘Strategic Innovation Programmes’ (SIPs), identifying opportunities and limits to modifying even the most modern innovation funding instruments for transitional purposes, and conclude with options for breaking some of these barriers.

Context Sweden established one of the world’s first innovation agencies (STU, now Vinnova) in 1968 and has since been among the leading countries in developing new innovation policies. However, like other countries, it makes STI policy within specific historic and organisational path-dependencies. History has produced a decentralised and fragmented STI policy system, with limited whole-of-government coordination, many more actors than in most countries, and a long tradition of hostility between the education and industry ministries. Our exploration of developments in Swedish STI policy since the 1960s suggested that the prevailing organisations and governance would not be able to adapt adequately to the more coherent systemic approach needed to tackle societal challenges. Swedish efforts to develop and implement transitional policies therefore offer policy lessons but, as elsewhere, these need to be interpreted in the light of the idiosyncrasies of the national innovation system.

The Strategic Innovation Programmes are 17 substantial public-private partnerships aiming to design and implement strategic innovation agendas over 12 years. Launched in four tranches, their original goal was to support industrial innovation, but the government quickly extended this to include sustainable development. Hence a central interest in studying them is to understand how malleable such a funding instrument is to taking on more transitional purposes. Each SIP has been evaluated after 3, 6, and 9 years, to provide a basis for re-funding decisions. We led the 6-year evaluations, adding a policy learning component in which we separately analysed the SIPs’ performance using a framework built on the transitions literature. This combined technological innovation system (TIS) functions, whose origins are in explaining the creation of new TIS and which are therefore especially relevant to strategic niche management, with transition management functions drawn from the wider literature. The SIPs were all to varying degrees successful in their industrial innovation and competitiveness goals. We were able to show that, in relation to transition, 10 of the SIPs acted as ‘reinforcers’, incrementally improving innovation activities in TIS dominated by large incumbents, or in two cases in SME-dominated branches; 5 were ‘transformers’ focused on sustainability transitions in areas like transport infrastructure and circular economy; one was a mixed case, starting as a reinforcer and evolving into a transformer; and one was a TIS-builder, aiming to establish supply chains and an ecosystem in graphene applications. The programme as a whole was originally dominated by reinforcers, which tended to have roots in mature industrial branches with large incumbent companies, but the balance shifted towards transformers through time. The transformers and the TIS-builder made more use of the TIS functions than the older SIPs. Rather than obstructing change as the transitions literature would suggest, many of the large incumbents tried to embrace it, in the hope of becoming leaders in new markets that could replace their traditional ones. Based on the evidence from the SIPs, the functions we used sorted themselves into three categories: • Traditional innovation functions such as knowledge development and diffusion that can easily be performed using the SIP instrument • Other functions such as establishing directionality, reflexivity, demand articulation and setting priorities that were permissible within traditional innovation policy instruments but are not traditionally encouraged in innovation policy • Functions that are beyond the powers of members of the STI community, such as new market creation and exnovation The SIP example – and others internationally – therefore suggests that while there is potential for some traditional innovation policy instruments to play roles in transitions policy, they are caught in the STI trap and tend to lack the scope to perform all the functions needed to trigger transformative systems innovation. Subsequent developments in Sweden – notably the new challenge-based Impact Innovation programme – have been influenced by the SIPs, but are nonetheless constrained by the same limits to power and practices.

Extending the scope of transition policy beyond the STI community will require coordination with other authorities outside the STI sphere. International examples include multi-ministry initiatives, public-private partnerships, all-of-government initiatives governed at a high level in government such as a cabinet office, or free-standing platforms outside, but answering to, government. Unless such governance mechanisms, which connect and coordinate the STI sphere with other parts of government and society, are used, interventions intended to be transitional will be limited in scope. Alternatively, they will be possible only at limited scale, as in the many smart and sustainable city-level interventions, which are enabled by the breadth of authority that cities can muster internally. While inter-agency cooperation within STI is well established as a practice in Sweden, other mechanisms have been little used. It appears that the fragmented, rather bottom-up style of governance that served Sweden well during the period of traditional innovation policy since the late 1960s has become an obstacle in a period when more transformational changes are needed, tending to confirm our earlier hypothesis that structural reforms would be needed in the Swedish STI funding system in order to achieve transitions.

09:00
Transforming STI policy for sociotechnical transitions: The OECD Agenda for Transformative STI Policies

ABSTRACT. Economies and societies need to transform to meet multiple challenges, including climate change, biodiversity loss, disruptive technologies, and growing inequalities. Science, technology and innovation (STI) can make essential contributions to these transformations, but governments may need to be more ambitious and act with greater urgency in their STI policies to meet these challenges. Sustained investments and greater directionality in research and innovation activities are needed, and these should coincide with a reappraisal of STI systems and STI policies to ensure they are “fit-for-purpose” to contribute to transformative change agendas.

The OECD Agenda for Transformative STI Policies (TrA) provides high-level guidance to STI policymakers to help them formulate and implement reforms that can accelerate and scale-up positive transformations in the face of mounting global challenges. The TrA was published at the OECD Committee for Scientific and Technological Policy (CSTP) meeting at Ministerial level in April 2024 (see https://doi.org/10.1787/ba2aaf7b-en). It was a prominent component of the meeting and its main messages were subsequently incorporated into the meeting’s “Declaration on Transformative Science, Technology and Innovation Policies for a Sustainable and Inclusive Future”, which was signed by 44 countries and the European Union.

The TrA proposes three transformative goals for STI to pursue: (i) Advance sustainability transitions that mitigate and adapt to a legacy of unsustainable development; (ii) Promote inclusive socio-economic renewal that emphasises representation, diversity and equity; and (iii) Foster resilience and security against potential risks and uncertainties.

There are synergies and trade-offs between these transformative goals, particularly in the context of ongoing political debates that sometimes pitch economic competitiveness goals against sustainability transitions and energy security, for example. And there are likely multiple pathways for reorienting STI policies and systems to meet these goals. The TrA outlines a common set of STI ‘policy orientations’ for governments to implement to help drive transformative change, namely the need to: • Direct STI policy to accelerate transformative change • Embrace values in STI policies that align with achieving the transformative goals • Accelerate both the emergence and diffusion of innovations for transformative change • Promote the phase out of technologies and related practices that contribute to global problems • Implement systemic and co-ordinated STI policy responses to global challenges • Instil greater agility and experimentation in STI policy

Many of the necessary reforms are familiar to the STI policy community, but barriers remain, for example, in scaling-up and institutionalising policy innovations. Moreover, transformative change is often associated with radical reforms, but small incremental changes may cause a system to shift qualitatively where it is close to a tipping point. This perspective lies at the heart of the TrA and acknowledges that bringing about a fundamental transformational change in STI will require changes across many fronts, adapting as lessons are learnt on what does and does not work.

Accordingly, in translating these policy orientations into concrete actions, the TrA provides high-level guidance for ten STI policy areas where there are opportunities to facilitate the transformation of STI and STI policy systems. These policy areas cover all aspects of STI policy and governance, including the following issues: 1. How to direct public STI funding and private financing to support transformative change? 2. How to gear research and technology infrastructures towards transformations? 3. How to leverage enabling technologies to advance transformations? 4. How to nurture the skills and capabilities required for STI-enabled transformation? 5. How to ensure structural and market conditions are conducive to transformation? 6. How to develop and use strategic intelligence to guide transformation? 7. How to engage society in STI to further transformative change? 8. How to deepen STI co-operation between innovation system actors for transformation? 9. How to promote cross-government coherence to help coordinate STI-enabled transformations? 10. How to leverage international STI co-ordination to support transformation for the public good?

To complement this high-level guidance, the OECD is also developing policy toolkits, supporting peer learning on specific policy challenges, and providing country-specific support services. These activities are being mainstreamed and embedded across the OECD’s STI work programme where they can be co-produced with policymakers and experts.

The presentation will (i) outline the TrA, its rationales and its positioning vis-à-vis other OECD STI activities; and (ii) highlight some of the challenges in formulating and rolling out the TrA, particularly with regards to its wide breadth, the tensions and trade-offs between some of its elements, and the complexity, uncertainty and long-term nature of transformations. The presentation will also describe some of the policy guidance and toolkits currently under development and the results of their testing in country settings.

*Current role: The authors are policy analysts working in the OECD who developed the TrA and are responsible for its promotion and rollout.

09:15
Organising mission-oriented innovation policy around systems of use innovation and platforms

ABSTRACT. This paper explores how systems of use innovation, mission-oriented innovation policy (MOIP), and platform organization theory can address societal challenges through effective policy implementation. The specific focus is the green transition, examining the steel and aluminum industries, which together contribute approximately 10% of global CO₂ emissions. The research addresses the following key questions: 1. How can effective ‘mission-oriented innovation policies’ (MOIP) be designed and implemented? 2. In what ways can bottom-up ‘systems of use innovation’ contribute to industrial transition processes? 3. How can a model of effective mission-oriented innovation policy be developed by integrating elements of ‘platform theory’ and systems of use innovation theory?

By addressing these questions, the study aims to contribute to a deeper understanding of policy design and delivery for sustainable industrial transformation. The research methodology encompasses empirical data collection through personal interviews utilizing semi-structured schedules, complemented by data extraction from industry documentation and video presentations. Additionally, an analytical review of pertinent research and policy documents is conducted. Concepts derived from research literature inform and enhance the analysis.

Mission-oriented policymaking typically needs to unify various governmental policy actors to address critical societal challenges. A prominent example is the global push for zero CO₂ emission industries, which inevitably involves efforts by multiple government ministries and agencies. However, such efforts frequently lack a nuanced understanding of the needs and incentives of end users and innovators—the key actors whose behavior these policies aim to influence. Without integrating insights from these stakeholders, the complex interaction effects of policy changes remain poorly understood, undermining the overall effectiveness of mission-oriented strategies.

The presented literature analysis explores the current landscape of mission-oriented innovation policymaking and its limitations, particularly in addressing the systems of use for innovation—an essential framework for achieving successful transitions. The ‘green steel’ initiative serves as a case study to illustrate the gaps in current approaches, demonstrating how end users and systems of use innovation are often inadequately considered. Additionally, the paper examines the adversarial dynamics that can arise between governmental regulators and industries affected by mission-oriented policies. Differing timelines, priorities, and incentives among stakeholders often hinder collaborative efforts, despite the shared objective of societal progress. While governments typically focus mission-oriented initiatives on grand challenges, this paper raises the question of whether a similar approach can effectively address smaller, yet impactful, innovation challenges by fostering more integrated policymaking across silos.

In the context of climate change—arguably the defining challenge of our time—these issues are particularly pressing. Unchecked climate change could cost the global economy USD 178 trillion over the next 50 years, equating to a 7.6% reduction in global GDP by 2070. Transitioning to a sustainable economy requires urgent action, including a decisive shift away from fossil fuels. This paper analyses MOIP and systems of use innovation alongside the platform model, as interconnected tools to drive the green transition. While MOIP embodies top-down state intervention to steer societal transformations, systems of use innovation offers a complementary bottom-up approach, fostering solutions at individual, enterprise, and value network levels. The platform model is proposed as a potential framework for organizing mission-oriented initiatives. Through the integration of these perspectives, this paper offers a comprehensive analysis of mission-oriented innovation policy, systems of use, and the platform model as vital components for addressing the green transition challenge. The findings aim to guide policymakers in designing more effective, inclusive, and sustainable strategies for achieving societal transitions.

The analysis uncovered several key findings. It highlights systems-of-use innovation as a critical concept in industrial decarbonization, representing the primary process that must transition to carbon-free operation. The system-of-use owner plays a pivotal role in steering the core process transition. Beyond core industrial processes, such as steel production, the goal of the green transition is to ensure that the entire value network is carbon-free. Moreover, downstream activities, including processing steel into finished products, must also eliminate carbon emissions. A successful green transition in systems of use requires diverse inputs, such as new technologies and processes, fossil-free electricity, hydrogen and iron ore pellets, upgraded infrastructure, enhanced knowledge and skills, adaptive regulations, financing, and captive markets. No single government body or financial institution can address all these needs, as responsibilities are distributed across numerous public and private sector organizations. This often leads to significant coordination challenges.

Platform organization theory provides valuable insights into overcoming fragmentation and coordination issues, particularly within the framework of mission-oriented innovation policy. A platform’s core functions can be defined and, crucially, supplemented by a variety of optional elements essential for transitioning systems of use to carbon-free operations. In this way, platforms can bring together critical resources, regulatory powers, and expertise, enabling the multi-stakeholder collaboration essential for addressing complex challenges like decarbonising a particular industry. Platforms create shared spaces for interaction, communication, coordination, and goal alignment among diverse actors such as government agencies, research institutions, private companies, and non-profits. By leveraging network effects, platforms accelerate the diffusion of innovations. Their strength in data collection and analysis provides insights that inform policy decisions and innovation strategies, ensuring adaptability to stakeholder needs.

Despite the proliferation of policy-related platforms, a concrete link to system-of-use innovation and associated problem-solving is often absent. Addressing this gap could significantly enhance the effectiveness of platforms in supporting the green transition. By integrating platform organization principles, stakeholders can overcome coordination challenges and drive the systemic innovations necessary for a sustainable, carbon-free future.

08:30-10:00 Session 13E: Risk and Governance of Emerging Technologies
Location: Room 235
08:30
Addressing the Paradoxical Nature of Emerging Technologies in Transformative Policies

ABSTRACT. Background Governments struggle to deal with technological changes in ways that promote socioeconomic gains while preventing harm. However, developing policies for emerging technologies is complicated as they can simultaneously be tools to solve problems and generate new societal challenges. This duality reveals a paradox in emerging technologies. Embracing this paradox requires multi-faceted interventions to address uncertainty and the co-evolutionary dynamics between policy, technology, and society (Edmondson et al., 2019; Haddad et al., 2022; Pfotenhauer & Jasanoff, 2017).

Prior work has considered technological change as an exogenous element in policymaking (Edmondson et al., 2019; Rogge & Reichardt, 2016). Thus, existing frameworks seek to cope with socio-technical change by minimizing contradictions and conflicts in policy mixes (Edmondson et al., 2019; Forster & Stokke, 1999; Haddad et al., 2022). We depart from prior literature by analyzing contradictions within emerging technologies policy mixes, highlighting how maximizing coherence and consistency may not always be possible or desirable.

Drawing from paradox theory (Brunswicker & Schecter, 2019; Smith & Lewis, 2011), transformative innovation (Haddad et al., 2022; Schot & Steinmueller, 2018), and policy mixes (Edmondson et al., 2019; Rogge & Reichardt, 2016), we develop a framework that illustrates how policies interact with emerging technologies. We contribute by extending frameworks for policy mixes by acknowledging the paradoxical nature of emerging technologies and providing insights about the mechanisms policymakers use to navigate the lack of coherence and consistency within and across policies.

Theoretical Framework Emerging technologies can simultaneously be a tool to solve societal problems and generate new societal challenges. Consider, for example, artificial intelligence (AI). As a tool, AI can shorten medical imaging times, reduce energy consumption, and increase efficiency in medical care (Doo et al., 2024). Stakeholders in healthcare can “pull” on the development of AI, influencing policies related to both AI and healthcare. Conversely, using AI in healthcare raises new challenges, such as data privacy issues, biases in algorithm training, and ethical considerations (Acemoglu, 2021; Bottomley & Thaldar, 2023). Challenges will mobilize stakeholders to “push” for policies to deal with them, such as new regulations.

As a result of the pull and push forces, dedicated policies are created for emerging technologies, which become part of and interact with other related policies. Policymakers must coordinate between different policy mixes to address the tool/challenge tension of emerging technologies (Jarzabkowski et al., 2022; Stone, 2011). Following our example, more than 60 countries have enacted AI policies (OECD.AI, 2021) comprising dedicated instruments (e.g., scientific funding, law reforms) and links to related policies (e.g., privacy regulation and healthcare strategies).

Elements within policy mixes are not necessarily coherent or consistent (Rogge & Reichardt, 2016). For example, AI strategies often have regulations to prevent use or increase the cost of high-risk applications alongside funding and incentives in the same areas (e.g., healthcare). The coexistence of contradictory elements within the policy mix is not necessarily something policymakers want to eliminate; it can result from a purposefully crafted strategy to deal with a complex problem with conflicting stakeholders.

When policymakers “walk through” paradoxical tensions, they confront them through iterative responses of splitting and integration, which can bring short-term performance alongside long-term sustainability (Smith & Lewis, 2011). For example, policymakers use policy strategies such as discursive techniques, shifting temporal priorities, etc. (Jarzabkowski et al., 2022; Smith & Lewis, 2011; Stone, 2011). The latter is only possible when policymakers have a “paradox mindset” that allows them to thrive amid tensions (Miron-Spektor, Ingram, Keller, Smith, & Lewis, 2018). This dynamic coordination of the policy mix system will provide directionality to the technology and, in turn, influence its evolution. For example, the Chilean AI strategy is firmly committed to using AI for climate change through initiatives such as using AI to prevent environmental law violations (MinCTCI, 2021). Its updated version has more specific actions, such as creating focused, dedicated research funding and strengthening data relevant to using AI to fight climate change (MinCTCI, 2024). However, it is ambiguous when dealing with AI’s impact on the environment, highlighting its relevance but avoiding a strong push that could generate enmity in the private sector.

Discussion Acknowledging the paradoxical nature of emerging technologies is crucial for policymakers. It helps them to navigate tensions and contradictions in policy mixes rather than striving for coherence. Further, embracing emerging technologies’ tool/challenge duality will influence technology policy to favor their priorities. Socio-technical systems reflect social and environmental needs (Diercks et al., 2019; Schot & Steinmueller, 2018), influencing expectations about technologies’ role in reaching desired futures (Fagerberg, 2018). In our framework, technological change is not “outside” the policy mix, as in prior work; instead, it becomes the core of a system of policies and strategies.

How policymakers navigate the system's paradoxes will influence how technology evolves. The dynamic coordination of policy mixes influences elements across multiple dimensions, such as institutional arrangements, funding instruments, public opinion, and political priorities. The resulting assemblage will drive technological evolution, influencing how it varies and which technologies and uses are acceptable (Grodal et al., 2023).

Policymakers do not necessarily strive for coherence and consistency, and how they design and implement mechanisms to generate a dynamic equilibrium of the paradoxical system is not trivial. For example, using experimental approaches to deal with uncertainty and conflicting interests (e.g., living laboratories and sandboxes) requires adequate institutional arrangements, resources, and skills often unavailable in the public sector. Other mechanisms, such as discursive techniques to reconcile conflicting stakeholders and technological paths, require policymakers to have a “paradox mindset” and political abilities to craft the right discourses for different conflicting elements. Moreover, managing short- and long-term horizons is not always possible when changing government coalitions makes policy continuity challenging.

We provide policy implications for crafting policy mixes that acknowledge the paradoxical nature of technologies. Our work can support how governments prepare themselves with the right assets to develop emerging technology policies. Moreover, our work serves as an analytical lens for policymakers to analyze the policy ecosystem and consciously strategize how to navigate paradoxes to maximize socioeconomic benefits and prevent harm from technological change.

08:45
Proactive Approach to Sociotechnical Transitions: Linking Technology Evolution and Barrier Resolution

ABSTRACT. Sociotechnical transitions are often framed as consequences of technological advancement, with pervasive and broadly applicable technologies serving as the catalysts for these transformations. These technologies, due to their configuration, undergo an evolutionary process where they agglomerate capabilities, gradually acquiring diverse functionalities before becoming the established technology in a sociotechnical regime. Concurrently, various barriers – technical, social, cultural, or economic – impede the realization of envisioned sociotechnical system transitions. This study posits that the sequence of barrier resolution and the process of technological capability agglomeration are interconnected, presenting an opportunity to address transition barriers proactively during technology development.

Using a narrative review of extant literature from diverse sociotechnical transition cases, this research explores the alignment between these sequences. The research yields three insights: (i) identification of the sequential nature of barriers and technological capabilities; (ii) delineation of barriers according to the types of capabilities (technical, social, economic, cultural) required to address them; and (iii) development of a pathway for proactively addressing these barriers through strategic acquisition of capabilities during technology development.

The significance of this study lies in its effort to bridge the gap between the resolution of transition barriers and technological evolution. By providing stakeholders with a framework to facilitate transitions through informed decision-making during the technology development process, we offer a nuanced perspective for proactively managing sociotechnical transitions. This research contributes to the broader understanding of sociotechnical systems by highlighting the interplay between technological evolution and barrier mitigation. It provides valuable insights for policymakers, technologists, and other stakeholders involved in shaping future sociotechnical systems, enabling more holistic and proactive approaches to facilitate desired transitions.

09:00
Technology readiness level mapping as a basis for governance of emerging technologies

ABSTRACT. The novelty, fast growth and high impact that define emerging technologies demand the attention of society and so of policy makers (Rotolo, Hicks & Martin, 2015). Yet the uncertainty and ambiguity also characteristic of emerging technologies challenge the formation of appropriate sociotechnical governance responses (Collingridge, 1980), not least because many governance options are available (Borrás & Edler, 2020). In the late 2010s, the seemingly imminent introduction of autonomous vehicles (AV) onto public roads created considerable uncertainty within well-established patterns of governance of state transportation systems. To prepare for something whose final form and arrival time were unknown, states needed to understand how this technology would affect the efficient movement of people and freight and when AVs would appear on their roads. To meet this challenge, politicians and agencies in many states commissioned advice. The advisors drew from and/or participated in the then widespread industry, media and academic conversation around this emerging technology. Prominent in this conversation were predictions of the arrival time of autonomous vehicles. The public might be interested if this were more than a theoretical possibility. To guide their decision making, investors also wanted to know how close the technology was to being realized. Entrepreneurs who needed investments were happy to oblige by predicting their launch of automotive autopilots. In a crowded space of voices competing for attention, optimism reigned supreme. This oft-repeated dynamic has been formalized in the Gartner Hype Cycle, a diagram released every year on which AVs appeared between 2013 and 2017. The focus on predicting arrival time was the approach adopted in the reports states commissioned to procure advice. In this paper we argue that such predictions, precisely because they are motivated by commercial considerations, cannot be relied upon. Instead we leverage an approach used by companies and government agencies managing innovation that we suggest could provide a better basis for assessing emerging technologies and developing appropriate policy responses. We argue that to narrow down policy options, decision makers should examine where the technology is now. Hypothesizing that earlier and later stages of technology development require different policy treatments, we operationalize this idea by creating a mapping of technology readiness levels (TRL) to policy options. We examine the case of US state policymaking for autonomous vehicles, specifically the recommendations made by policy advisors, and map them to TRL levels. TRLs NASA invented the TRL framework in the 1970s as a tool to aid in managing development projects (Mankins, 1995). TRLs offer a way to grade a development project on how much progress has been made and how much remains to be done. Companies incorporate TRL assessments into periodic project reviews to decide whether funding should continue or end (Olechowski, Eppinger & Joglekar, 2015). TRLs have become something of a lingua franca among engineers and have been used in European funding programs (Olechowski, Eppinger & Joglekar, 2015). Methods Our analysis began with a search for reports on autonomous vehicles commissioned by states. Our search terms combined the state name, report, DOT, AV, autonomous vehicle, driverless, and CAV. Requiring all the terms did not work, but interchanging them brought good results.
Terms such as DOT followed by state name, CAV, and "report" were the most effective. Of course, our method suffered the same challenges relevant to all work focusing on the grey literature. Namely, we likely missed some reports because we depend on reports being posted online (Grimmer, Roberts, and Stewart, 2022) and are also vulnerable to the "found data" problem - not knowing what the universe of reports looks like and assuming the found reports are representative. We found 75 PDFs of state AV reports and extracted their 791 recommendations. We then inductively coded each recommendation into one or more of 25 categories: connected, consultant, data collection, data-privacy, definitions, DoT, drivers, education, fund, get out of the way, industry, infrastructure, insurance-legalization, insurance-liability test, legislation, misc, monitor-research, partner, pilot, plan, platooning, road maintenance, task force, test. Findings To align recommendations with the TRL framework, we first simplified the 9 TRL levels into 4: research, technology development, testing, and deployment. We then classified each recommendation as appropriate for one of the four levels. Table 1 shows the four levels and their correspondence to TRLs as well as the alignment between recommendation topic categories and the four levels. [not possible to display table in this format] The thought behind this is that in the early stages of an emerging technology, when laboratory research is the focus, appropriate policy responses are watchful: monitoring progress and commissioning research on possible implications, as well as doing nothing and ensuring that those under your jurisdiction also do nothing. As the innovation emerges into the development phase, when prototypes might be displayed, it may be time to commission a task force to assess the situation and develop recommendations, define the issues at stake and educate the public. When the technology is ready for real-world testing, you might want to attract pilot projects to position your jurisdiction at the forefront, foster economic development and inform your public about the future potential. Finally, when the technology is deployed, i.e. available for purchase, you need to be ready, in this case with rules on driver licenses, insurance arrangements, data privacy restrictions and even changes in road maintenance. When looking at the number of recommendations by category over time, we find a pattern predicted by the hype cycle (Figure 2). That is, the earliest reports contain the largest share of recommendations at TRLPM 4, almost half. After that, recommendations become more conservative, with a larger share at TRLPM 1, which then decreases over time as the share of level 3 increases. This suggests that the TRL mapping is a productive way of analyzing policy for emerging technology. In addition, TRL assessment is a tool that policy makers can use as a basis for policy making for emerging technologies.
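As a rough illustration of this mapping exercise, the sketch below tallies coded recommendations by simplified TRL phase; the category-to-phase assignments shown are illustrative examples under our reading of the approach, not the paper’s full Table 1.

```python
from collections import Counter

# Illustrative category-to-phase assignments (1 = research,
# 2 = technology development, 3 = testing, 4 = deployment);
# the paper's Table 1 defines the actual mapping.
PHASE_OF_CATEGORY = {
    "monitor-research": 1,
    "task force": 2,
    "definitions": 2,
    "pilot": 3,
    "test": 3,
    "data-privacy": 4,
    "road maintenance": 4,
}

def phase_shares(recommendations):
    """Share of coded recommendations falling in each simplified TRL phase."""
    counts = Counter(
        PHASE_OF_CATEGORY[c]
        for rec in recommendations
        for c in rec["categories"]
        if c in PHASE_OF_CATEGORY
    )
    total = sum(counts.values())
    return {phase: counts[phase] / total for phase in sorted(counts)} if total else {}

# Hypothetical coded recommendations from two state reports.
sample = [
    {"state": "X", "categories": ["pilot", "data-privacy"]},
    {"state": "Y", "categories": ["monitor-research"]},
]
print(phase_shares(sample))  # e.g. {1: 0.33, 3: 0.33, 4: 0.33}
```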

09:15
Impact of Government Facial Recognition Technology Sourcing on Facial Data Sharing: An Experimental Study in Digital Tax Services

ABSTRACT. Since 2021, US taxpayers have been required to undergo enhanced identity verification through facial recognition technology (FRT) with the Internal Revenue Service (IRS). The initial step of this FRT-enabled verification asks individuals to take and upload a selfie and a copy of personal identification documents to set up taxpayers' online tax accounts through an FRT system offered by ID.me, a commercial company specializing in FRT verification.

The collaboration between the IRS and ID.me aimed to reduce fraudulent accounts and enhance taxpayers' data security (Collier, 2022). However, skepticism arises about the IRS outsourcing facial recognition services to a profit-driven tech firm, especially regarding its data processing, use, and protection practices (Metz, 2022). In February 2022, some US Congress members voiced concerns about ID.me (Alms, 2022). Moreover, integrating FRT into online tax services raises fears of perpetuating systemic bias and discrimination. Concerns over FRT's racial bias and its impact on social justice have been raised. Recent studies indicate higher error rates for identifying people of color through advanced FRT algorithms (Buolamwini, 2022). Facing this firestorm of criticism, the IRS officially announced in February 2022 that it would drop FRT (IRS, 2022) and later announced the transition to a government-procured FRT system, Login.gov (Heckman, 2022). However, as of May 2024, ID.me remains the sole login means for anyone to access IRS online accounts despite concerns and proposed alternatives (Riley, 2023).

While governments have been adopting various new AI technologies in operations and delivering public services over recent years, public perceptions of government AI adoption, like FRT, play a crucial role in enhancing democratic accountability, public oversight, and digital transformations in public services (Brewer et al., 2023; Schiff et al., 2023). With an increasing focus on comprehending public perceptions of AI, recent studies explore perceptions of human and automation interactions in program participation (Miller et al., 2022), attitudes toward automated decision-making (Miller & Keiser, 2021), perceptions of justice regarding human and AI decision-making in school appointments (Alon-Barkat & Busuioc, 2023), and citizens' views on fairness and acceptance of rule-driven versus algorithmic decision-making (Wang et al., 2023). Nevertheless, little public administration and policy work has explored public attitudes about governments contracting biometric functions for online public services.

Government contracting-out of biometric services has a long history in the US. Collaboration between governments and the biometric industry on FRT for border control and security has been an established practice (Norval & Prasopoulou, 2017). However, FRT in everyday digital government services, as in the provision of digital tax services, is still emerging. It is unknown how FRT and other biometrics will influence public service provision and how the trend will continue to unfold. Ni and Bretschneider (2007) argue that contracting citizen-based information systems that involve critical government information and personal records, including tax and service data, requires special attention to privacy and security. Since the agents to whom individuals give their facial information may change, how the public or taxpayers adjust their risk perceptions and express their willingness is of both theoretical and practical interest.

Prior research on data sharing tends to agree that individuals' willingness to share personal information depends upon expected benefits, types of information being shared, and the agents who collect, manage, and use their data in specific contexts (Cheong & Nyaupane, 2022; Degli Esposti, 2014; Mossberger et al., 2023). While individuals expect security and efficiency in the digital tax service environment, to what extent are they, in fact, willing to share facial information, considered the most sensitive type of information, with governments? How is this affected by governments choosing a third party to collect facial data and the potential of governments opting for different sourcing structures? Understanding and addressing individuals' willingness to share facial data in response to various parties providing FRT services for digital tax not only promotes American democracy but also encourages active public participation and advances data security and privacy practices in digital public services for better policymaking.

Therefore, this study delves into the public's willingness to share personal data with governments through FRT. Informed by the literature on data sharing and the "publicness" of organizations, and relying on the contextual integrity framework, this study conducts a vignette experiment to answer the following research question: how does the willingness of individuals to share facial data with government agencies vary by government FRT sourcing structures? The findings contribute to the emerging AI and biometric literature on the responsible use of FRT and inform ethical policy for government contracting of new tech services.

08:30-10:00 Session 13F: Open Science
Location: Room 222
08:30
Differentiating Data Reuse in Scientific Publications

ABSTRACT. Background: Demonstrating and improving FAIR principles in research data remains challenging due to limited methods for gathering reliable success measures, especially in identifying and measuring dataset reuse and investigating underlying mechanisms. To address this challenge, new methods are needed to systematically differentiate dataset reuse from original use. Current methods for measuring dataset reuse primarily rely on repository usage tracking, which counts views and downloads on dataset web pages.[1] While these statistics offer insights into dataset impact, their use is limited by challenges in differentiating types of use, investigating downstream impact, and measuring use reliably in the face of analytics gaming and automated internet activity.

Rationale: A more reliable measure of research dataset reuse can be achieved through dataset citations in scientific research publications. The mention of a dataset in a publication implies that the dataset was reviewed, analyzed, and potentially reused by the author(s), going beyond viewing or downloading. The challenges here are two-fold: first, citing datasets is not yet a standard practice in science, and reference sections of scientific publications do not reliably cover all dataset citations, with many datasets being cited “informally”[2]; second, some mentions of datasets in scientific publications are made by the dataset producers, i.e., the researchers who developed the dataset and publication as part of one research program, and these instances need to be excluded from measures of dataset reuse. Given these needs and associated challenges, our team pursued the identification of research dataset mentions in scientific publications and the subsequent differentiation between publications by grant awardees who developed the datasets as part of their research (dataset producers) and publications by other researchers who reused those datasets (dataset “re-users”), for subsequent analysis of factors that relate to successful dataset reuse.

Research Questions: This project supports FAIR principles in research data by leveraging scientific publication data and metadata to understand biases not evident in repository data alone and provide evidence of dataset reuse and influencing factors. The project aims to address key questions that cannot be answered through repository data alone, including questions around (1) the extent to which dataset mentions and citations can be disambiguated by user (data producer or “re-user”); (2) the researchers who cite data and their characteristics (impact networks); (3) the differences between data that is reused and data that is not; and (4) the differences in practices around datasets that are reused and those that are not (including differences based on discipline, geography, funder, and dataset type).

Methods: To address the first of the two challenges mentioned above (informal data citations in publications), the project team searched for a set of AIDS/HIV datasets in the full text of a subset of Scopus-indexed peer-reviewed publications using machine-learning models. The team started with models that were previously developed in a Kaggle competition and subsequently used as part of the Democratizing Data pilot projects and NIH Generalist Repository Ecosystem Initiative.[3] The models were deployed to identify whether these datasets were mentioned in publication full text, and an additional model was used to search through references. To address the second challenge (differentiating dataset citations by data producers and “re-users”), the team is testing methods to differentiate between dataset producers and dataset re-users through a combination of approaches, focusing on elements such as overlap between study PI and publication author(s), overlap between study affiliation/location and publication affiliation(s), mention context, funding acknowledgments and identifiers, data availability statements, and other textual content.
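As a simplified stand-in for the machine-learning models described above, the sketch below shows the shape of the mention-search task using plain pattern matching; the dataset aliases and example text are hypothetical, and the project's actual models are trained classifiers rather than regular expressions.

```python
import re

# Hypothetical dataset names and alias patterns.
DATASET_ALIASES = {
    "Lung HIV": [
        r"\bLung\s+HIV\b",
        r"\bLongitudinal Stud(?:y|ies) of HIV-Associated Lung\b",
    ],
}

def find_mentions(full_text):
    """Return, per dataset, the sentences in which it appears."""
    sentences = re.split(r"(?<=[.!?])\s+", full_text)
    hits = {}
    for name, patterns in DATASET_ALIASES.items():
        contexts = [
            s for s in sentences
            if any(re.search(p, s, flags=re.IGNORECASE) for p in patterns)
        ]
        if contexts:
            hits[name] = contexts
    return hits

print(find_mentions("We reused data from the Lung HIV study. Methods follow prior work."))
```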

Results: Preliminary results comparing study/dataset metadata and publication metadata associated with Longitudinal Studies of HIV-Associated Lung Infections and Complications (Lung HIV) show that greater overlap between study location metadata on NIH-funded clinical trial datasets and affiliation metadata on scientific publications is indicative of those publications resulting from the dataset producer, while less overlap indicates the opposite. Here, the overlap was calculated based on string-matching text on location/affiliation data between the dataset metadata and publication metadata, and results were validated manually through reading publication text surrounding the dataset mention(s) in each article. These results support our hypothesis that overlap between metadata elements on affiliations indicates that the publications are likely to have stemmed from the original study/researchers. We hypothesize that the same will be true of metadata on funding identifiers, researchers, and publication context; for this reason, we anticipate that continued work linking dataset metadata to publication metadata for those publications that mention them will be able to help reliably differentiate dataset producer publications from dataset re-user publications.
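A minimal sketch of the metadata-overlap heuristic described above, assuming a simple token-based (Jaccard) overlap; the field names, threshold, and example records are illustrative assumptions rather than the project's actual string-matching procedure.

```python
import re

def token_overlap(a, b):
    """Jaccard overlap between two free-text metadata fields."""
    ta = set(re.findall(r"[a-z]+", a.lower()))
    tb = set(re.findall(r"[a-z]+", b.lower()))
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def likely_producer(study_locations, pub_affiliations, threshold=0.3):
    """High overlap suggests a producer publication; low overlap suggests reuse.
    The threshold is a hypothetical tuning parameter."""
    return token_overlap(study_locations, pub_affiliations) >= threshold

# Hypothetical study location metadata vs. publication affiliation.
study = "University of Pittsburgh, Pittsburgh, PA"
pub = "Department of Medicine, University of Pittsburgh"
print(likely_producer(study, pub))  # True: overlap points to a producer publication
```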

Next Steps and Conclusion: In addition to continuing the work mentioned above, next steps include investigating the drivers and barriers to data use/reuse by conducting a comparative analysis of owner-disclosed dataset information across reused, used, and unused datasets. Dataset characteristics, such as DOI presence, data type, granularity, delivery format, documentation quality, and linkage to publications, will be analyzed to understand factors promoting downstream data use/reuse. The hypothesis is that dataset quality and disclosure will be linked to greater downstream data use/reuse. Overall, this research is advancing FAIR principles in the research data ecosystem by developing robust methodologies for distinguishing between data use and reuse. This will enable insights into variables that influence data reuse, allow the identification of impactful researchers both as data sharers and data “re-users”, and ultimately incentivize data contributions toward a more open scientific community.

Acknowledgements: Research reported in this publication was supported by the Office of Data Science Strategy of the National Institutes of Health under award number 1OT2DB000002. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

References: [1] Fenner, M., Lowenberg, D., ... & Chodacki, J. (2018). Code of practice for research data usage metrics release 1. https://doi.org/10.5281/zenodo.1470551 [2] Irrera, O., Mannocci, A., Manghi, P., & Silvello, G. (2023, September). Tracing data footprints: Formal and informal data citations in the scientific literature. In International Conference on Theory and Practice of Digital Libraries (pp. 79-92). Cham: Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-43849-3_7 [3] New York University, Elsevier, & Johns Hopkins University (n.d.). Democratizing Data Search and Discovery Platform User Guide. https://democratizingdata.gitbook.io/userguide

08:45
Competition or Diversion? Effect of Public Sharing of Data on Research Productivity of Data Provider

ABSTRACT. Sharing research data is crucial for advancing scientific progress, and various institutional efforts have supported this endeavor. However, scientists often hesitate to make their data publicly available due to concerns about the potential negative impact on their research productivity. They fear that sharing data may enable competitors to address similar research problems, thus intensifying competition and limiting data providers’ exclusive publication opportunities. Although the existence of this “competition” effect is acknowledged, literature on scientists’ strategic choice of research problems within priority-based scientific reward systems, their competitive behavior in resource sharing, and their reasons for seeking others’ resources for research raises theoretical ambiguity regarding these concerns. Data providers have a time advantage in publishing their findings before data recipients because they have a head start using the data for research. Consequently, data recipients may consider it risky to compete with data providers for the same publication opportunities; they may therefore pursue different research inquiries, or only those who can use the data to address different research inquiries may be willing to use the data in the first place (referred to as “diversion”). The probable presence of the “diversion” effect prompts us to ask an empirical question about how public sharing of research data affects the research productivity of data providers in practice. Despite the importance of this question for improving institutional designs that support scientists’ sustainable data-sharing practices, existing research provides limited clues. For instance, studies have examined scientists’ internal motivations for resource-sharing, their resource-sharing practices, and whether scientists gain benefits from public data sharing, such as increases in citations to their research works. Although these studies contribute to understanding what motivates scientists to share their research data by shedding light on the “benefit” side of doing so, their salient concern in sharing their data—the possibility of losing publishing opportunities—has remained underexplored. This research dearth may be due to the empirical challenges in identifying the causal impact of data sharing; because data sharing is often determined by scientists’ endogenous decisions (i.e., scientists are likely to share data based on the expected impact on their research productivity, or those who expect “rewards” such as increased citations to their research work are more willing to share their data in the first place), it has been challenging to investigate the causal effect of “public sharing” of research data on the research productivity of data providers. The lack of evidence of the impact is especially notable considering the recent development of various institutional measures to encourage research data sharing as part of the open science movement. Because the success of these efforts hinges on the extent to which data-sharing scientists gain benefits or encounter disbenefits from sharing their data, the evidence can serve as an important foundation for improving policy to encourage data-sharing practices among scientists. To fill this gap, we investigate the impact of scientists’ public sharing of research data on their research productivity and the underlying mechanisms.
Because scientists often make data-sharing decisions at their own discretion, weighing potential benefits (e.g., increased citations to their research) against potential costs (e.g., loss of exclusive publication opportunities), comparing the research productivity of data-sharing scientists to that of those who did not share may introduce bias due to self-selection. Analyzing scientists' data sharing under exogenously imposed regulations, such as mandated data-sharing requirements, rather than under their own decisions, can help mitigate this endogeneity. To this end, we utilize policy initiatives by the U.S. NIH to promote research data sharing. In 2008, the NIH began requiring investigators to share their data through the Database of Genotypes and Phenotypes (dbGaP) if the data were obtained through Genome-Wide Association Studies (GWAS) supported by the NIH. This rule was expanded in 2015 by the genomic data-sharing (GDS) policy, which required large-scale genotype, phenotype, and sequence data on human or nonhuman genes generated through NIH-funded studies to be publicly shared via the same archive. As a result, more investigators had to publicly share their data via dbGaP, significantly increasing the number of shared data entries in this archive.

Because these policy changes were mandated by the NIH, one of the largest funding agencies in the United States, individual scientists subject to the rules have limited discretion over their data-sharing decisions; analyzing the impact of data sharing in this setting helps mitigate the typical endogeneity issues described above. Using data from NIH-sponsored research projects that shared data in dbGaP from 2008 to 2020, and their matched projects, we analyze changes in the research productivity of data providers after the data are publicly disclosed through dbGaP. Our fixed-effects (FE) project-year panel difference-in-differences (DID) regression analysis and synthetic control (SC) approach found no evidence of a detrimental impact of publicly sharing data on data providers' research productivity, consistent with the prevalence of the research diversion effect.

Previous studies suggest that the prevalence of the diversion effect over the competition effect may vary depending on various factors, including the data providers' career stage (seniority) and the degree to which other scientists are interested in reusing the shared data for research. Our additional analysis found no evidence of heterogeneity in the impact of data sharing on data providers' research productivity by these factors. To directly examine whether the "diversion" effect was behind the null impact, we analyze the textual similarity between publications by data providers and recipients by applying a natural language processing (NLP) method. Our analysis shows that data recipients are inclined to address research problems different from those of data providers, which corroborates the prominence of the diversion effect over the competition effect.
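A minimal sketch of the project-year panel DID estimation described above, assuming a hypothetical panel file and column names (project_id, year, treated, post, n_pubs); the dummy-variable two-way fixed-effects specification and clustered standard errors are illustrative choices, not the authors' exact estimation code.

```python
# Hypothetical sketch of a two-way fixed-effects DID regression on a
# project-year panel with columns: project_id, year, treated, post, n_pubs.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("dbgap_project_year_panel.csv")  # hypothetical input file

# "treated:post" is the DID interaction: the change in annual publications of
# data-providing projects after public release, relative to matched controls.
# C(project_id) and C(year) absorb project and year fixed effects.
model = smf.ols(
    "n_pubs ~ treated:post + C(project_id) + C(year)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["project_id"]})

# A coefficient near zero would be consistent with the reported null effect.
print(model.params.filter(like="treated"))
```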

09:00
Improving research productivity: hindering factors, remedies, and the promise of open science. A systematic review

ABSTRACT. Investment in science, technology, and innovation has grown substantially for decades. For example, OECD countries increased gross domestic expenditure on R&D (GERD) as a proportion of GDP from 2% in 1991 to 2.7% in 2021. This increase is in line with the understanding that R&D is a core driver of economic growth (Bush, 1945; Jones & Summers, 2020; Salter & Martin, 2001), and can contribute to the pursuit of other Sustainable Development Goals (European Commission, 2016; UNCTAD, 2018).

Recent research, however, has reminded us that an increase in R&D expenditures does not translate into a proportional increase in the value of innovations. Research productivity, measured as a relation between inputs and outputs (or outcomes) of research, declines over time, as documented across several sectors (Bloom et al., 2020). In health care, for example, studies show that increased R&D investment has not yielded a proportional increase in new drugs (Garnier, 2008; Scannell et al., 2012). This is attributed, to some extent, to the secular expansion of the knowledge frontier, which increases the difficulty of discovering new ideas (Bloom et al., 2020; Jones, 2009). Although the knowledge frontier has always been expanding, research productivity seems to be declining more rapidly in recent years than in the past. There has, however, been little systematic effort to understand the potential causes behind changes in research productivity.

This paper seeks to address this gap by systematically reviewing the factors that may hinder research productivity. We do so by means of a systematic review of the literatures analyzing research inputs and outputs across different sectors and fields of science. In addition to hindering factors, we also systematically review the remedies that can contribute to improving research productivity.

Among those remedies, the paper further reviews the role of open science (European Commission, 2009) as a research practice that may increase research productivity, ceteris paribus. Open science has emerged as a new research paradigm that can help address sustainability challenges poorly served by established scientific practices. Crucially for this paper, open science practices can directly influence research productivity by reducing the cost of research inputs (e.g. shared data and equipment) (Fell, 2019) and by accelerating the achievement of research outputs through collective intelligence (Nielsen, 2013). Despite this potential, the literature has not systematically linked open science to research productivity. This paper seeks to address this second gap as well.

We conduct a systematic literature review of over 200 selected documents on research productivity. We do this in three phases. In the first phase we group papers by the definition (implicit or explicit) that they use to refer to research productivity. We identify three different approaches and foci, which characterize different research communities. Publications in the scientometric framework define research productivity as a relation between research inputs and pieces of knowledge codified in bibliographic outputs: scientific publications or patents. Publications in the innovation framework define research productivity as a relation between research inputs (e.g. funding, human capital) and innovation outputs (e.g. technologies, patents, ideas, solutions to problems) or, mostly, economic outcomes (e.g. labor productivity, total factor productivity). And publications in the societal impact framework define research productivity as the relation between research inputs, how they are organized or prioritized, and their potential effects on society.

Next, we focus on the most cited literature within the innovation framework, which has dominated the discussion on the decline of research productivity and its consequences. Using this literature, we study two key issues central to current research policy and practice debates: (i) the factors that may be hindering research productivity; and (ii) the remedies that may increase the conversion of research investments into valuable outputs (e.g. innovations).

We find four main results. First, on the question of whether research productivity has been declining across sectors, our literature review shows that there is no consensus; 33% of the documents refer to a decline, and these studies are mainly focused on R&D-intensive sectors. Second, on the question of the main factors that hinder research productivity, most documents discuss factors related to R&D routines, while factors related to R&D incentives, the fast-expanding endless frontier, knowledge recombination, and market pressures are less studied. Third, concerning what remedies may increase research productivity, studies largely focus on remedies that improve R&D routines; remedies related to governance, increasing R&D resources, setting strategies for R&D priorities, management of organisations, and access to human capital are less studied. Fourth, despite the focus on improving R&D routines to address limitations that may hinder productivity, we observe that most categories of hindering factors can be addressed by a combination of several remedies, and each remedy can contribute to addressing several hindering factors.

Finally, we expand our review to include all papers within the innovation framework that analyze the role of open science practices in influencing research productivity. The literature identifies three main open science practices that can influence research productivity: open data, open source, and open collaborations. The main contribution of open science is to improve R&D routines, tackling the most important category of hindering factors: open science practices, including data sharing and transparent methods, enhance research efficiency, quality, and reliability by promoting resource reuse, collaboration, and the use of digital tools to accelerate research and facilitate discovery. The second most important contribution of open science practices is to increase R&D resources by combining different sources of knowledge, data, and experience; these practices break down knowledge silos and accelerate research progress by fostering knowledge sharing, collaboration, and collective intelligence. The final category of remedies concerns the transformation of research incentives through changed governance that takes on board open science mandates; mandates of open access and collaboration foster more effective knowledge sharing and shift research towards exploration and curiosity, encouraging researchers to take on ambitious and risky projects beyond the limitations of traditional incentives.

10:30-12:00 Session 14A: Agrifood System Innovation and Policy
Location: Room 233
10:30
Knowledge Networks Shaped the Innovation Pathway(s) of U.S. Seed Patents (1930–2022)

ABSTRACT. Motivation: Environmental sustainability challenges – including the climate crisis and biodiversity loss – necessitate transformative innovation. But divergent visions of sustainability drive the goals that innovators pursue. For some, sustainability hinges on advancements in technologies that increase efficiency. For others, sustainability is about societal transformation, redistribution of power, or environmental preservation. These definitions are shaped by the professional communities in which innovation occurs. Professions are defined by specialized expertise and common modes of practice. Shared norms incentivize a particular professional orientation. In this paper we analyze how professions constitute knowledge networks that define innovation goals, practices, and partnerships. Knowledge networks play a central role in innovation, as inventors draw on existing inventions, scientific research, and collaborative relationships. Thus, the professional scientific and research communities in which innovators are embedded drive innovation pathways. Bibliometric and patentometric studies often examine these networks to understand technological impact (e.g., breakthrough innovation), complexity (e.g., multidisciplinary knowledge integration), or economic value creation. Yet, less is known about how these networks drive what invention qualities are pursued (e.g., sustainability-related traits such as increased efficiency versus accessibility) and how different invention qualities do or do not propagate within these networks. Patents represent multiple claims to novelty, and so understanding which attributes effectively diffuse is critical for understanding pathway trajectories.

Question: How are different sustainability pathways represented in inventions, and how do sustainability traits of inventions diffuse through professional knowledge networks?

Case: We consider this question in the agricultural sector, focusing on plant breeding innovation in the United States (US). Plant breeders innovate by developing seeds, a foundational input for agricultural production. But the plant traits they prioritize – for sustainability reasons or otherwise – and how they go about innovation in plant breeding can vary considerably. For example, there are divisions between conventional agronomists who see sustainability as purely a science of input-output optimization and agro-ecologists who want innovation to increase biodiversity and empower growers. Thus, the invention qualities associated with new breeds include a wide range of attributes such as aesthetics, yield, hardiness, ease of growing, input requirements, etc. Plant breeding innovation sits at a critical juncture. The mounting pressure of climate change sharpens these divisions, as the definition and pursuit of innovation by professional communities involved in plant breeding might not fit new sustainability challenges. By analyzing these dynamics, this research seeks to reveal how professional knowledge networks diffuse (or constrain) diverse sustainability efforts.

Data: We rely on plant patent (PP) data from 1930 to 2022. We analyze only agricultural crops, the targets of agricultural innovation for sustainability, narrowing the number of patents used in the analysis from 33,154 to 5,409. The patent data relevant to this analysis include the following: abstracts (summaries of invention novelty), assignees (patent owners), patent citations, and academic paper citations. For these data we rely on information provided by the USPTO PatentSearch API, as well as the Reliance on Science database that links patent paper citations to the open-source bibliometric database OpenAlex. In a separate paper, we use large language models to classify abstracts based on their different claims to novelty for the plant. For this paper, we focus on two plant traits relevant to sustainability and climate adaptation: abiotic stress resistance (the ability to withstand harsh or extreme climates) and biotic stress resistance (the ability to withstand living stressors like pests or disease).

Professional knowledge networks are constructed at two levels. First, we create patent-level networks where nodes represent patents, linked by patent citations. Second, we create individual-level networks, where nodes represent inventors or authors of academic works, linked through co-invention, co-authorship, and/or citation. In each of these networks, we assign node attributes based on the different sustainability traits that their works (inventions or articles) represent.
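As a minimal sketch of the patent-level network construction just described, the following assumes a hypothetical citation edge list and per-patent trait labels; the file names, columns, and the closing assortativity summary are illustrative, not the authors' pipeline.

```python
# Hypothetical sketch: build a patent citation network, attach
# sustainability-trait attributes to nodes, and summarize trait mixing.
import networkx as nx
import pandas as pd

citations = pd.read_csv("pp_citations.csv")  # columns: citing_patent, cited_patent (hypothetical)
traits = pd.read_csv("pp_traits.csv")        # columns: patent_id, trait in {abiotic, biotic, other}

G = nx.DiGraph()
G.add_edges_from(zip(citations["citing_patent"], citations["cited_patent"]))

nx.set_node_attributes(G, "other", name="trait")  # default for unlabeled patents
nx.set_node_attributes(G, traits.set_index("patent_id")["trait"].to_dict(), name="trait")

# Do patents tend to cite patents claiming the same sustainability trait?
print(nx.attribute_assortativity_coefficient(G, "trait"))
```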

Analysis: Drawing on theories of knowledge diffusion and network influence, we analyze these networks to map the distribution and pathways of sustainability traits. First, we use statistical network analysis to describe the distribution of sustainability traits across the networks. The network representations of shared innovation traits and shared innovation partnerships allow us to analyze hyperdyadic patterns. A dyad refers to a connection between two entities (e.g., two patents or two researchers); statistical network analysis captures hyperdyadic structures that involve multiple connections, allowing us to model structural differences in community composition and connectivity. Second, using network growth modeling (a latent order logistic regression model), we quantify the diffusion of sustainability traits, identifying influential nodes and pathways that facilitate (or hinder) diffusion. This analysis lets us identify paths (and potential roadblocks) for future diffusion, helping us understand the trajectory of different sustainability pathways in plant breeding innovation.

10:45
Exploring innovation portfolios of agricultural businesses: A Modern Portfolio Theory (MPT) approach

ABSTRACT. One of the most dominant paradigms in innovation studies has framed innovation at the firm level, primarily as a series of product and process developments (Kavadias & Chao, 2007). Most studies under this view have focused on innovation as a series of isolated decisions or individual projects, without accounting for the complexity of managing innovation as an integrated portfolio of innovation activities (Hidalgo & Albors, 2008). The general assumption underlying much of this research is that innovation outcomes can be understood by examining innovation projects individually, rather than in relation to a firm's broader strategic framework (i.e. the overall strategy of the firm).

A more recent, emerging paradigm has challenged this more established view by taking a broader, holistic perspective on innovation at the firm level (Eckert & Hüsig, 2022; Klingebiel & Rammer, 2014; Meifort, 2016). Studies under this emerging view argue that firms often manage not just individual innovation projects but entire portfolios of innovation activities, each with its own risk and return profile. Gupta, Tesluk, and Taylor (2007) offered a paradigm shift by arguing that innovation outcomes are shaped by both firm-level determinants and multi-level factors, suggesting that the success of any innovation depends on how resources are allocated across different innovation projects. Similarly, Slater and Zwirlein (1992) noted that firms need to adopt quantitative, financially driven approaches to manage their innovation portfolios effectively, much as financial portfolios are managed to maximise return while minimising risk. Such approaches make it easier for firms to allocate resources in a manner that offers the greatest potential for innovation success.

The idea of managing innovation as a portfolio draws heavily on Modern Portfolio Theory (MPT), as pioneered by Markowitz (1952) and initially applied in the field of finance and investments. According to MPT, the key to investment success lies in the diversification of assets, balancing high-risk, high-reward investments with safer, low-risk assets. When applied to innovation studies, MPT offers a framework for understanding how firms can balance the risk-return trade-off in their innovation efforts. Girotra, Terwiesch, and Ulrich (2007) argue that innovation portfolios should be diversified across different types of innovation, ranging from radical innovations to incremental improvements, in order to maximise long-term value while mitigating the inherent risks of innovation failure.
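To illustrate the mean-variance logic that MPT brings to innovation portfolios, here is a minimal sketch with invented numbers; the activity names, expected "innovation returns", and covariance matrix are purely hypothetical, not estimates from the study.

```python
# Hypothetical Markowitz-style sketch: find minimum-variance weights across
# innovation activities for a target expected "innovation return".
import numpy as np
from scipy.optimize import minimize

activities = ["in_house_rd", "outsourced_rd", "training", "hardware"]
mu = np.array([0.12, 0.10, 0.14, 0.02])   # illustrative expected returns
cov = np.diag([0.04, 0.09, 0.03, 0.05])   # illustrative covariance matrix
target = 0.10
n = len(mu)

res = minimize(
    lambda w: w @ cov @ w,                # objective: portfolio variance
    x0=np.full(n, 1 / n),
    bounds=[(0, 1)] * n,                  # no "short positions" in activities
    constraints=[
        {"type": "eq", "fun": lambda w: w.sum() - 1},      # fully allocated budget
        {"type": "eq", "fun": lambda w: w @ mu - target},  # hit the target return
    ],
)
print(dict(zip(activities, res.x.round(3))))
```

In the study's setting, the returns and risks would instead be estimated from survey-based innovation outcomes rather than invented as above.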

In the context of agricultural businesses, this means that focusing exclusively on the success of an individual innovation project in a firm (such as the development of a new drought-resistant crop) may provide useful insights, but it may not capture the entire complexity of innovation. The same firm may also have several other concurrent innovation projects alongside the development of new crop varieties (e.g. investment in precision agriculture technologies for crop monitoring). Analysing both innovation projects individually may indicate whether each succeeds or fails in isolation, but such an approach does not consider how the two projects may complement each other as part of the firm's broader innovation strategy. This is because, on the one hand, agribusinesses are expected to invest in new innovations, such as precision agriculture or advanced ICTs, which carry significant risks but have the potential to transform the entire sector. On the other hand, these firms must also pursue more incremental innovations, such as improved farming practices or better distribution processes, which offer more immediate, albeit smaller, returns on innovation.

What is clear from both paradigms (the emerging and the traditional) is that firms cannot rely on a one-size-fits-all approach to innovation; they must strike a strategic balance to ensure that high-risk, high-reward innovations are supported by safer, incremental ones. Yet, as Loch and Kavadias (2002) suggest, this balancing act is highly dynamic and requires constant reassessment and reallocation of resources in response to evolving market conditions and other emerging, sometimes unpredictable, risks.

Agricultural firms therefore often face this challenge of balancing risk and return when selecting innovation activities, yet there remains limited understanding of how to construct an optimal portfolio of innovation activities that effectively maximises innovation outcomes while managing associated risks. While much of the existing literature has explored the risk-return trade-off in innovation at a conceptual level, few studies have applied this view directly to the agricultural sector.

This study seeks to bridge that gap by applying the principles of Modern Portfolio Theory to agricultural firms, examining how agribusinesses manage their innovation portfolios in the face of challenges such as limited access to finance, rising input costs, and increased global competition. The study addresses two primary research questions:

1) What are the characteristics of a successful innovation portfolio of agricultural businesses? 2) How can a successful innovation portfolio of agricultural businesses be constructed to maximise returns and minimise risks?

Using data from the South African Agricultural Business Innovation Survey (AgriBIS) 2019-2021, this study applies Modern Portfolio Theory (MPT) to explore the characteristics of an optimal innovation portfolio for agricultural businesses.

The results show that portfolios prioritising Training for Innovation Development and In-house R&D provided the highest returns across multiple innovation outcomes, including product innovation, intellectual property development, and cost reduction, with comparatively lower risks. Outsourced R&D, while yielding higher returns in business process innovation and market expansion (reaching new markets), presented higher risk and was less effective in product-related outcomes. Conversely, activities such as Acquisition of Computer Hardware consistently underperformed, showing negative returns in some cases, indicating that technology investments alone do not substantially drive innovation without complementary strategies.

This study contributes to the emerging discourse on innovation portfolio management by moving beyond the traditional focus on individual innovation projects and exploring how agricultural firms manage diversified innovation portfolios. The findings have significant policy implications. First, they can shape how policymakers support innovation in the agricultural sector, enabling agribusinesses to maximise innovation investments while minimising associated risks. Second, understanding the key role of diversified innovation portfolios can inform the development of policies that reflect distinct innovation portfolios and the heterogeneity of agricultural firms.

11:00
Expanding the Potential of Cellular Agriculture: Beyond Meat Substitutes to Sustainable Food Innovation Ecosystems

ABSTRACT. Cellular agriculture technology promises transformative solutions to some of the most pressing global challenges, from climate change to food security. As food production increasingly adopts technologies such as cultivating animal products directly from cells, it blurs the lines between agriculture and biotechnology. This creates opportunities for sustainability but also introduces complex policy challenges. Debate around cellular agriculture has focused on meat substitutes, which represents a narrow view of the technology's potential; systemic and broad-based impacts can be argued to come from other areas of the technology. This study explores the broader potential of cellular agriculture using the Technological Innovation System (TIS) framework (Markard et al. 2015).

Cellular agriculture is an interdisciplinary field that uses biotechnology to produce agricultural products directly from cell cultures, bypassing traditional farming methods. This approach involves cultivating microbial, animal, or plant cells to create food ingredients, materials, and other products sustainably and ethically. Examples include egg white produced in microbial hosts without chickens and nutritional single-cell protein such as Solein® that can complement protein diversification targets.

Leveraging the TIS framework allows us to analyze the evolution of cellular agriculture. We build on the approach of Suurs et al. (2010) and focus on actors, institutions, and technologies. Actors are the organizations or entities contributing to the emerging technologies; they can be divided into enactors, which are directly involved in development and commercialization, and selectors, which influence the adoption and diffusion of the technology. Institutions define the "rules of the game" that shape how actors operate within the TIS, whether these rules are formal or informal. Finally, technological factors cover the artefacts, infrastructures, and processes integral to the development and adoption of cellular agriculture. The TIS framework allows us to address two research questions: 1) What factors are driving or inhibiting innovation in cellular agriculture? 2) What are the underlying systemic structures of the emerging cellular agriculture field?

We utilize publication, patent, and industry data for the cellular agriculture sector analysis. Publications are used to understand research intensity, knowledge creators, and the dissemination of knowledge; patents serve as a proxy for technological advancement and for identifying the convergence of technologies; and industry data, namely start-up data, highlight resource mobilization. Aligned with Suurs et al. (2010), the data are structured in a database and analyzed for functions across the three factors and, ultimately, for what Suurs et al. (2010) refer to as motors of innovation: the interactions across the system factors.

The bibliometric search extends that of Nyika et al. (2021), who focused on meat substitutes. Our analysis expands to alternative proteins and lipids, high-value plant products, and ingredient production; this broader approach more holistically captures the different potential cellular agriculture value chains. Publication data were retrieved from Web of Science and patents from PatBase using a query developed for the broader focus*. Industry activity was analyzed using multiple venture databases and online materials to collect the names of companies in the emerging field; these companies were then looked up in the Orbis database to obtain detailed information on them.

The analysis identified 61,124 publications and 109,238 patent families from the period in which cellular agriculture development can be expected, as defined in Stephens & Ellis (2020). For companies, we built on data from Cellular Agriculture Greece covering 165 cell-cultivated meat and 73 precision fermentation companies, which partly overlap. This was complemented with additional companies identified through news media and venture database searches, yielding 244 companies in total, of which 119 had data available in the Orbis database.

The findings suggest cellular agriculture is on the brink of diversifying beyond its initial focus, offering solutions to a wider array of global challenges. While public focus is on meat substitutes, the TIS framework highlights a broader range of technological factors with potentially broader implications, such as replacement of non-sustainable protein sources. The data also highlighted high diversity with more niche solutions targeting replacement of commodities impacted by climate change (e.g. coffee and cacao).

Policymakers can support the diversification of cellular agriculture through funding incentives and streamlined regulatory pathways. Overcoming regulatory and societal acceptance barriers will require transparency, public engagement, and international cooperation.

References

Markard, J., Hekkert, M., & Jacobsson, S. (2015). The technological innovation systems framework: Response to six criticisms. Environmental Innovation and Societal Transitions, 16, 76-86.

Nyika, J., Mackolil, J., Workie, E., Adhav, C., & Ramadas, S. (2021). Cellular agriculture research progress and prospects: Insights from bibliometric analysis. Current Research in Biotechnology, 3, 215-224.

Stephens, N., & Ellis, M. (2020). Cellular agriculture in the UK: A review. Wellcome Open Research, 5.

Suurs, R. A., Hekkert, M. P., Kieboom, S., & Smits, R. E. (2010). Understanding the formative stage of technological innovation system development: The case of natural gas as an automotive fuel. Energy Policy, 38(1), 419-431.

* SEARCH QUERY ("cellular agriculture" OR "cultured food*" OR "synthetic biolog*" OR "lab-cultured food*" OR "biotechnological food production" OR "precision fermentation") OR ("cultivated meat" OR "cell-based meat*" OR "tissue-engineered meat*" OR "synthetic meat*" OR "in vitro meat*" OR "clean meat*" OR "artificial meat*" OR "lab-grown meat*" OR "factory-grown meat*" OR "engineered meat*" OR "animal-free meat*" OR "hybrid meat*") OR ("animal-free protein*" OR "recombinant protein*" OR "microbial protein*" OR "single-cell protein*" OR "fungal protein*" OR "algae-based protein*" OR "fermented protein*" OR "synthetic dairy protein*" OR "precision fermentation protein*") OR ("cultivated fat*" OR "fermented fat*" OR "synthetic lipid*" OR "microbial oil*" OR "bioengineered fat*" OR "lipid replacement*" OR "cultured dairy fat*" OR "artificial flavor compound*" OR "functional food ingredient*") OR ("cell-cultured plant product*" OR "lab-grown cocoa" OR "synthetic coffee" OR "precision fermentation flavor*" OR "plant cell culture" OR "plant-derived bioactive*" OR "biofabricated high-value crop*" OR "molecular plant replacement*") OR ("recombinant food ingredient*" OR "cultured food ingredient*" OR "synthetic food component*" OR "cell-cultured additive*" OR "microbial ingredient production" OR "bioprocessed food input*" OR "engineered food component*") OR ("cultivated food product*" OR "alternative dairy product*" OR "cellular seafood" OR "synthetic beverage*" OR "cultured plant-based alternative*")

11:15
STI policy to take on the challenges for realizing the transformative territorial development potential of small-scale rural agro-industries in El Salvador

ABSTRACT. El Salvador, a small and neo-peripheral country in the Global South, faces significant challenges in transitioning toward a sustainable, inclusive, and resilient knowledge-based economy. These challenges are exacerbated by the fragmentation and fragility of its national Science, Technology, and Innovation (STI) system, which lacks territorial reach, inter-sectoral coordination, and connections to international centers of excellence (Arocena & Sutz, 2010; Dutrénit & Teubal, 2011; Szogs et al., 2011). This study explores the transformative potential of small-scale rural agro-industries in fostering territorial development, addressing systemic innovation barriers, and advancing sustainable economic models. It focuses on two central research questions: (1) How can small rural agro-industries overcome structural barriers to innovation? (2) To what extent are science, technology, and innovation (STI) policies addressing these barriers and supporting their development? These questions are discussed in an exploratory way, leveraging long-term, in-depth case studies of innovative small-scale agro-industries in El Salvador and new findings from research revealing the potential, but also the significant limitations, of initiatives derived from national STI and innovation-for-development policies, as well as the important but limited role of key national innovation system actors. This research applies the novel theoretical and analytical framework being developed by a team of Ibero-American researchers in the context of the CYTED-LALICS network project, which focuses on STI policies tailored to address national challenges.

Emblematic case studies

ACOPANELA demonstrates the balance of tradition and innovation, introducing granulated panela to meet dynamic market demands while preserving cultural heritage. By stabilizing prices and supporting family producers, the cooperative strengthens local livelihoods and safeguards the identity of panela production. Similarly, APRAINORES has driven transformative changes through fair-trade and organic certifications, adopting sustainable production practices, and constructing a medium-sized processing plant. These initiatives have not only delivered significant social and economic benefits to members but also addressed regional environmental challenges. Both cases illustrate how small-scale agro-industries have accessed resources and advanced knowledge through collaborations with organizations such as EMBRAPA in Brazil and CIMPA in Colombia. Support from entities like the Inter-American Development Bank's Multilateral Investment Fund (IADB-MIF) and NGOs has been instrumental. However, the leadership and organizational capacity of local actors, including educated youth, have proven essential in leveraging these resources to sustain innovation and competitiveness (Cummings 2007, 2009; Cummings & Cogo 2016; Cummings & Peraza 2024; Peraza & Cummings forthcoming).

Public sector STI capabilities

The National Center for Agricultural and Forestry Technology (CENTA) has played a pivotal role in fostering innovation within small-scale rural agro-industries. Through participatory extension programs, CENTA has involved farmers in decision-making, built trust in agricultural innovations, and promoted the adoption of locally adapted technologies. Farmer Field Schools have advanced key practices such as soil conservation, crop diversification, and sustainable resource management, enhancing farmers' productivity and resilience to climatic and market challenges. Furthermore, CENTA's gender-focused programs have encouraged women's active participation in training and governance, embedding gender-sensitive approaches into rural development strategies and expanding the impact of capacity-building initiatives (Hobbs et al. 1997). Despite these achievements, CENTA faces significant systemic challenges. Inadequate funding, fragmented coordination between research and extension, and the absence of tailored public policies hinder its ability to support transformative innovation. Additionally, there is limited infrastructure for value addition and industrialization, which restricts small-scale agro-industries' access to competitive markets (Hobbs et al. 1997).

University contributions and limitations

Public and private universities have provided valuable support to small-scale rural agro-industries through applied research and technical advisory services. Targeted projects have addressed critical issues such as crop genetics, soil management, and value-added techniques. However, these efforts remain sporadic and overly dependent on external funding, limiting their broader applicability and long-term impact. The potential for universities to act as connectors between local needs and global knowledge remains underutilized (Cummings 2016).

Weak implementation of relevant STI policies

The study also evaluates the broader context of STI policies in El Salvador. The creation of the National Policy for Innovation, Science, and Technology (ICyT) and related initiatives, such as the Vice-Ministry of Science and Technology and the transformation of CONACYT, represented progress. However, these efforts have not been fully implemented or integrated. Structural deficiencies, including a lack of sustained political will and fragmented policy instruments, continue to undermine the sector's potential. A case in point is the conceptualization of the Agro-Industrial Technology Park (PTA), which aimed to foster collaboration among academia, the private sector, and government. Despite its promise to drive rural innovation through knowledge transfer and entrepreneurship, the initiative relied heavily on external funding from entities like the IADB, which was not secured, leaving the project unrealized (Cummings 2015).

Policy implications and recommendations

The findings underscore the need for a comprehensive STI policy framework that integrates strategies for territorial economic development and productive transformation. Such a framework must prioritize collaboration across academia, government, the private sector, and international partners to leverage resources and expertise effectively. Key priorities include strengthening the territorial reach of STI policies, fostering capacity-building at the local level, and decentralizing resources to empower small-scale agro-industries. To address existing systemic barriers, STI policies must focus on improving coordination between research and extension, providing targeted funding for infrastructure, and tailoring initiatives to the unique needs of small-scale rural agro-industries. By aligning local capacities with global technological advancements and fostering inclusive governance models, these industries can transition from fragmented efforts to integrated drivers of sustainable and equitable development.

Initial conclusion

Small-scale rural agro-industries in El Salvador hold immense potential to serve as catalysts for environmentally sustainable and inclusive territorial development. However, realizing this potential requires overcoming structural barriers through integrated STI policies and coordinated action. Public institutions like CENTA, universities, and international partners must work in concert to address critical gaps in funding, infrastructure, and policy implementation. By leveraging their resilience, tradition, and capacity for innovation, small-scale agro-industries can contribute significantly to a sustainable rural economy, enhancing national productivity and social equity.

10:30-12:00 Session 14B: Exploration and Growth in Research
Chair:
Location: Room 225
10:30
Assessing the exploration of basic research: An objective ex-ante measurement and project-level influencing factors

ABSTRACT. Introduction

Exploratory basic research is key to scientific and technological progress, yet effectively supporting such research remains a core challenge in science funding (Bollen et al., 2013; Hicks, 2012; Ioannidis, 2011). Existing evaluations of basic research exploration fall into two approaches: ex-ante, based on proposal data, and ex-post, focusing on research outputs. Ex-post evaluation has been expanded through scientometric studies, which have deepened insights into related concepts such as novelty, creativity, and innovation (Chen & Ding, 2023; Foster et al., 2015; Huang et al., 2022; Jeon et al., 2023; Lee et al., 2015; Luo et al., 2022; Matsumoto et al., 2021; Uzzi et al., 2013; Yang & Wang, 2024). In contrast, ex-ante evaluation remains largely dependent on subjective peer review. There is evidence suggesting that peer review may hinder research exploration due to biases and noise, thereby reducing the effectiveness of funding (Banal-Estanol et al., 2015; Boudreau et al., 2016). In response, we propose an objective ex-ante measure that evaluates the exploration of basic research through text mining of grant proposals, and we further analyze the project-level factors that influence research exploration using the CatBoost algorithm and a SHAP-based interpretation framework. Specifically, this study addresses the following research questions: (1) How can the exploration of basic research be measured from an objective, ex-ante perspective? (2) What project-level factors influence the measured exploration of basic research, and what impacts do they have?

Research framework

(1) Measuring the exploration of basic research through text mining of grant proposals

According to the theory of knowledge recombination, new knowledge emerges from atypical combinations of existing knowledge elements, and quantifying such atypicality is the key to measuring exploration. Unlike publication data, basic research grants typically lack explicit citation networks, necessitating an alternative approach focused on the research content itself, as reflected in titles and abstracts. This study therefore uses text mining to assess the exploration of basic research grants in two main steps (a code sketch follows this framework):

a) Extracting semantic vectors and constructing a knowledge combination space. We begin by using the SciBERT model together with the KeyBERT method to extract the top five bigram keyword-level semantic vectors representing the research content of each grant. Next, we generate ten combined keyword semantic vectors by averaging the semantic vectors of each possible pair of keywords. Finally, we aggregate the combined keyword semantic vectors across grants to construct a knowledge combination space. This approach is applied to all grants awarded by the NSFC and the NSF from 1995 to 2019, with data collected from the Dimensions database after necessary preprocessing.

b) Calculating the exploration of the sample grants. For each combined keyword vector associated with NSFC-funded grants approved between 2005 and 2019, we traverse the historical knowledge combination space, restricted to grants funded within the ten years preceding the target grant, to identify the ten most similar combined keyword semantic vectors. For each sample grant, we then calculate the typicality of its keyword combinations, taking the 10th percentile as the grant-level typicality. Finally, we transform the grant-level typicality into an exploration measure.

(2) Analyzing the project-level influencing factors of basic research exploration

a) Identifying the feature variables of basic research grants. Taking NSFC grants as the research sample, we identify five key project-level feature variables: grant type, supporting department, grant approval year, type of awarded institution, and status of awarded institution. Each variable is further classified into specific categories to facilitate analysis.

b) Analyzing the potential impacts on exploration. This study employs the CatBoost algorithm because traditional linear regression models are not well suited to capturing relationships between categorical variables and the dependent variable and often fail to discern their varying effects. The NSFC grant sample is divided into a training set and a validation set at an 8:2 ratio. The CatBoost regressor is trained on the training set with the learning rate set to 0.1 and other parameters left at their defaults. Since the CatBoost regressor is a "black-box" model, the SHAP (SHapley Additive exPlanations) framework is applied to interpret the regressor on the validation set (a sketch of this step appears after the main results).
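A minimal sketch of step (1), assuming KeyBERT over a SciBERT encoder wrapped by sentence-transformers; the mean-pooling choice, the use of mean cosine similarity to the ten nearest historical combinations as "typicality", and the final 1 − typicality transform are illustrative assumptions, not the authors' exact procedure.

```python
# Hypothetical sketch of the exploration measure: bigram keyphrases per grant,
# pairwise combined vectors, and typicality against a historical combination space.
from itertools import combinations

import numpy as np
from keybert import KeyBERT
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("allenai/scibert_scivocab_uncased")  # mean-pooled SciBERT (assumption)
kw_model = KeyBERT(model=encoder)

def combined_vectors(text: str) -> np.ndarray:
    """Top five bigram keyphrases -> C(5, 2) = 10 pairwise-averaged vectors."""
    phrases = [kw for kw, _ in kw_model.extract_keywords(
        text, keyphrase_ngram_range=(2, 2), top_n=5)]
    vecs = encoder.encode(phrases, normalize_embeddings=True)
    combos = np.array([(vecs[i] + vecs[j]) / 2
                       for i, j in combinations(range(len(vecs)), 2)])
    return combos / np.linalg.norm(combos, axis=1, keepdims=True)

def exploration(grant_text: str, history: np.ndarray) -> float:
    """`history`: row-normalized combined vectors of grants from the prior ten years."""
    combos = combined_vectors(grant_text)
    sims = combos @ history.T  # cosine similarities to historical combinations
    # Typicality of one combination: mean similarity to its 10 nearest neighbors.
    combo_typicality = np.sort(sims, axis=1)[:, -10:].mean(axis=1)
    grant_typicality = np.percentile(combo_typicality, 10)  # 10th percentile at grant level
    return 1.0 - grant_typicality  # illustrative transform into an exploration score
```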

Main results

(1) The objective, ex-ante measurement of basic research exploration serves as an effective supplement to the peer review mechanism of science funding. The indicator performs well in capturing differences in grant exploration across varying characteristics. The measurement also has reasonable generalization capability and can be applied to assess basic research grants funded by other agencies, with potential accuracy improvements from expanding the sample size.

(2) Analyzing the project-level influencing factors of basic research exploration, we find strong effects of grant-specific features. For instance, grant exploration gradually decreases as the approval year becomes more recent; grants in application-oriented disciplines exhibit higher exploration than those in more theoretical disciplines; conventional grant types such as the Young Scientists Fund and General Program show higher exploration than large-scale or talent-focused types, with talent-focused grants showing the lowest exploration; and grants awarded to leading public research institutes exhibit lower exploration than those awarded to top universities. These findings are significant for funding agencies seeking to optimize funding mechanisms.
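To make the influencing-factor analysis concrete, here is a minimal sketch of the CatBoost-plus-SHAP step described in the framework, assuming a hypothetical feature table; column names are illustrative, and only the stated learning rate of 0.1 and the 8:2 split are taken from the abstract.

```python
# Hypothetical sketch: CatBoost regressor on categorical grant features,
# interpreted with SHAP on a held-out validation set.
import pandas as pd
import shap
from catboost import CatBoostRegressor, Pool
from sklearn.model_selection import train_test_split

df = pd.read_csv("nsfc_grants.csv")  # hypothetical input file
features = ["grant_type", "supporting_department", "approval_year",
            "institution_type", "institution_status"]
X, y = df[features], df["exploration"]

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=42)

cat_cols = ["grant_type", "supporting_department", "institution_type", "institution_status"]
model = CatBoostRegressor(learning_rate=0.1, verbose=False)
model.fit(Pool(X_tr, y_tr, cat_features=cat_cols))

# SHAP values explain each feature's contribution to predicted exploration.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(Pool(X_va, y_va, cat_features=cat_cols))
shap.summary_plot(shap_values, X_va)
```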

10:45
Predicting which large and growing areas of research will decline

ABSTRACT. This study focuses on a relatively unexamined phenomenon – the large and historically high-growth research areas that subsequently experience a significant drop in growth rates. This phenomenon is critical to very practical issues such as career planning and funding. For instance, PhD students that join research communities that subsequently decline may not be able to find employment after they graduate. This has happened in some areas of biology and is currently happening in some areas of computer science.

In previous work we have developed methods to predict which research communities (RCs) will experience exceptional growth. RCs are identified in a global map of science created using direct citation clustering of the full Scopus database. In this study, we use a similar model to predict which large and growing RCs will experience a significant decrease in growth rate.

The model used in this study contains 85,160,540 documents clustered into 93,418 RCs using the Leiden algorithm. Of these, 51,785,950 documents from 1980-2018 are Scopus-indexed. The remainder are non-indexed documents that are cited at least twice by the indexed documents, and for which we can retrieve sufficient identifying metadata from the citing records.
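A minimal sketch of direct citation clustering with the Leiden algorithm, using the igraph and leidenalg packages on a toy edge list; the CPM partition type and resolution value are illustrative assumptions rather than the configuration used for the Scopus-scale model.

```python
# Hypothetical sketch: cluster a direct-citation graph into research
# communities (RCs) with the Leiden algorithm.
import igraph as ig
import leidenalg as la

edges = [("p1", "p2"), ("p2", "p3"), ("p4", "p5")]  # toy citing->cited pairs
g = ig.Graph.TupleList(edges, directed=False)  # citation links, undirected for clustering

partition = la.find_partition(
    g, la.CPMVertexPartition, resolution_parameter=0.01)  # illustrative resolution

for community_id, members in enumerate(partition):
    print(community_id, [g.vs[m]["name"] for m in members])
```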

Data from the next five Scopus file years (2019-2023) were added to the RCs in this model using the method detailed in Boyack & Klavans (2022). This included not only papers published from 2019-2023, but also earlier papers that had not yet been cited enough to be assigned to an RC. Additional non-indexed documents that were cited by these more recent documents were also added to the RCs. In total, 22,820,168 documents were added, of which 17,844,566 were indexed documents, and 4,975,602 were non-indexed documents. The five-year annual growth rate of each RC from 2018-2023 was calculated using this final model.

For prediction, only the data in the original model (through 2018) was used. Thus, no forward information was used to predict the growth rate that would be experienced as of 2023.

Large RCs were identified as those with at least 1000 indexed publications as of 2018 and at least 100 indexed publications in the year 2018. This resulted in a sample of 6,293 RCs. The CAGR of this group, from 2013 to 2018, was 3%. We further subsetted this group to those with >6% CAGR, resulting in a sample of 2,185 RCs. Two thresholds were then used to identify which of these RCs experienced a significant decrease in growth rate: the CAGR from 2018-2023 had to be less than 6% and the decrease in CAGR (2013-2018 to 2018-2023) had to be greater than 6%. 758 RCs met these criteria and were classified as “large, growing RCs with future decline”.
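A minimal sketch of the selection logic just described, assuming a hypothetical table of indexed publication counts per RC per year; file and column names are illustrative, while the 1000/100-publication size filters and the 6% CAGR thresholds follow the abstract.

```python
# Hypothetical sketch: compute five-year CAGRs per research community and
# flag large, growing RCs that subsequently experienced a significant decline.
import pandas as pd

counts = pd.read_csv("rc_year_counts.csv")  # columns: rc, year, n_pubs (hypothetical)
wide = counts.pivot(index="rc", columns="year", values="n_pubs")

def cagr(start, end, years=5):
    # Standard compound annual growth rate over `years` years.
    return (end / start) ** (1 / years) - 1

g_13_18 = cagr(wide[2013], wide[2018])
g_18_23 = cagr(wide[2018], wide[2023])

large = (wide.loc[:, :2018].sum(axis=1) >= 1000) & (wide[2018] >= 100)
growing = g_13_18 > 0.06
declined = (g_18_23 < 0.06) & ((g_13_18 - g_18_23) > 0.06)

targets = wide.index[large & growing & declined]  # "large, growing RCs with future decline"
print(len(targets))
```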

Fifty-seven independent variables from the 2018 version of the model (to avoid future information) were tested to see which would best predict the 758 RCs that experienced a significant decline in growth rate. The majority of these variables were time series related to size, authors, and relationships between RCs. The best overall predictor was reference vitality, which is described in Klavans et al. (2020). This variable is also the key variable that we used 30 years ago to effectively identify mature fields for SmithKline Beecham (Norling, Herring, Rosenkrans Jr., Stellpflug, & Kaufman, 2000) when recommending which fields were ripe for budget cuts.

The likelihood of randomly nominating a declining RC within this set was 34.7% (758/2185). Thus, to be effective, our predictions must be much better than this. They are. Our predictions were 90% accurate for the top 10 nominations, 80% accurate for the top 20 nominations, and 68% accurate for the top 100 nominations. Additional details about the top 10 nominations are provided below.

                        CAGR (%)
   RC  Field  2013-18  2018-23  Change  Decline  Phrase
 6686  Bio       24.2     -5.5   -29.7  Yes      Draft genome sequence
 1897  Comp       6.3     -6.2   -12.6  Yes      Versatile video coding
 1248  Comp       8.3     -5.9   -14.2  Yes      Fiber inter-core crosstalk
 1114  Soc       29.8      7.0   -22.8  No       Big data
 4990  Comp      13.6     -3.2   -16.8  Yes      Lossy networks
13645  Med        8.2      0.6    -7.6  Yes      Xpert MTB/RIF
 1030  Comp      20.0     -0.9   -20.9  Yes      Android malware
18416  Chem      26.8     -1.2   -28.0  Yes      Graphene aerogels
  323  Comp      28.5     -0.7   -29.1  Yes      Control plane
 4927  Eng       11.2      1.5    -9.7  Yes      Li-O2 batteries

We are currently developing thick descriptions for each of these research communities to provide better insights into why they underwent this unusual pattern of extreme growth and decline. For example, the first research community (6686) focuses on the development of bioinformatics tools (such as VFDB, PATRIC, Bactopia, and Bakta) for analyzing bacterial genomes. Significant publication growth is associated with the building of these tools. The rate of publication drops once the tools are created. We are currently investigating whether the literature that cites the use of these tools tends to become dispersed into specialized applications, such as the genome of a probiotic bacteria or the genome of a virulent bacteria in a hospital setting. Detailed descriptions of multiple examples will be given in an oral presentation.

References

Boyack, K. W., & Klavans, R. (2022). An improved practical approach to forecasting exceptional growth in research. Quantitative Science Studies. doi:10.1162/qss_a_00202

Klavans, R., Boyack, K. W., & Murdick, D. A. (2020). A novel approach to predicting exceptional growth in research. PLoS ONE, 15(9), e0239177. doi:10.1371/journal.pone.0239177

Norling, P. M., Herring, J. P., Rosenkrans Jr., W. A., Stellpflug, M., & Kaufman, S. B. (2000). Putting competitive technology intelligence to work. Research Technology Management, 43(5), 23-28.

10:30-12:00 Session 14C: Emerging AI Technology & Governance
Location: Room 235
10:30
Artificial Intelligence's Past as Prologue: a (re-)geopoliticization of technology

ABSTRACT. What can AI’s policy past teach us about its future? How can states increase capacity for regulatory impact on AI technology?

Our current moment is not the first time artificial intelligence (AI) technology has been promised, funded, hyped, and feared. This paper explores a parallel episode in 1984, when the Reagan administration was intervening in the United States' antitrust architecture to facilitate the computing industry's research and development and the US military's DARPA was actively pursuing AI development through its Strategic Computing Initiative. Using an empirical archive of policy documents, media coverage, and US Congressional debates, I explore the 'specter' of Japan's Ministry of International Trade and Industry (MITI) and the fear in the US that the Japanese 'Fifth Generation Project' would leapfrog American technologies. Despite the lack of expert scientific consensus, once the idea was seeded that Japan's AI technology was a threat, the fear and hype became a political tool used by a variety of actors to advance agendas and justify decisions, the consequences of which we see today.

This paper puts this political history in conversation with the technological narrative. While algorithmic advances made in the 1980s are still in use, breakthroughs were achieved when these were combined with centralized computational power and vast human-generated data. AI’s capabilities are only possible through a small number of the largest corporations to have ever existed, reinforcing monopolistic winner-take-all dynamics. Most AI we have today is only possible through the infrastructural capacity of a small number of Big Tech companies, a significant change from the AI envisioned in previous iterations.

This combination of AI technology and monopoly platforms has underscored two related political struggles. Globally and locally, we see new energy for pushing back against technology, including across unlikely coalitions and in various legal and regulatory domains, with calls to 'break up Big Tech', what some have referred to as the "techlash": growing wariness and opposition to algorithmic technologies and the corporations that facilitate them. At the same time, states are (re-)invoking AI technology and its components in geopolitical struggles for dominance, particularly between the US and China. Big technology is again becoming a site of nationalistic politics for global dominance.

On the one hand, we see a re-politicization of technology and a rising desire to change the dynamics between state and platform. Simultaneously, new sites of geopoliticization of technology in state-versus-state conflicts are emerging, with questions of technological control and capacity at the center. To understand this current moment, this paper revisits the critical juncture of 1984, when the central debate was the political economy question of state versus market. That juncture bears striking resonance to today: AI research and its emerging industry were similarly experiencing a rollercoaster cycle of boom and bust; the US military was actively pursuing AI technology through DARPA; antitrust law was in political play; and the seeded fear of MITI's Fifth Generation Project was becoming a political tool, setting us on a path to the political economy we have today. At the same time, shifting loyalties among US politicians and parties saw the rise of so-called "Atari Democrats," while opposition to 'high tech' came largely from conservatives. Just as a politics of fear mobilized political action, a politics of hype was likewise a necessary component of bringing into being the conditions under which later technological innovation and corporate power became possible.

Why do states today lack the capacity to regulate AI and the corporations that facilitate it, whether through effective policy, antitrust or antimonopoly action, legislation, or judicial strategies? This is so despite growing political will and despite the recentralization of technology as a site of geopolitical dominance. To answer this question, this paper returns to the critical juncture of 1984. My three-fold argument is that: a) political decisions subsequently enabled the infrastructural centralization that makes AI feasible, even though political actors could not foresee or imagine the necessary centralization, because the AI technology being hyped and promised at the time differed in a critical way from what we have today; b) unlike earlier infrastructure monopolies, the assemblage of technologies that makes AI feasible (particularly algorithmic code, centralized cloud computing, and vast volumes of data) also underpins much of the state's capacity to carry out its own functions, meaning states today are so deeply dependent on and interwoven with these critical technologies that regulatory disruption challenging AI infrastructure would also impair state function; c) building on the work of path dependency scholars, at a crucial moment in the 1980s the US state diminished its own capacity for regulatory power in service to the ideals of market fundamentalist reforms, even as the same state apparatus was deeply invested, financially as well as emotionally, in achieving the technological feat of AI.

AI technology has always been tied up with goals of state capacity, slotted into existing geopolitical and ideological conflicts. Understanding this dynamic, particularly how it interacts with the infrastructural capacity of the base technologies that make AI possible, is critical to formulating effective policy interventions today, both for harnessing the innovation benefits of AI and for reining in its destabilizing impacts.

10:45
Analyzing the Balance Between Data Privacy Laws and the Need for Innovation in Sectors

ABSTRACT. The rapid advancement of artificial intelligence (AI) and big data analytics has significantly transformed various sectors, driving innovation that enhances decision-making, optimizes processes, and delivers personalized services. However, these advancements raise substantial concerns regarding data privacy and protection, prompting the enactment of robust data privacy laws globally. This research investigates the tension between the necessity for stringent data privacy regulations and the imperative to foster innovation in AI and big data analytics. The study aims to provide insights that can inform policymakers on creating frameworks that protect individual privacy while promoting technological progress.

This research is guided by several key questions: 1. What are the primary challenges posed by current data privacy laws to innovation in AI and big data analytics? 2. How do different jurisdictions approach the balance between data privacy and innovation? 3. What role do ethical considerations play in shaping data privacy laws related to AI and big data? 4. How can regulatory frameworks be designed to support both data privacy and innovation simultaneously? 5. What best practices can be identified from industries that successfully navigate these challenges?

To address these questions, a mixed-methods approach is employed, integrating qualitative and quantitative analyses. A comparative legal analysis examines data privacy laws across jurisdictions, focusing on the European Union's General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and emerging regulations in other regions. This analysis evaluates how these laws impact innovation in AI and big data analytics, considering differences in enforcement, scope, and compliance requirements. The research draws on existing literature and publicly available data from industry reports, expert analyses, and case studies involving data scientists, legal experts, and policymakers. By synthesizing these insights, the study highlights how organizations adapt to data privacy regulations while pursuing innovative initiatives. Additionally, detailed case studies of companies that have successfully navigated the intersection of data privacy and innovation reveal the strategies employed, the challenges faced, and the outcomes of their approaches. Employing statistical models, the research assesses the impact of data privacy regulations on various innovation metrics, such as research and development spending, patent filings, and the speed of technology deployment in relevant sectors.

The findings reveal significant challenges posed by data privacy laws. While essential for safeguarding individual rights, these laws often impose considerable compliance burdens on organizations, hindering the rapid development and deployment of AI technologies. For instance, the requirement for explicit consent under GDPR complicates data collection processes, leading to delays in innovation. The lack of clarity and consistency across different legal frameworks creates uncertainty, discouraging investment in innovative solutions that rely on data. Analysis of various jurisdictions indicates that approaches to balancing data privacy and innovation vary significantly. In the EU, the GDPR emphasizes stringent data protection measures, which can inhibit certain aspects of innovation but also drive companies to adopt more robust data governance practices. In contrast, regions with less stringent regulations, such as some states in the U.S., may foster faster innovation but risk compromising individual data rights and privacy. This divergence highlights the need for a more harmonized approach that considers both privacy protection and innovation facilitation.

Ethical considerations are becoming increasingly crucial in shaping data privacy laws, particularly in the context of AI. Issues such as algorithmic bias, data ownership, and informed consent are central to discussions around responsible data use. Organizations that prioritize ethical data practices not only comply with regulations but also build consumer trust, enhancing their brand reputation and competitive advantage. The integration of ethical frameworks into data privacy laws can help ensure that innovation occurs in a responsible manner that respects individual rights. The research suggests that regulatory frameworks can be designed to support both data privacy and innovation by adopting a flexible, risk-based approach. Policymakers can encourage innovation by allowing for exemptions or lighter regulatory burdens in specific contexts, such as research and development, while still ensuring adequate protections for personal data. For instance, regulatory sandboxes could be implemented, allowing companies to test new technologies in a controlled environment that balances innovation with privacy concerns.

Case studies reveal several best practices from organizations that have successfully balanced data privacy with innovation. Companies that integrate compliance into their innovation processes from the outset tend to navigate regulatory challenges more effectively. By embedding privacy considerations into the design of AI systems, organizations can mitigate risks and enhance their innovation capabilities. Moreover, organizations that communicate openly about their data practices and privacy measures build trust with consumers and stakeholders. This transparency fosters a positive reputation and facilitates smoother innovation processes by reducing public resistance and regulatory scrutiny. Engaging in ongoing dialogue with regulatory bodies can help companies shape policies that are conducive to innovation while ensuring data protection. Collaborative efforts can lead to more informed regulatory decisions that reflect the realities of technological advancements. Furthermore, organizations that prioritize ethical considerations in their AI development processes not only comply with privacy laws but also position themselves as leaders in responsible innovation. This approach can lead to greater consumer loyalty and trust, ultimately benefiting the organization’s bottom line. By incorporating ethical AI practices, organizations can innovate while ensuring respect for individual privacy and data rights.

In conclusion, the interplay between data privacy laws and the need for innovation in sectors like AI and big data analytics is complex and multifaceted. This research underscores the critical challenges and opportunities that arise at this intersection, offering insights for policymakers and industry leaders alike. By adopting flexible, ethically informed regulatory frameworks, it is possible to safeguard individual privacy while fostering an environment conducive to technological advancement. The findings contribute to the broader discourse on science and innovation policy, emphasizing the importance of collaborative efforts among stakeholders to create a balanced approach that promotes innovation without compromising data privacy. As technology continues to evolve, ongoing dialogue and adaptation of policies will be essential to navigate the challenges and opportunities that lie ahead.

11:00
Exploring an innovation policy for public AI – Rationales, examples and learnings

ABSTRACT. Background and research questions Since the launch of ChatGPT in 2022, artificial intelligence (AI) has been widely adopted across private, business, and public sectors. As a general-purpose technology, it drives societal and economic transformation, enabling resource efficiency, workforce support in aging societies, and sustainable systems like circular economies and smart energy grids. However, AI raises concerns about social inequality, access to infrastructure, and fair competition. Geopolitically, dependence on foreign AI infrastructures highlights the need for technological sovereignty. To address these challenges, stronger state involvement in regulating and providing AI infrastructures is under discussion. However, acknowledging the social and political importance of AI does not answer the strategic and operational questions of how the public sector should take part in the provision and application of AI infrastructures. The design and implementation of useful public AI infrastructures is a research case for which experience already exists but which has not yet been discussed systematically. By raising the importance of public AI, we aim to initiate the needed research and debate on an innovation policy mix for public AI; this work is funded by the Mozilla Foundation. We aim to answer the following research questions:
• What are the definitions, delineations, and rationales for public AI?
• What kinds of public AI already exist?
• What is the impact of public AI on research and innovation?
• How can innovation policy-making support public AI?

Methodology Our methodology is based on a review of the literature on digital innovation, AI components and the dimensions of public AI. To analyze existing public AI activities and related policies, we apply an explorative qualitative case study research design using two data sources: First, we use our definition of public AI to search publicly available information and documents on relevant examples of public AI in Europe and the US. Second, we conduct semi-structured interviews with experts on developing, or making policy for, public AI. Both data sources are analyzed via a qualitative coding process aimed at exploring the different types of public AI in practice and how different policies and involvements of the public are realized. By analyzing successful applications and challenges, we generate an overview of possible implementations of public AI for strengthening research and innovation and of the options for policy makers to support such moves.

Conceptual literature review Based on a review of the literature, three dimensions of public AI are important: public AI needs to be trustworthy, prioritize social goals over profit maximization, and encompass collective decision-making processes. Therefore, we define public AI as forms of AI which are trustworthy, meaning they fulfill the conditions of privacy, fairness, trust, safety, and transparency; aim to create additional value for society as a whole – that is, they are not primarily driven by profit motives; and whose inputs (compute, algorithms, data, human resources) and access are at least partially governed, regulated, or supplied as public or common goods. The three dimensions are used as a filter to identify examples of public AI infrastructures or applications and to clarify how innovation policy instruments can provide directionality to the creation of public AI. Additionally, using different combinations of these dimensions allows for the categorization of conceivable variations of public AI. While each dimension can characterize different versions of AI applications and usage on its own, public AI exists only where aspects of all three meet.

The societal and market dynamics of AI necessitate state involvement in public AI, addressing challenges like market concentration and transparency. Public AI counters monopolistic tendencies in AI markets dominated by tech giants, promoting fair competition and societal welfare. It supports non-commercial developers and underfunded actors, fostering experimentation and innovation. Public AI can drive solutions to societal challenges, including ecological transitions, by optimizing resource use while balancing AI's high energy consumption. Geopolitically, it strengthens technological sovereignty and societal values, fostering collaboration between regions like the US and EU. International cooperation on public AI and agreements akin to nuclear non-proliferation may be necessary to regulate and mitigate risks from problematic AI applications while ensuring its use for the common good.

Preliminary results Public AI is already being implemented through various initiatives. Estonia's Bürokratt, a public-private AI system, enables 24/7 state communication for tasks like applying for child benefits or alerting households during military exercises. It exemplifies public AI by prioritizing open development and societal goals. Similarly, the EU's GAIA-X project addresses data security concerns associated with private cloud services like AWS, offering a trustworthy public alternative for secure AI development. Meanwhile, Mozilla's Common Voice project provides open datasets for AI voice training, funded by donations. These initiatives highlight diverse approaches to public AI, blending innovation, public utility, and trustworthy data use.

The role of the state is critical in shaping public AI by creating conditions for its growth and integration into the AI ecosystem. Governments must guide AI applications strategically, anticipating potential risks and prioritizing public AI development. Clear policy signals, funding for R&D, and infrastructure projects like GAIA-X highlight this direction. The state can also promote public AI through preferential procurement practices and regulations that ensure a level playing field and encourage competition. Addressing labor conditions is essential, as fair treatment of the AI workforce reflects societal values and shapes job profiles in an AI-driven economy. Additionally, the state must facilitate knowledge exchange to support the integration of AI into jobs and personal applications. These actions require a policy mix combining financial, regulatory, and informational tools. Effective orchestration ensures public AI supports innovation while addressing social challenges and promoting equitable AI development. After finalizing the data collection and analysis, we aim to provide an overview of options for public AI applications as well as a discussion of different policy instruments to realize them.

11:15
The Impact of Offensive and Defensive Policies on U.S.-China AI Technology Decoupling

ABSTRACT. 1. Introduction The U.S.-China decoupling in artificial intelligence (AI) is reshaping the global innovation landscape, driven by geopolitical tensions. The U.S. has implemented offensive policies, including export controls, sanctions, and restrictions on talent and knowledge flows, to limit China's access to critical technologies. In response, China has adopted defensive policies focused on fostering indigenous innovation and reducing reliance on foreign technologies. AI, as a driver of economic growth and military dominance, lies at the center of this competition.

These policies disrupt cross-border technology transfer, forcing Chinese firms to explore alternative paths, often leading to inefficiencies. Simultaneously, the U.S. aims to curtail China’s technological advances, raising concerns about global innovation sustainability. This study investigates how U.S. offensive policies and China’s defensive responses have reshaped the AI innovation landscape, using patent data to measure decoupling and assess its impact on firms’ innovation patterns.

2. De-globalization and Springboard Theory Decoupling reflects the broader phenomenon of de-globalization, where countries retreat from open markets due to rising geopolitical tensions. Globalization once allowed nations like China to leverage global value chains to access advanced technologies and become leaders in sectors such as AI. However, de-globalization signals a turn toward protectionist policies (Witt, 2021; Luo, 2021).

Springboard theory (Luo & Tung, 2007) explains how emerging market multinational enterprises (EMNEs) overcome disadvantages by acquiring strategic assets abroad, such as technology and talent. However, U.S. restrictions have curtailed these efforts, limiting Chinese firms’ access to international innovation networks. This “forced decoupling” has hindered EMNEs’ ability to compete in advanced technology sectors (Han et al., 2024).

3. U.S. Offensive Policies and China’s Defensive Policies U.S. policies aim to limit China’s technological rise through export controls, such as restricting semiconductors, AI algorithms, and hardware, alongside sanctions on firms like Huawei and AI surveillance companies (McGeachy, 2019). Conversely, China’s New Generation Artificial Intelligence Development Plan (2017) seeks to establish China as a global AI leader by 2030, emphasizing domestic R&D, talent cultivation, and innovation platforms.

These diverging strategies risk creating bifurcated AI ecosystems, with distinct standards, protocols, and platforms that hinder global interoperability and increase costs for multinational firms. However, the extent to which such policies impact innovation outcomes remains underexplored.

4. Double-Loop Springboard The traditional springboard model, relying on cross-border mergers and acquisitions, faces increasing geopolitical barriers. To address these challenges, Luo and Witt (2022) proposed the double-loop springboard model, which emphasizes iterative internationalization through domestic collaborations with foreign partners. These partnerships allow firms to integrate international resources with local strengths, continuously enhancing capabilities through learning and adaptation.

This model enables firms to innovate without relying entirely on cross-border acquisitions, potentially mitigating the innovation challenges posed by decoupling. Whether this approach can resolve the innovation dilemma faced by Chinese firms amid U.S. restrictions is a key focus of this study.

5. Research Questions To what extent has AI decoupling occurred between the U.S. and China? How do offensive and defensive policies affect technological interdependencies between U.S. and Chinese firms? What innovation patterns can overcome the challenges posed by decoupling?

6. Research Methods This study uses patent citation data to measure decoupling in AI, defining a decoupling index based on the propensity of Chinese patents to cite U.S. patents and vice versa. A difference-in-differences (DiD) model evaluates the effects of U.S. sanctions (e.g., post-2018) and China's AI Development Plan (2017) on innovation performance. Indicators such as cross-border patent transfers and joint patenting are analyzed, considering firm heterogeneity, including internationalization levels.
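The abstract does not spell out the index's formula. One way to read "propensity to cite", sketched here under that assumption with invented column names, is the yearly share of a country's outbound patent citations that point to the partner country:

```python
# Illustrative sketch of a citation-propensity decoupling index
# (invented column names; the paper's exact formula is not given).
import pandas as pd

# One row per patent citation: who cites whom, and when.
citations = pd.DataFrame({
    "year":           [2016, 2016, 2017, 2017, 2017],
    "citing_country": ["CN", "CN", "CN", "US", "US"],
    "cited_country":  ["US", "CN", "US", "CN", "US"],
})

def citation_propensity(df, citing, cited):
    """Yearly share of `citing`'s citations that reference `cited`."""
    out = df[df["citing_country"] == citing]
    return (out.assign(hit=out["cited_country"].eq(cited))
               .groupby("year")["hit"].mean())

cn_to_us = citation_propensity(citations, "CN", "US")
us_to_cn = citation_propensity(citations, "US", "CN")
# A sustained fall in these propensities relative to a baseline year
# would register as decoupling in such an index.
print(cn_to_us, us_to_cn, sep="\n")
```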

7. Preliminary Results Preliminary results indicate that China's decoupling from the U.S. in the AI sector can be divided into several phases: the first phase occurred between 2004 and 2007, the second phase between 2012 and 2015, and the third phase between 2018 and 2022. In contrast, the U.S.'s decoupling from China primarily took place after 2021.

8. Contribution This study provides a comprehensive analysis of how U.S. and Chinese policies are reshaping the AI innovation landscape, highlighting their implications for bilateral collaboration, global competition, and supply chain restructuring. It also narrows its focus to the green-AI sector, examining whether decoupling trends in this area differ due to shared climate goals.

Unlike AI, where decoupling is prominent, the green sector may offer more room for collaboration. Initiatives like the 2021 China-U.S. Joint Glasgow Declaration on Climate Action reflect commitments to promote green innovation. This study will further explore whether green-AI innovation provides a path for continued cooperation, potentially mitigating the fragmented global innovation ecosystem driven by geopolitical tensions.

10:30-12:00 Session 14D: Responsible Innovation
Location: Room 222
10:30
Linking Data in a Shared Service Environment to Support Policy Research

ABSTRACT. The National Center for Science and Engineering Statistics (NCSES) within the U.S. National Science Foundation (NSF) is the principal source of analytical and statistical reports, data, and related publications that describe and provide insight into the nation's science and engineering resources. NCSES collects and provides data on the science and engineering workforce; research and development (R&D); competitiveness in science, engineering, technology, and R&D; and the condition and progress of STEM education in the U.S. These data are collected through sample surveys or censuses.

Like other federal surveys, NCSES' surveys are experiencing several challenges, including declining response rates, increased costs, and decreasing resources to support survey operations. Agencies across the federal government are dealing with the challenge of how to maximize available data resources, including through linking data. Data linkage activities are important for compiling, evaluating, and analyzing data to obtain information that is not available in either source alone. NCSES has successfully explored linking data using traditional linking methods. Linkage activities have helped to improve the content and coverage of data collections, permit longitudinal analyses of populations and establishments, explore and improve the quality of estimates, and ultimately provide quality data products, informing policy and research discussions that would otherwise be infeasible, without adding cost or burden to the public. The Foundations for Evidence-Based Policymaking Act of 2018 (Public Law 115-435) encourages data sharing for evidence building, and the CHIPS and Science Act of 2022 (section 10375) aims to inform a government-wide effort to strengthen data linkage and data access infrastructure in support of increased evidence building for the American public through a National Secure Data Service (NSDS) Demonstration project. However, privacy concerns can make accessing and linking new data sources difficult.

One goal of the NSDS Demonstration project is to inform the development of a shared services model that would streamline and innovate data sharing and linking to enable decision-making at all levels of government and in all sectors. Privacy-preserving record linkage (PPRL) is an alternative to traditional linkage approaches that may overcome the privacy concerns raised by linking disparate sources. Some federal agencies in the U.S. have begun exploring the use of PPRL to integrate data from disparate sources to support evidence building and high-quality research while protecting privacy. However, using PPRL tools to link sensitive data creates legal, ethical, and data quality concerns that need to be considered.

This talk will provide an overview of two NSDS Demonstration projects that use PPRL to integrate STEM data covered by different legal provisions. The first project aims to link data between two federal statistical agencies to inform policy discussions related to persons with disabilities. The objective of the second project is to link data from a federal statistical agency and its parent agency to inform policy discussions related to participation gaps in the STEM enterprise. The talk will focus on the development of the required data sharing agreements, legal provisions that needed to be considered, security and privacy measures that had to be addressed, software and IT infrastructure required, data quality assessments that will be integrated into the projects, distinct PPRL tools utilized for each project, and potential policy uses of the linked data. A commercial tool from HealthVerity is being used for one project and an open-source tool, ANONLINK, is being used for the other. The talk will conclude with a presentation of preliminary linkage results, lessons learned, and a summary of how these projects will inform potential future data sharing efforts between and within agencies through the establishment of a future NSDS. These projects open the door for future linkages that will support data for evidence building and high-quality R&D and STEM research by laying the foundation for future data sharing projects without relying on traditional clear text matching scenarios that may pose privacy risks.
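For readers unfamiliar with PPRL, the core idea behind tools of this kind can be illustrated with Bloom-filter encoding: each party hashes identifiers locally, and only the resulting bit patterns are shared and compared. This is a simplified sketch of the general technique, not the actual implementation or API of HealthVerity's tool or ANONLINK:

```python
# Toy illustration of privacy-preserving record linkage via
# Bloom-filter encoding of name bigrams (greatly simplified).
import hashlib

BITS = 64  # real deployments use much larger filters

def bigrams(s):
    s = s.lower().strip()
    return {s[i:i + 2] for i in range(len(s) - 1)}

def bloom_encode(value, secret, n_hashes=3):
    """Hash each bigram into a bit set, keyed by a shared secret."""
    bits = set()
    for gram in bigrams(value):
        for k in range(n_hashes):
            h = hashlib.sha256(f"{secret}|{gram}|{k}".encode()).digest()
            bits.add(int.from_bytes(h[:4], "big") % BITS)
    return bits

def dice(a, b):
    """Dice coefficient between two encodings (1.0 = identical)."""
    return 2 * len(a & b) / (len(a) + len(b)) if (a or b) else 0.0

# Two agencies encode names locally; only bit patterns are compared,
# so similar spellings still score high without revealing clear text.
rec_a = bloom_encode("Jane Smith", secret="shared-key")
rec_b = bloom_encode("Jane Smyth", secret="shared-key")
print(f"similarity: {dice(rec_a, rec_b):.2f}")
```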

10:45
Understanding the Alignment Between New Industry Policy and Climate Policy: Implications for International Trade, Domestic Employment, and Green Transitions

ABSTRACT. The resurgence of new industry policies in earlier industrialised countries, such as the US (Inflation Reduction Act) and Europe (EU's Green Deal Industrial Plan), has sparked extensive debate among academics, policymakers and the public. These policies are expected to revitalise domestic manufacturing, advance technological innovation, boost domestic employment, and stimulate green growth. However, there is limited systematic evidence on their broader implications for domestic green transitions, global innovation value chains, international trade, and the social and environmental impacts on other regions. Additionally, the extent to which these new industry policies, and the accompanying shift toward local protectionism, align with domestic climate policies remains underexplored. Understanding the opportunities and challenges of this alignment is crucial for achieving the deep decarbonisation needed to address the urgent global climate crisis.

The paper therefore asks the following research questions: To what extent does the new industry policy in the US align with climate policies, and what are the implications for international trade, domestic employment, and global green transitions? Are there potentials to create synergies, and if so, what political, social and economic conditions are needed to make them succeed?

This research will focus on the US electric vehicle (EV) sector, which sits at the intersection of domestic industry and climate policies and has wider implications for domestic green transitions and international trade relations. On one hand, the sector can play a significant role in accelerating domestic decarbonisation: the EV sector has complex ecosystems that can generate positive transformative changes across different sectors (electricity, transport, digital). On the other hand, it involves complex supply systems, such as the extraction and processing of critical minerals and a battery industry that is distributed globally, and thus has wider implications for international trade and global green transitions.

The study will adopt a case study method, combining qualitative and quantitative data. It will draw on semi-structured interviews with policymakers, industry experts, and stakeholders involved in the industry, climate, and innovation policies that shape US EV industry development, domestic diffusion, and international trade relations. The aim is to understand the rationales and strategies of different stakeholders and thereby investigate the social, political and economic conditions for building synergies among them. Moreover, the study will gather secondary data from government reports, industry publications, and academic studies to capture industry dynamics, domestic EV deployment and international trade.

Theoretically, the research will contribute to the recent debate on the role of industry policy in accelerating green transitions. Specifically, it will advance the conceptual discussion on how policy mixes, including science, technology and innovation (STI) policy, industry and trade policy, and environmental and climate policies, can address grand challenges. This debate has been reflected in recent discussions on mission-oriented innovation policy (Mowery et al., 2010, Mazzucato, 2018, Wanzenböck et al., 2020) and transformative innovation policy (Weber and Rohracher, 2012, Schot and Steinmueller, 2018, Diercks et al., 2019). However, there are still limited insights into how these policies are relevant to the current resurgence of industry policy and changing geopolitics. There is a particular need to advance the empirical and theoretical understanding of how the resurgence of local protectionism will impact global green transitions, and of its implications for innovation policies for grand challenges.

Moreover, this study will contribute to exploring the political and economic realities of implementing successful policies. Policies are not developed in a vacuum; their success relies on the capability to coordinate a wide range of actors, share information, and build trust. This is particularly true when implementing policy instruments with divergent policy goals. Coordination across different government bodies, including the traditional STI policy agency, industry and trade policy agency, and climate and environmental protection agency, is crucial in this context. Additionally, the study will explore to what extent the current trend aligns with the historical legacy of what has been conceptualised as the hidden developmental state in the US (Block and Keller, 2015).

References
Block, F. L. and Keller, M. R. (2015). State of innovation: the US government's role in technology development. Routledge.
Diercks, G., Larsen, H. and Steward, F. (2019). "Transformative innovation policy: Addressing variety in an emerging policy paradigm." Research Policy 48(4): 880-894.
Mazzucato, M. (2018). "Mission-oriented innovation policies: challenges and opportunities." Industrial and Corporate Change 27(5): 803-815.
Mowery, D. C., Nelson, R. R. and Martin, B. R. (2010). "Technology policy and global warming: Why new policy models are needed (or why putting new wine in old bottles won't work)." Research Policy 39(8): 1011-1023.
Schot, J. and Steinmueller, W. E. (2018). "Three frames for innovation policy: R&D, systems of innovation and transformative change." Research Policy 47(9): 1554-1567.
Wanzenböck, I., Wesseling, J. H., Frenken, K., Hekkert, M. P. and Weber, K. M. (2020). "A framework for mission-oriented innovation policy: Alternative pathways through the problem–solution space." Science and Public Policy 47(4): 474-489.
Weber, K. M. and Rohracher, H. (2012). "Legitimizing research, technology and innovation policies for transformative change: Combining insights from innovation systems and multi-level perspective in a comprehensive 'failures' framework." Research Policy 41(6): 1037-1047.

11:00
Values and the Knowledge-governance Interface: The co-production of Digital Sequence Information governance at the Convention on Biological Diversity

ABSTRACT. International boundary organizations facilitate exchanges between knowledge and political decision-making, often to tackle wicked problems such as climate change and biodiversity loss. By gathering stakeholder knowledge, needs, and values, and providing a forum for decisions to be negotiated, such organizations have a key role in the co-production of knowledge and social order. While the politics of knowledge production and governance have been studied in the context of understanding and responding to climate change and biodiversity loss, the governance of emerging technologies is an area that remains understudied in the international context. This is important because international organizations are increasingly tasked with addressing concerns around emerging technologies such as artificial intelligence and biotechnologies.

Lessons gleaned from scholarship on the politics of environmental issues, or a glance at the news during a Conference of Parties (COP), demonstrate that such interactions do not unfold seamlessly. Rather, international negotiations are underscored by historical power imbalances and political dynamics that lead to divergence and disagreement. These factors are deeply connected to knowledge production and use challenges, often reflecting North-South divides and associated capacity disparities. In this way, international technology governance diverges from the perspectives on innovation directionality which presuppose agreed pathways toward a common good and ignore the inherent complexity and contestation of sociotechnical objects of governance. Instead, emerging technology governance exhibits characteristics of wicked problems where problem definition, pathways to solving such problems and determining whether they have been adequately addressed are each subject to differing stakeholder needs and values. Therefore, addressing planetary crises and technology governance as ‘matters of concern’ requires academic inquiry into power-laden clashes over knowledge and values that might hinder agreement on innovation policy pathways. This entails theory-driven empirical enquiry, focusing on the underlying causes of such conflicts, and exploring potential pathways toward resolution.

This paper proposes that the Knowledge-Governance Interface (KGI) is a crucial site for investigating how divergence in technology governance is navigated in international Boundary Organizations. It therefore examines a particular instance of divergence as it relates to the Convention on Biological Diversity (CBD) negotiations on benefit sharing related to biological Digital Sequence Information (DSI). Positioning the CBD as a boundary organization provides a structured lens to examine the exchange between knowledge and political decision-making during the CBD's attempt to negotiate a global benefit-sharing regime based on digital, rather than physical, genetic resources.

The notion of fair and equitable distribution of benefits deriving from digitally enabled bioscience raises value-laden questions: how should the benefits of innovation be defined, who should pay, and to whom should benefits be distributed? The negotiations broach diverse normative dimensions including indigenous data sovereignty, international capacity building, and reflection on the purpose of biological Research and Innovation (R&I). This paper is grounded in a co-production perspective that knowledge is inseparable from its underlying values, and exploring a boundary organization's KGI can bring this dynamic into focus. The focus on divergence is especially pertinent because such processes are explicitly subject to negotiation, and the issue of benefit sharing has raised questions about stifling innovation, threatening international agreements like the Kunming-Montreal Global Biodiversity Framework and the upcoming World Health Organization (WHO) Pandemic Treaty. This paper adopts a structured framework, grounded in a co-production perspective, to investigate the CBD process on DSI, addressing the following question:

• What is the role of the KGI in producing, mediating and overcoming knowledge/value conflicts in international Boundary Organizations as they govern emerging technologies?

The study offers empirically derived insights into the KGI's formal and informal features, identifying knowledge/value conflicts and highlighting strategies employed by actors to navigate them, also pointing to how these are modified by power relations. This paper also offers recommendations for the process, drawing on insights and critical perspectives in science policy. It does so by operationalizing theoretical concepts derived from literature on KGIs and Boundary Organizations to produce an analytical framework consisting of six analytical categories: Membership, Governance, and Boundary Object, as well as categories that focus on the interplay between knowledge and decision-making and on the impact on the broader Knowledge-Control Regime. This framework was used to make sense of the results of participant observations at five negotiation sites, chiefly the 15th and 16th Conferences of Parties to the CBD (December 2022 and October 2024) and the DSI Open-Ended Working Group (November 2023). This is combined with insights from 35 semi-structured interviews with a purposive, yet geographically representative, sample of stakeholders, rightsholders, and government representatives involved in the CBD KGI. Structured notes and interview transcripts were deductively coded to probe the dynamics of the KGI, producing reflections on the role of the KGI and suggesting that it serves as a crucial focal point for attention as we strive to gain a deeper understanding of co-production in technology governance processes.

Empirical work provides insights into how informality functions to facilitate progress while simultaneously risking the exclusion of certain actors and perspectives. It also underscores that DSI is intentionally left undefined as a 'strategic' boundary object to enable discussions about outcomes without becoming entangled in technical intricacies. Value clashes are also highlighted, contrasting natural scientists' advocacy for open access to DSI, based on their political and epistemological legitimacy, with states' and Indigenous Peoples and Local Communities' assertions of sovereignty over genetic resources. Additionally, the paper reflects on how the norms and ideals of science intersect with questions of fairness and equity, examining the diverse roles and strategies scientists employ to advance their positions.

The analysis extends to the broader bioscience regime, highlighting influences beyond national legislation, including anticipatory institutional responses such as shifts in R&I policy and practice, particularly in database metadata policies. The study identifies dilemmas rather than straightforward problems in international technology governance, concluding that anticipatory and reflexive processes can assist actors to better understand and navigate challenges arising from knowledge and value divergence. This study's insights offer valuable guidance for technology governance and science diplomacy, benefiting practitioners and scholars of emerging technology governance in international organizations.

11:15
Science and Innovation Policy as a Catalyst in Building Capabilities in Local Production of Medicines for Children in Zimbabwe

ABSTRACT. Covid-19 exposed an insufficient supply of medical products in many countries, hence the emphasis and consequent political will to promote local pharmaceutical manufacturing. Steered into action by the pandemic and other socio-economic imperatives, and like most governments in Africa, the government of Zimbabwe has committed to boosting local pharmaceutical production. However, the local production of medicines does not always guarantee that a nation will adequately meet its public health needs, particularly if some segments of the population are still left behind. In low- and middle-income countries, most locally produced and imported medicines on the market are more suitable for adults than children, which presents a massive challenge in achieving good treatment outcomes. Children present many public health challenges, such as rare diseases and the need for more innovative medical products. If this inequity is not addressed, it could be a major drawback in meeting the United Nations Sustainable Development Goals (SDGs) targets.

Our overarching research question was, ‘How can science and innovation policy be leveraged to meet the health needs of children by improving capabilities in the local innovation and production of medicines?' Research (1,2) has shown that local production can increase access to medicines and other health products while contributing to a country's economic development. However, to combat childhood mortality and morbidity, initiatives for access to medicines should also incorporate specific mechanisms to ensure the supply of safe and effective products for children.

We examined the importance and relevance of science policy as a catalyst for access to quality medicines for children through local production, using the UK and Zimbabwe as case study countries. We conducted semi-structured qualitative interviews with key pharmaceutical research, manufacturing, and distribution actors in Zimbabwe and the UK.

Findings suggest that manufacturers face many challenges in making medicines for children, including higher regulatory demands and complex manufacturing methods, which make the product development process riskier. The barriers to equitable access to medicines among populations and sub-populations require solutions embedded within policy ecosystems, among other key actors. The contention, and indeed the evidence (3), is that science policy influences the pharmaceutical innovation landscape in a country; therefore, a well-calibrated science ecosystem can help local manufacturers be more innovative and meet the needs of sub-populations such as children. The Covid-19 pandemic exposed an ongoing 'silent pandemic' of distribution of poor-quality medicines for children (4). However, pandemic preparedness requires us to set up more resilient systems that not only can innovate their way out of health crises but also embed inclusive and sustainable practices in their daily routines, considering all groups of society.

References
1. Mackintosh, Maureen, Mugwagwa, Julius, Banda, Geoffrey, Tibandebage, Paula, Tunguhole, Jires, Wangwe, Samuel and Karimi Njeru, Mercy. "Health-industry linkages for local health: reframing policies for African health system strengthening." Health Policy and Planning, vol. 33, no. 4, May 2018, pp. 602–610, doi: 10.1093/heapol/czy022
2. Banda, Geoffrey, Kale, Dinar, Mackintosh, Maureen and Mugwagwa, Julius. "Local manufacturing for health in Africa in the time of COVID-19." Growth Research Programme Webinar Report, March 2021, nationalarchives.gov.uk
3. Wang, Xuefeng, Zhang, Shuo, Liu, Yuqin, Du, Jian and Huang, Heng. "How pharmaceutical innovation evolves: The path from science to technological development to marketable drugs." Technological Forecasting and Social Change, vol. 167, June 2021, ISSN 0040-1625, doi: 10.1016/j.techfore.2021.120698
4. Thiagarajan, Kamala. "WHO investigates cough syrups after deaths of 66 children in Gambia." BMJ, vol. 379, 14 October 2022, p. o2472, doi: 10.1136/bmj.o2472

10:30-12:00 Session 14E: From Metrics to Policy: The Roles of Open Source Software
Location: Room 236
10:30
GitHub Innovation Graph: Metrics and Data on Open Source Software Development

ABSTRACT. This presentation is submitted as part of the thematic panel titled: "From Metrics to Policy: The Role of Open-Source Software in Science and Innovation" organized by Gizem Korkmaz with Christina Freyman as session chair.

In September of 2023, GitHub announced the launch of the GitHub Innovation Graph, an open data and insights platform on the global and local impact of software developers. In the past, measures of innovation have focused solely on resources like patents and research papers, while policymakers and researchers struggled to find reliable data on global trends in software development. GitHub created the Innovation Graph as a solution.

The Innovation Graph includes longitudinal metrics on software development for economies around the world. This open data initiative was launched with a dedicated webpage and repository, and provides quarterly data on eight metrics dating back to 2020: Git pushes, developers, organizations, repositories, languages, licenses, topics, and economy collaborators. The platform offers several data visualizations, and the repository outlines GitHub's methodology. Data for each metric is available to download under a CC0-1.0 license.
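As an illustration of how such a download might be analyzed (the file name and column names below are assumptions for the sketch, not the repository's documented schema):

```python
# Minimal sketch of working with an Innovation Graph metric export.
# "economy", "year", "quarter", and "git_pushes" are hypothetical
# column names; check the repository for the actual schema.
import pandas as pd

pushes = pd.read_csv("git_pushes.csv")  # hypothetical local export

# Quarterly Git pushes for one economy since 2020.
de = pushes.query("economy == 'DE'").sort_values(["year", "quarter"])
print(de[["year", "quarter", "git_pushes"]].tail(8))

# Year-over-year growth in annual pushes, by economy.
annual = pushes.groupby(["economy", "year"])["git_pushes"].sum()
print(annual.groupby(level="economy").pct_change().dropna())
```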

GitHub’s Innovation Graph will be useful for researchers, policymakers, and developers alike. In research commissioned by GitHub to help design the platform, consultancy Tattle found that researchers in the international development, public policy, and economics fields had interest in using GitHub data but faced many barriers while trying to obtain and use the data. The Innovation Graph aims to lower those barriers. Researchers in other fields will also benefit from convenient, aggregated data that may have previously required third-party data providers if it was available at all.

Promoting digital transformation and well-paid jobs is a key goal for many policymakers. GitHub was encouraged to see research indicating that open source contributions on GitHub were associated with more startups, increased innovation, and tens of billions of euros in GDP. It is anticipated that more readily accessible data will contribute to more (and more compelling) research, and ultimately to an increase in policies that foster developer opportunity, as well as greater opportunity for people to become developers in the first place.

Developers will be able to see and explore a broader context for their contributions, for example, the ways in which developers collaborate across the global economy, or how a particular language or topic they may be interested in is trending in their local economy or around the world.

GitHub released the Innovation Graph as a data resource for community reuse and is excited to see how policymakers, researchers, and companies explore data trends, use the data to inform research, and make beautiful visualizations, and how developers show how their contributions relate to broader trends.

10:45
Open-Source Metrics: Attributing Credit, Measuring Impact, and Shaping Policy

ABSTRACT. This presentation is submitted as part of the thematic panel titled: "From Metrics to Policy: The Role of Open-Source Software in Science and Innovation" organized by Gizem Korkmaz with Christina Freyman as session chair.

Open-source software (OSS) has become an essential utility in knowledge production and innovation activity in both academic and business sectors for users around the globe. OSS is developed by a variety of entities and is considered a “unique scholarly activity due to the specificity and complexity of scientific computational tasks and the necessity of cooperation and transparency for research methodology.” OSS is produced and distributed with an open license allowing users and unaffiliated developers to inspect, modify, spin off, or submit improvements, and OSS has both a wide range of uses and producers and a relatively low barrier to entry for developers to share work. Moreover, OSS has special benefits for scientific discovery and innovation over commercially available software, due to the flexibility of the solutions that it provides and its transparency that facilitates reproducibility. Influential contributors to OSS can contribute heavily to the priorities and practices of scientific research when their work is widely used or built upon by other researchers. In this context, studying the global distribution, collaboration, and impact of the contributors is important to understanding the landscape of innovation in scientific research.

While the developers of OSS are located in a multitude of countries, many questions remain about who these contributors are, who are the largest contributors (countries, sectors, organizations), and how to measure their impact. As software development faces issues with accurate crediting and citation in academic spaces, this work proposes new measures for capturing the contributions and impact of developers and countries to OSS.

In bibliometrics, credit for a publication is typically attributed by giving each author an equal proportion, known as fractional counting (i.e., the "1/n" rule (de Mesnard, 2017)), in which each co-author of a work (and the respective country) gets 1/n of the credit, where n is the total number of authors. The availability of data on OSS development (i.e., lines of code, contributors' affiliations and locations) yields an opportunity to attribute credit to authors more accurately.
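Stated as a formula (notation ours, following the 1/n rule described above):

```latex
% Fractional counting (the 1/n rule): country c's credit for work w
% is the number of its authors divided by the total author count n_w.
\[
  \mathrm{credit}(c, w)
  \;=\; \sum_{\substack{a \in A_w \\ \mathrm{country}(a) = c}} \frac{1}{n_w}
  \;=\; \frac{\lvert \{ a \in A_w : \mathrm{country}(a) = c \} \rvert}{n_w}
\]
% Example: a work with three authors, two in the US and one in Germany,
% credits the US with 2/3 and Germany with 1/3.
```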

In this paper, we use data collected on Python and R packages from their respective package managers (PyPI and CRAN), and link these to their repositories on GitHub, the largest source-code hosting platform. We leverage fractional-counting methods from bibliometrics and methods used in the National Science Board's Science & Engineering Indicators report that publishes trends and international comparisons for publications output (Science-Metrix 2021; White 2019) to measure the contribution of countries to OSS. We measure the exact contribution of each developer (author) by using weighted counting based on the lines of code added by each developer to find top contributors to OSS. We find that for both Python and R, developers from a small group of top countries account for a considerable share of code additions. Contributions from the top 10 countries, which include the United States, Germany, United Kingdom, France, and China, comprise 76.1% of total contributions to R repositories and 66.6% to Python repositories.
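A minimal sketch of the weighted-counting idea, with toy numbers and invented field names (the paper's actual pipeline links PyPI/CRAN metadata to GitHub commit histories):

```python
# Lines-of-code-weighted credit: each repository credits countries in
# proportion to the lines added by their developers (toy data).
import pandas as pd

commits = pd.DataFrame({
    "repo":    ["pkgA", "pkgA", "pkgA", "pkgB"],
    "country": ["US",   "US",   "DE",   "FR"],
    "loc":     [600,    200,    200,    500],
})

# Within each repo, a country's weight is its LOC share of that repo.
by_repo = commits.groupby(["repo", "country"])["loc"].sum()
weights = by_repo / by_repo.groupby(level="repo").transform("sum")
# pkgA: US 0.8, DE 0.2; pkgB: FR 1.0

# Summing fractional credit over repos gives country-level totals.
print(weights.groupby(level="country").sum())
```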

We identify dependencies between OSS packages as a metric for studying the impact and influence of particular packages and their countries of origin. Software that lists another package as a dependency relies on or reuses that package's code in order to run, so widely depended-upon packages shape the priorities and practices of the research that builds on them. We find that packages attributed to the United States are most frequently reused by packages from Germany, Spain, Italy, Australia, and the United Kingdom, based on the total dependency fractions. In parallel, the United States mostly uses packages from Germany, France, and Denmark.

Finally, we use the reverse dependency fractions between each unique pair of countries to develop the fractional dependency network. In this network, a directed link from country i to country j indicates that i depends on packages of j and the weight of the edge corresponds to the sum of the dependency fractions of j from i.
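A small sketch of such a network, with toy weights and assuming the country-level dependency fractions described above have already been aggregated:

```python
# Fractional dependency network: a directed edge i -> j, weighted by
# the summed dependency fractions of j's packages used by i's packages
# (toy numbers, hypothetical structure).
import networkx as nx

# (depending country, depended-on country, summed dependency fraction)
flows = [("DE", "US", 4.2), ("ES", "US", 2.9),
         ("US", "DE", 1.7), ("US", "FR", 1.1)]

G = nx.DiGraph()
G.add_weighted_edges_from(flows)

# Weighted in-degree: how heavily others rely on each country's code.
print(sorted(G.in_degree(weight="weight"),
             key=lambda kv: kv[1], reverse=True))
```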

In our analysis, we identify two distinct categories within the ecosystem of software packages. The first category consists of large flagship infrastructure OSS that prevail in terms of contributors and impact, serving as fundamental components extensively utilized across various applications. These packages are typically maintained by dedicated organizations. The second category encompasses smaller packages that, while more numerous, are predominantly managed by academic researchers. These smaller packages often originate from research laboratories, where they are developed and disseminated to promote transparency in research methodologies. Currently, while developers contributing to OSS are located internationally, due to the scale of the large flagship projects, most of which heavily feature US-based developers, the United States exhibits the strongest representation as a contributor to OSS.

References: Louis de Mesnard. 2017. Attributing credit to coauthors in academic publishing: The 1/n rule, parallelization, and team bonuses. European Journal of Operational Research, 260(2):778–788.

Science-Metrix. 2021. Bibliometrics Indicators for the Science and Engineering Indicators 2022. Technical Doc. https://www.science-metrix.com/wp-content/uploads/2021/10/Technical_Documentation_Bibliometrics_SEI_2022_2021-09-14.pdf

Karen White. 2019. Publications output: U.S. trends and international comparisons. Science & Engineering Indicators 2020. NSB-2020-6. National Science Foundation.

11:00
From Metrics to Policy: The Role of Open-Source Software in Science and Innovation

ABSTRACT. Organizer: Gizem Korkmaz, Westat Chair: Christina Freyman, National Center for Science and Engineering Statistics, National Science Foundation

Abstract: Open-Source Software (OSS) plays a pivotal role in shaping modern science, technology, and innovation ecosystems. The rapid growth and relevance of OSS during the last two decades requires an in-depth analysis of its current role, position and its potential for the global economy. As a freely accessible and modifiable resource, OSS fosters global collaboration, drives innovation, and supports productivity in diverse sectors. However, understanding its development, contributions, and impacts remains a challenge for researchers, policymakers, and practitioners alike.

The panel will provide a comprehensive overview of OSS, highlighting its importance as a driver of innovation, competition, and economic growth in the age of AI. The panel will draw from diverse experiences and interests in science policy, national accounts, and the OSS sector and will discuss scholarly efforts to define and measure OSS - a fundamental intangible asset. Topics of discussion include the measurement of supply-side costs to create this software (leveraging methodology consistent with the U.S. national accounts), including the value of OSS development in the U.S. Federal Government, and the demand-side (usage) value created by OSS. Additionally, the session will explore how bibliometric methods (which are used to develop Science and Engineering Indicators) could be leveraged to answer questions about who the OSS contributors are, who the largest contributors (countries, sectors, organizations) are, and how they collaborate with each other. Finally, the session will showcase the GitHub Innovation Graph as a reliable data source on software development for economies around the world, providing useful metrics for researchers, policymakers, and developers. This panel brings together four cutting-edge studies that address critical aspects of OSS measurement and its implications for science and innovation policy.

The first paper titled Open-source Software Indicators of Science and Engineering Activity presents updated methodologies and data on OSS contributions across sectors and countries, which will be featured in the National Science Board’s Science and Engineering Indicators. It highlights global collaboration patterns and the increasing role of OSS as a tool for innovation and policy development.

The second paper explores bibliometric approaches for attributing credit and measuring impact in OSS. By analyzing contributions using fractional counting methods and OSS dependency networks, it reveals the global distribution of developers, highlights the contributions of leading countries, and underscores the influence of flagship OSS projects in scientific innovation.

The third paper introduces GitHub’s Innovation Graph, an open data platform offering longitudinal metrics on global OSS activity. This initiative provides valuable insights for researchers and policymakers, enabling them to track trends in OSS development, collaboration, and its broader implications for economies worldwide. The fourth paper delves into the economic measurement of OSS using national accounting methods. It provides a novel framework for quantifying OSS investment and stock as intangible assets, offering estimates of the U.S. economy's OSS-related productivity growth and demonstrating its significance as a driver of innovation and economic development.

Together, these papers showcase the transformative potential of OSS metrics to inform science and innovation policy, enhance economic measurement, and address global challenges. This panel offers actionable insights and robust methodologies for understanding OSS’s role in fostering equitable, sustainable, and innovative ecosystems worldwide.

Paper 1: Open-source Software Indicators of Science and Engineering Activity Presenter: Carol Robbins, National Center for Science and Engineering Statistics, National Science Foundation

Paper 2: GitHub Innovation Graph: Metrics and Data on Open Source Software Development Presenter: Kevin Xu, GitHub

Paper 3: Open-Source Metrics: Attributing Credit, Measuring Impact, and Shaping Policy Presenter: Clara Boothby, National Center for Science and Engineering Statistics, National Science Foundation

Paper 4: From GitHub to GDP: A framework for measuring open source software innovation Presenter: Gizem Korkmaz, Westat

11:15
Open-source Software Indicators of Science and Engineering Activity

ABSTRACT. This presentation is submitted as part of the panel on open-source software organized by Gizem Korkmaz with Christina Freyman as session chair and submitted separately.

Research and development spending, production of skilled workers, and counts of patents and publications are frequent-used indicators of inputs and intermediate activities within each economy or region’s science and engineering (S&E) activities. However, for purposes of science and innovation policy some of the most sought-after indicators are those that quantify the outputs and impacts of S&E activity. Software in general and open-source software (OSS) are rapidly growing outputs of S&E activity that contribute to innovation and productivity. Indicators of this output are essential to understand where and how these tools are created.

This presentation updates and extends indicators developed for and presented in the National Science Board’s Science and Engineering Indicators report released in 2022. It describes the methodology for developing these indicators and provides a preview of preliminary indicators that are being prepared for the Science and Engineering Indicators report planned for release in January 2026. These indicators include the number of open-source software (OSS) repositories created by year, by U.S. economic sector (academia, government, business, non-profit) and by country, as well as network analysis of OSS collaborations across countries.

S&E activity is increasingly embedded in and documented through software tools that are freely available to use, reuse, and modify. These tools are developed by contributors from all sectors of the economy. OSS allows free access to modifiable digital tools that are used for work and leisure and constitutes a part of intangible investment with the qualities of knowledge-based public goods. Despite its widespread use, the extent and impacts of OSS on the economy and innovation remain largely unknown, which may help illustrate aspects of technology diffusion and flow that would enhance science and technology indicators.

Starting in 2022, the National Science Board’s Science and Engineering Indicators report began including measures of OSS for government, academic institutions, and businesses, and for countries where contributors reside (NSB 2022). These indicators were created using data collected from GitHub, the largest open source-code hosting platform in the world, and from the federal government’s Code.gov, which catalogs OSS projects developed and shared by government agencies.

The OSS indicators in the 2022 Science and Engineering Indicators (SEI) report described original computer software created for government activities throughout the U.S. federal government. Much of this software has the potential for reuse, both inside and outside of government. The policy of the U.S. federal government has been to support the sharing of software developed by and for the federal government through its Federal Source Code Policy, which provides a framework for government code to be released and reused through OSS licensing (OMB 2016). This policy allows software created for narrow federal purposes to be reused elsewhere within the federal government, multiplying its value to the government, and outside of the federal government, further extending its impact.

The data collected and analyzed showed increasing activity of federal departments and agencies in the sharing of computer software repositories with an Open Source Initiative (OSI)–approved license on the GitHub platform. Repositories contain all files and folders associated with a specific OSS project and can be owned by individual entities or shared across multiple entities. As described in the 2022 report, while this measure based on GitHub is not comprehensive (federal agencies may share through separate platforms), it shows that from 2010 to 2019, the 10 most active federal department and agency participants in OSS sharing contributed to over 15,000 distinct OSS repositories. Representative government-contributed OSS projects include the Department of Energy (DOE)'s Raven, code for performing risk analysis of nuclear reactor systems, and DOE's Qball, code for performing molecular dynamics to compute the electronic structure of matter. Not surprisingly, because DOE and the National Aeronautics and Space Administration (NASA) were early adopters of OSS, these two agencies have the largest number of repositories among federal agencies. In 2009, only the Department of Commerce (DOC), DOE, and NASA used open-source platforms to share software with other users; by 2019, 21 agencies did so. The presentation proposed for the Atlanta conference will provide updates to these indicators to 2023.

Additionally, the presentation will update global patterns of collaboration in the development of OSS shared on the GitHub platform. GitHub provides tools to collect user attribute and activity data that can be analyzed to assess OSS collaborations across countries. Collaboration is defined in terms of pairs of individuals who contribute code to a common repository. OSS collaboration networks show international relationships in knowledge creation and transfer. Analysis of global OSS collaboration patterns reveals that OSS developers in the United States with projects on GitHub collaborate most frequently with developers in Germany, followed by the United Kingdom and India. This collaboration pattern differs from that of peer-reviewed publications, in which authors from the United States most often collaborate with Chinese coauthors (NSB 2023).
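As an illustration of this pair-based definition, the sketch below rolls developer pairs on shared repositories up to a weighted country-level network; the contributor records are hypothetical placeholders, not the study's data.

```python
# Build a country-level OSS collaboration network: two developers
# collaborate if they contribute to the same repository, and each
# cross-country pair adds weight to the corresponding country edge.
from itertools import combinations
import networkx as nx

contributions = [                      # (developer, country, repository)
    ("alice", "US", "repo1"), ("bjorn", "DE", "repo1"),
    ("chen",  "CN", "repo2"), ("dave",  "US", "repo2"),
    ("erin",  "US", "repo1"),
]

by_repo = {}
for dev, country, repo in contributions:
    by_repo.setdefault(repo, set()).add((dev, country))

G = nx.Graph()
for devs in by_repo.values():
    for (d1, c1), (d2, c2) in combinations(sorted(devs), 2):
        if c1 == c2:
            continue                   # keep only cross-country pairs
        if G.has_edge(c1, c2):
            G[c1][c2]["weight"] += 1   # one more collaborating pair
        else:
            G.add_edge(c1, c2, weight=1)

# Strongest partners of US-based developers, by number of pairs
us_partners = sorted(G["US"].items(), key=lambda kv: -kv[1]["weight"])
print(us_partners)   # [('DE', {'weight': 2}), ('CN', {'weight': 1})]
```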

References: National Science Board (NSB), National Science Foundation. 2022. Invention, Knowledge Transfer, and Innovation. Science and Engineering Indicators 2022. NSB-2022-4. Alexandria, VA. Available at https://ncses.nsf.gov/pubs/nsb20224/.

National Science Board, National Science Foundation. 2023. Publications Output: U.S. Trends and International Comparisons. Science and Engineering Indicators 2024. NSB-2023-33. Alexandria, VA. Available at https://ncses.nsf.gov/pubs/nsb202333/.

Office of Management and Budget (OMB). 2016. Federal Source Code Policy: Achieving Efficiency, Transparency, and Innovation through Reusable and Open Source Software. M-16-21. Washington, DC. Available at https://obamawhitehouse.archives.gov/sites/default/files/omb/memoranda/2016/m_16_21.pdf.

11:30
From GitHub to GDP: A framework for measuring open source software innovation

ABSTRACT. This presentation is submitted as part of the thematic panel titled: "From Metrics to Policy: The Role of Open-Source Software in Science and Innovation" organized by Gizem Korkmaz with Christina Freyman as session chair.

Open source software (OSS) is developed, maintained, and used both within the business sector and outside of it through the contribution of independent developers and people from universities, government research institutions, and nonprofits. Because OSS can be studied, modified, and distributed freely, typically with only minor restrictions (St. Laurent, 2004), this software can undergo fairly rapid innovation and be repurposed across various industries (Raymond, 1999). The Open Source Initiative (OSI) certifies licenses that comply with the principles of open source; any software with an OSI-approved license is considered open source. Notable examples of OSS include the Linux operating system, Apache server software, and programming languages R and Python.

Many OSS projects create long-term tools that are products of public spending. Often, these tools have been developed outside of the business sector and subsequently used within it. While limited, existing estimates of publicly funded OSS suggest its magnitude is significant. For example, by 2017, Apache was estimated to hold the largest market share of active websites (44.5%). The Apache server, developed with federal and state funds at the National Center for Supercomputing Applications at the University of Illinois, is estimated to be equivalent to between 1.3% and 8.7% of the stock of prepackaged software currently accounted for in U.S. private fixed investment (Greenstein and Nagle, 2014). The scale and use of these modifiable software tools highlight an aspect of technology flow that needs to be captured in market measures.

Better measurement supports better policy, and as an intangible asset created and used across the economy, OSS developed by the relevant sectors should be fully accounted for in gross domestic product (GDP). Economic measurement that is integrated with the overall accounting of goods and services produced in the economy provides the basis for understanding the impact of OSS on sector-level productivity as well as overall productivity. Understanding the role that each sector plays in developing, funding, and promoting OSS can help inform public policy that supports innovation and economic growth. While the potential for software innovation to support economic and productivity growth and to transform various sectors of the economy is indisputable, measurement of innovation in software has been limited, particularly for OSS. Despite its widespread use, there is no standardized methodology for measuring the scope and impact of this fundamental intangible asset. In this paper, we provide an approach to measure it.

This study presents a framework to measure the value of OSS using data collected from GitHub, the largest code-hosting platform in the world, with over 100 million developers. The data include over 7.6 million repositories where software is developed, stored, and managed. We collect information about contributors and development activity, such as code changes and license details. By adopting a cost estimation model from software engineering, we develop a methodology to generate estimates of investment in OSS that are consistent with the U.S. national accounting methods used for measuring software investment. We generate annual estimates of current and inflation-adjusted investment as well as the net stock of OSS for the 2009–2019 period. Our estimates show that U.S. investment in 2019 was $37.8 billion, with a current-cost net stock of $74.3 billion.
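The paper's specific cost model is not reproduced here; as a stand-in, the sketch below applies the basic COCOMO organic-mode formula (effort in person-months = 2.4 × KLOC^1.05) to lines of code, with an assumed loaded monthly wage. Both parameters are illustrative assumptions, not the paper's.

```python
# Illustrative COCOMO-style development-cost estimate from code volume.
def cocomo_cost(lines_of_code: int, monthly_wage: float = 12_000.0) -> float:
    """Estimate development cost in dollars from new lines of code."""
    kloc = lines_of_code / 1_000
    effort_pm = 2.4 * kloc ** 1.05     # person-months (basic COCOMO, organic)
    return effort_pm * monthly_wage    # assumed loaded monthly compensation

# e.g., a repository gaining 250,000 new lines of code in a given year
print(f"${cocomo_cost(250_000):,.0f}")
```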

This study makes several important contributions. It fills a measurement gap by providing a novel approach to measuring investment in OSS. It thus contributes to the economic measurement literature by developing a methodology consistent with those of the national accounts that measure software investment in the United States. We adopt a cost model from software engineering and apply it to GitHub data. More specifically, our methodology draws from the current economic measurement of own-account software, that is, software created using internal resources rather than purchased or outsourced. Based on the development cost, we extend this cost measure to the measurement of OSS as a useful asset used in production.

The paper provides estimates of U.S. annual OSS investment based on the prices prevailing during the period the investment took place (nominal) and adjusted for inflation and quality changes (real), which allows for comparisons across time. We also provide the corresponding net stock estimates (i.e., the cumulative value of the asset) for the 2009–2019 period. Although the totality of OSS is unknown, we believe our estimates account for a significant portion of OSS development and make a significant contribution to a growing literature on measuring OSS innovation, which is poorly captured by traditional approaches. These OSS measures complement existing science and technology indicators on peer-reviewed publications and patents that are calculated from databases covering scientific articles and patent documents. In addition, with these estimates, we aim to contribute to the understanding of productivity and economic growth, both within and outside the business sector, and to encourage further research into the importance and contribution of OSS in the digital economy.
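For readers unfamiliar with these accounting terms, the following sketch illustrates the deflation and perpetual-inventory logic with toy numbers. The deflators and the geometric depreciation rate are assumptions made for illustration; only the 2019 nominal investment figure comes from the abstract.

```python
# Toy illustration: deflate nominal investment to real terms, then
# cumulate into a net stock via the perpetual inventory method.
nominal_investment = {2017: 30.1, 2018: 33.5, 2019: 37.8}   # $ billions (2019 from abstract)
deflator = {2017: 0.96, 2018: 0.98, 2019: 1.00}             # assumed, base year 2019
DELTA = 0.33          # assumed annual geometric depreciation rate for software

real_investment = {y: v / deflator[y] for y, v in nominal_investment.items()}

stock = 0.0
for year in sorted(real_investment):
    # K_t = (1 - delta) * K_{t-1} + I_t
    stock = (1 - DELTA) * stock + real_investment[year]
    print(year, round(stock, 1))
```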

10:30-12:00 Session 14F: Green Technologies
Chair:
Location: Room 330
10:30
Prospects for green transition and technological catch-up in Brazil

ABSTRACT. 1. Introduction

Climate change has motivated numerous national and multilateral institutions to set targets for reducing greenhouse gas emissions. Initiatives have emerged across various domains, such as the Green New Deal and the Big Push for Sustainability advocated by the Economic Commission for Latin America and the Caribbean (ECLAC). However, empirical evidence suggests that achieving this transformation may be more difficult than anticipated (Vogel & Hickel, 2023). In this context, technological change can play a crucial role in promoting the transition.

Some scholars argue that the green transition can open windows of opportunity for developing countries to catch up with developed countries in terms of technology (Lema et al., 2020; Lema & Rabellotti, 2023). However, to take advantage of this transition, developing countries must be able not only to incorporate new technologies but also to generate, adapt, and access them (Caravella et al., 2021). Corrocher et al. (2021) show that only a limited number of latecomer countries have managed to gain important positions in green technologies.

The nature of green technologies itself poses a challenge for developing countries. Green technologies involve the development of new products or practices (whether processes, organizational models, or marketing strategies) that minimize environmental risks, pollution, or the negative impacts of resource use (Kemp & Pearson, 2007). Compared to non-green technologies, they articulate a greater variety of technological components, have a higher degree of novelty, and exert a larger and more pervasive impact on subsequent developments (Barbieri et al., 2020; Dechezleprêtre et al., 2017). They build on both green and non-green previously accumulated knowledge and are closely related to a country's prior innovative capacity (Hascic et al., 2010; Montresor & Quatraro, 2020). Further empirical evidence indicates that collaboration among environmentally innovative organizations is greater (Cainelli et al., 2015; De Marchi, 2012). Consequently, countries with more robust national and regional innovation systems are more likely to engage in green technologies (Arranz et al., 2019).

This paper aims to analyze Brazil's potential to catch up technologically in the context of the green transition. To do so, we draw on patent data and network indexes to analyze Brazil's endowment of local green capabilities, its capacity to access external knowledge and its absorptive capacity, as well as the extent and integration of the national green innovation system. Brazil is an interesting case because the empirical evidence is not conclusive regarding the country's capacity to advance in the green transition and develop greener technological paths. Although the country has the most mature innovation system in South America and is increasingly able to access global innovation knowledge, its technological competencies still lag behind those of other emerging economies (Bianchi et al., 2023; Britto et al., 2021). Regarding green technologies in particular, UNCTAD depicts Brazil as one of the best-positioned emerging economies for the green transition (UNCTAD, 2023). However, empirical evidence indicates that Brazil's potential to incorporate distinct green technologies varies across sectors (de Melo et al., 2021; de Paulo & Porto, 2018; Françoso et al., 2024).

2. Data and methodology

In order to conduct our analysis, we collected patent data from the United States Patent and Trademark Office (USPTO) covering the period from 2003 to 2022. To identify patents related to green technologies, we adopted the 4-digit Y02 Cooperative Patent Classification (CPC) code, which encompasses "Technologies or Applications for Mitigation or Adaptation Against Climate Change". Based on this classification, we identified a total of 123,574 patents. We assigned patents to countries according to the inventor's location and divided the data into four non-overlapping periods: 2003-2007, 2008-2012, 2013-2017, and 2018-2022. This makes it possible not only to see an aggregate picture but also to identify tendencies and changes over time.

Next, we used the patents to construct international and intranational networks. With these data, we measure three distinct dimensions: 1. local knowledge endowment; 2. absorptive capacity and potential access to international knowledge; 3. integration and extent of the green national innovation system. To approximate the local knowledge endowment, we employ the measure of Relative Technological Advantage (RTA). To measure absorptive capacity and the potential to access international knowledge, we follow Bianchi et al. (2023) and Françoso & Hiratuka (2020) and use two network measures calculated on the international network: normalized degree centrality and coreness. To evaluate the integration and extent of the Brazilian green national innovation system, we apply three measures to the intranational network: the number of components, the size of the largest component, and transitivity. After calculating the indexes, we assess Brazil's position in terms of its potential for technological catch-up, comparing its indexes with those of the G7 countries and of countries that have successfully caught up in green technologies (Corrocher et al., 2021; Montenegro et al., 2021).
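The indexes above are standard; the sketch below shows how each could be computed with toy data (the study's actual implementation may differ). RTA follows the usual Balassa-style definition: a country's share of patents in a technology relative to that technology's share of all patents.

```python
# Toy computation of RTA plus the international and intranational
# network measures named above; data are illustrative, not the study's.
import pandas as pd
import networkx as nx

patents = pd.DataFrame({
    "country": ["BR", "BR", "US", "US", "KR", "KR"],
    "tech":    ["waste", "energy"] * 3,
    "n":       [40, 10, 300, 500, 120, 200],
})
counts = patents.pivot(index="country", columns="tech", values="n")
rta = (counts.div(counts.sum(axis=1), axis=0)          # country's tech mix
       / (counts.sum(axis=0) / counts.values.sum()))   # world tech shares
print(rta.round(2))            # RTA > 1 signals relative specialization

# International co-invention network: normalized degree centrality, coreness
G_int = nx.Graph([("BR", "US"), ("BR", "PT"), ("US", "KR"), ("US", "DE")])
print(nx.degree_centrality(G_int)["BR"], nx.core_number(G_int)["BR"])

# Intranational network: components, largest component size, transitivity
G_br = nx.Graph([(1, 2), (2, 3), (1, 3), (4, 5)])
comps = list(nx.connected_components(G_br))
print(len(comps), max(len(c) for c in comps), nx.transitivity(G_br))
```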

3. Preliminary results

Preliminary results indicate that Brazil has increasingly accumulated capabilities in certain green technologies, particularly those related to the production or processing of goods, as well as wastewater treatment and waste management. In terms of international network centrality, Brazil is close to Taiwan, near the bottom of the ranking. Brazil's centrality remains significantly lower than that of other countries, including latecomers like China and South Korea, indicating limited absorptive capacity. Throughout most periods, Brazil holds an intermediate position in the network: it is neither centrally located nor on the periphery. Once again, its position mirrors Taiwan's, but it is very far from Korea, China, and the G7 countries, which consistently occupy the center of the network. Brazil presents the lowest number of components, suggesting that green technology research is concentrated within a few groups; this is not the case for other countries. On the other hand, the size of Brazil's largest component shows an inconsistent trajectory, alternating between high and low values, a pattern observed across all countries in the sample. In terms of transitivity, Brazil ranks highest among the investigated countries. This finding suggests that knowledge flows relatively easily within Brazil; however, this result may be influenced by the country's generally low number of inventors.

10:45
Firms’ Knowledge Disclosure: Website, Publication, and Patent Data

ABSTRACT. This submission can be related to Thematic Panel: "AI in Science, Technology, and Innovation Policy Studies – Paradigm Shift or Necessary Compromise?"

---

The disclosures of knowledge resulting from private Research and Development (R&D) efforts have received increasing attention from innovation studies scholars and practitioners. While these disclosures can lead to unintended knowledge spillovers, hence hindering a firm from fully capturing the benefits of R&D, they can also help the firm to build its reputation, gain access to external knowledge and resources, and establish intellectual property rights (e.g., Alexy et al., 2013; Arora et al., 2018; Hicks, 1995; Rotolo et al., 2022). In this vein, research has provided evidence of the positive impact of knowledge disclosures on firms’ innovative performance (e.g., Blind et al., 2022; Jong & Slavova, 2014; Simeth & Cincera, 2015).

Research has, however, mostly focused on disclosure channels—scientific publications and patents—which are subject to well-established norms and institutional logic. While patents are granted for inventions that meet patent offices’ patentability criteria (e.g., in the case of the European Patent Office, an invention is patentable when it is “susceptible of industrial application,” “new,” and involves an “inventive step”), scientific publications are expected to undergo a process of academic scrutiny that aims to assess rigor and contribution to knowledge.

By contrast, knowledge disclosures reported in sources that are much less subject to norms and institutional logic, such as websites, remain largely unexplored. For example, a firm can disclose knowledge on its website that is not necessarily novel or susceptible of industrial application, and that does not necessarily provide a major step forward in the understanding of certain phenomena. Such disclosures nonetheless represent signals that can affect the firm's ability to attract resources, potential partners, shareholders, and investment.

The importance of increasing our understanding of firms’ knowledge disclosures on their websites becomes even more evident in light of recent findings on how firm-related text data can provide valuable insights into firms’ R&D activities and strategies while also complementing traditional data sources (e.g., Bellstam et al., 2021; Gatchev et al., 2022). Although some research efforts have gone into examining firms’ website data (Axenbeck & Breithaupt, 2021; Gök et al., 2015; Nathan & Rosso, 2022; Youtie et al., 2012), there is a lack of understanding of the extent to which firms disclose knowledge through this channel and how these disclosures overlap with or diverge from other channels of disclosure.

We aim to fill this gap by addressing the following research question: Do firms disclose scientific and technological knowledge on their websites that is not disclosed through publications and patents?

Building on signaling theory (Spence, 1973), we examine firms’ behavior in disclosing knowledge on their websites as a ‘signal’ to various stakeholders and compare website disclosures with those made in scientific publications and patents. Our analysis relies on a sample of UK firms across a representative set of sectors, observed over the period 2011-2022. For each firm, we longitudinally collected text data from its website, scientific publications, and patents; historical website data were collected using the Wayback Machine (https://web.archive.org/). We then relied on Natural Language Processing (NLP) and Large Language Models (LLMs) to map firms’ scientific and technological knowledge content as reported on their websites, in their publications, and in their patents.
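As an illustration of the collection step, here is a minimal sketch using the Wayback Machine's public availability endpoint; the domain and timestamp are placeholders, and the study's actual crawler is not reproduced here.

```python
# Look up the archived snapshot of a firm's homepage closest to a
# given year via the Wayback Machine availability API.
import requests

def closest_snapshot(url: str, year: int) -> str | None:
    r = requests.get("https://archive.org/wayback/available",
                     params={"url": url, "timestamp": f"{year}0701"},
                     timeout=30)
    r.raise_for_status()
    snap = r.json().get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None

print(closest_snapshot("example.com", 2015))   # placeholder domain
```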

For this mapping, we built on established classification frameworks, namely OpenAlex’s “Topic Classification” for publications and the Cooperative Patent Classification (CPC) for patents. Because website content is typically geared towards a broader audience, its scientific and technical content may be less explicit than that of publications and patents; LLMs are well suited to capturing these nuances. This process yielded a panel of 1,800 firm-year observations.

Preliminary analyses reveal that about 48% of the firm-year observations (N = 863) disclose content on websites that is not disclosed in publications and patents. We found that in about 69% of the firm-year observations within this subsample, 50% or more of the scientific and technological topics disclosed by firms on their websites were not disclosed in their publications. Similarly, in about 76% of the firm-year observations within the subsample, 50% or more of the scientific and technological topics disclosed by firms on their websites were not disclosed in their patents. These findings seem to suggest that websites represent a complementary channel of knowledge disclosure for a number of firms.
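These shares follow from a simple set comparison per firm-year; the sketch below uses hypothetical topic labels to show the measure.

```python
# Fraction of a firm-year's website topics absent from another channel.
def share_not_disclosed(website: set[str], other: set[str]) -> float:
    """Share of website topics that do not appear in the other channel."""
    return len(website - other) / len(website) if website else 0.0

web_topics = {"battery materials", "ml forecasting", "coatings"}   # hypothetical
pub_topics = {"coatings"}
pat_topics = {"battery materials"}

print(share_not_disclosed(web_topics, pub_topics))   # 0.67 vs publications
print(share_not_disclosed(web_topics, pat_topics))   # 0.67 vs patents
```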

A longitudinal analysis also revealed that the above shares of firm-year observations declined from 2011 to 2020 when website topics are compared with patent topics, while they remained stable in the website-publication comparison. We also performed a qualitative analysis of the content of a sample of firm-year observations to validate the findings and provide insight into the types of disclosures. In doing so, we adopted an explanatory mixed-methods research approach (Creswell & Creswell, 2023).

Our study contributes to the growing literature on text-based indicators of the innovation process by providing insights into the use of different channels of knowledge disclosure. We expect the findings to generate implications for policymakers and practitioners interested in promoting innovation and knowledge sharing among firms and across sectors.

11:00
Technology Needs Assessment Study Project for Climate Change in India- Case Study

ABSTRACT. Technology Needs Assessment (TNA) is a critical process for nations, businesses, and institutions to identify and evaluate the technological tools, systems, and resources they require to achieve their objectives. The importance of conducting a technology needs assessment lies in several key factors: aligning technology with national goals, optimizing resource allocation, improving efficiency and productivity, enhancing decision-making, mitigating risk, preparing for the future, promoting innovation, and enabling benchmarking and continuous improvement. Key components of TNA include stakeholder analysis, current technology evaluation, needs identification, and solution prioritization. The primary goals are to improve efficiency, drive innovation, and ensure sustainability. The outcome of an effective TNA is a strategic roadmap that guides future technology investments and implementations.

The Technology Needs Assessment (TNA) project in India, a key component of the National Mission on Strategic Knowledge for Climate Change (NMSKCC) of the Government of India, represents a forward-looking and strategic response to the challenges of climate change. This initiative demonstrates the country's efforts to integrate technological innovation with ecological sustainability, reflecting the evolving nature of technology in an ever-changing global context.

The Technology Information, Forecasting and Assessment Council (TIFAC), an autonomous body under the Department of Science and Technology (DST), has been serving as a technology think tank for the Government of India since 1988. At the forefront of technology foresight, assessment, and innovation support, TIFAC also spearheads climate change–related activities and works in close association with the Ministry of Environment, Forest and Climate Change, Government of India.

TIFAC, in association with DST, is implementing the Technology Needs Assessment Study for select sectors to address both the mitigation and adaptation perspectives of climate change. During the initial period, the project underwent a critical preparatory phase, laying essential groundwork for a wide-ranging technology needs assessment across sectors critical to India's development and environmental health. Following this foundational stage, the project made significant advances by identifying and evaluating cutting-edge technologies in key sectors, viz. Coal and Energy, Renewable Energy, Transport, Industrial Processes and Product Use (IPPU), Waste Management, Agriculture, Water Resources, Forestry, Urban Habitat, and Health. Each sector represents an opportunity for technological interventions to mitigate climate change effects and propel economic advancement.

The TNA project is more than a mere assessment of technology; it serves as a blueprint for India's journey towards minimizing greenhouse gas emissions and increasing energy efficiency across diverse sectors. In alignment with the ambitious goals laid out by Hon'ble Prime Minister Shri Narendra Modi, including the "Panchamrita" strategy proclaimed at COP26 in Glasgow, the project steers India's technological path in line with its climate commitments.

The TNA project stands as a crucial instrument in addressing the economic repercussions of climate change on key sectors, affecting the livelihoods and well-being of the populace. It provides a holistic methodological approach that integrates climate change actions within the ambit of sustainable development, aiming to balance economic growth, social welfare, and environmental preservation while curbing greenhouse gas emissions and bolstering climate resilience.

Challenges in Technology Needs Assessment

Despite its benefits, TNA faces several challenges. Common obstacles include resistance to change, inadequate stakeholder involvement, and limited resources. Mitigation strategies involve fostering a culture of innovation, ensuring inclusive stakeholder participation, and securing sufficient funding. Additionally, regular reviews and updates to the TNA process can address evolving needs and technological advancements. Lessons learned from overcoming these challenges highlight the importance of flexibility, continuous learning, and strong leadership in successful TNA implementation.

Future Trends in Technology Assessment

The landscape of technology assessment is continually evolving, driven by emerging technologies such as artificial intelligence, blockchain, and the Internet of Things (IoT). Predicted industry demands point towards increased automation, enhanced cybersecurity, and a greater emphasis on data analytics. To accommodate these trends, future assessments must prioritize adaptability, cross-disciplinary collaboration, and the proactive identification of technological opportunities. Recommendations for future assessments include integrating advanced analytical tools, fostering partnerships with technology experts, and maintaining a forward-looking approach to anticipate and leverage technological changes effectively.

This paper attempts to encapsulate the methodology, the technology trends prioritized, the challenges encountered, and suggested best practices, drawing on insights from the case study of carrying out the Technology Needs Assessment Study Project.