| 10:50 | An LLM-Augmented Knowledge Graph Framework for Quantitative Intelligence Analysis: A Case Study of All-Solid-State Lithium Batteries PRESENTER: Jinxin Dong ABSTRACT. Extracting deep technical intelligence from massive, unstructured scientific literature remains a challenging task for traditional bibliometric methods and topic models. This study proposes a framework coupling large language models (DeepSeek V3.2 and Qwen3-max) with domain knowledge graphs to transform unstructured text into computable knowledge. Knowledge is extracted via a constrained strategy to mitigate generative hallucinations, and the output is structured into fine-grained entity-relation triplets to support quantitative reasoning. The framework provides three representative analytical approaches: identifying latent technology pathways via standardized residuals, tracking milestone performance breakthroughs through heterogeneous indicator normalization, and profiling institutional competitiveness using integrated scale-quality metrics. It is validated through a case study on all-solid-state lithium batteries, achieving a mean score of 4.50/5.00 in expert evaluation. This methodology represents a systematic, automated, and scalable route to obtaining deep semantic insights and strategic forecasts from scientific literature. |
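The first abstract's pathway-identification step rests on standardized residuals over entity counts. As a minimal illustration (toy data; the paper does not specify its exact variant, so Pearson residuals on a contingency table are assumed here):

```python
import math

def standardized_residuals(table):
    """Pearson standardized residuals for a contingency table.

    table: dict mapping (row, col) -> observed count.
    Returns dict mapping (row, col) -> (observed - expected) / sqrt(expected).
    """
    rows = sorted({r for r, _ in table})
    cols = sorted({c for _, c in table})
    total = sum(table.values())
    row_sum = {r: sum(table.get((r, c), 0) for c in cols) for r in rows}
    col_sum = {c: sum(table.get((r, c), 0) for r in rows) for c in cols}
    resid = {}
    for r in rows:
        for c in cols:
            expected = row_sum[r] * col_sum[c] / total
            observed = table.get((r, c), 0)
            resid[(r, c)] = (observed - expected) / math.sqrt(expected)
    return resid

# Hypothetical counts: papers linking an electrolyte family (rows) to a period (cols).
counts = {
    ("sulfide", "2019"): 10, ("sulfide", "2023"): 40,
    ("oxide",   "2019"): 30, ("oxide",   "2023"): 20,
}
res = standardized_residuals(counts)
# Cells with residuals well above ~2 are over-represented pairings,
# i.e. candidates for the "latent technology pathways" the abstract describes.
hot = {k for k, v in res.items() if v > 2}
```

Cells whose residual exceeds roughly 2 deviate from independence by more than two standard deviations, which is the usual screening threshold for this kind of analysis.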
| 11:10 | Understanding Technology Evolution from a Problem Perspective: Integrating TRIZ with Patent Analysis PRESENTER: Yue Li ABSTRACT. Understanding technology evolution from the perspective of problems can reveal how core challenges shift over time and what drives technological progress. However, existing methods face two limitations: problem representations based on SAO structures or sentence-level extraction lack standardization for cross-document alignment, and citation networks reflect document relationships rather than problem inheritance. This study proposes a framework combining TRIZ contradiction parameters with hypergraph structures to analyze technology evolution. Large language models are employed to extract contradiction parameter pairs from patent texts, transforming technical problems into standardized representations. A dynamic hypergraph is then constructed with patents as hyperedges and contradiction components as nodes, preserving the multi-element relationships within each invention. Temporal analysis of the hypergraph identifies core problems across different periods and tracks shifts in problem focus. The framework is applied to the fuel cell vehicle domain, revealing how problem focus has migrated across different technological stages. |
| 11:30 | Patterns of Inventive Problem Solving in Patents: TRIZ Mapping, Functional Creativity, and Patent Value PRESENTER: Joe Waterstraat ABSTRACT. This research-in-progress examines how inventive strategy patterns reflected in patents relate to functional creativity and how both relate to patent value. We use TRIZ as an ex post taxonomy to map patented technical solutions to the 40 inventive principles and operationalize functional creativity using the Creative Solution Diagnosis Scale (CSDS), which emphasizes usefulness and effectiveness in addition to novelty. Because manual TRIZ and CSDS coding does not scale to large patent corpora, we implement an LLM-based pipeline that summarizes the novel inventive steps and then produces TRIZ principle scores and CSDS item ratings from this summary. We test associations between (i) the number of TRIZ principles in a patent and functional creativity, (ii) the number of TRIZ principles and patent value indicators, and (iii) functional creativity and patent value indicators, using regression models with firm and issue-year fixed effects and patent-level controls. Patent value is measured using three commonly used proxies: a market-based value measure (KPSS), forward citations in a fixed five-year window, and renewals at 3.5 years. Preliminary results show that patents mapped to a larger number of TRIZ principles tend to receive higher functional creativity scores. Functional creativity is positively associated with all three value proxies when included alongside TRIZ breadth. In contrast, the conditional association between TRIZ breadth and value differs across proxies once functional creativity is included: it is positive for forward citations, statistically indistinguishable from zero for KPSS, and negative for renewals. 
These findings are associational and motivate additional validation and robustness work, including prompt/model sensitivity checks, targeted human-coded validation, and analyses that move beyond principle counts to principle categories and creativity dimensions. |
| 11:50 | The Evolution of the Dynamics of AI Innovation in Small Emerging Economies PRESENTER: Hung-Chi Chang ABSTRACT. How do innovation ecosystems evolve in response to global technological shifts and institutional adaptation? How does the high-tech sector in small emerging economies engage with the global innovation ecosystem? This study investigates the transformation of the artificial intelligence (AI) innovation ecosystem in Taiwan, focusing on how firms engage in knowledge exchange, follow international regulations, and adapt to industrial transformation. Adopting a multi-level perspective (MLP) as the framework, we analyze the interactions between technological niches, regimes, and landscapes. Combining bibliometrics, scientometric mapping, social network analysis (SNA), and system dynamics analysis, we trace knowledge flows, collaboration patterns, and policy feedback loops in Taiwan’s AI sector from 2000 to 2023. Our findings show that Taiwan’s AI innovation ecosystem is closely connected to global knowledge networks, particularly in semiconductor and biomedical applications. Taiwanese firms rely on their hardware strengths but seek strategic partnerships with global technology leaders to address weaknesses in software and algorithms. However, despite strong global connections, Taiwan’s AI sector faces challenges in establishing a distinctive niche beyond its traditional hardware dominance. This study contributes to the literature on innovation systems by providing empirical evidence of industrial transformation. The findings offer insights into how firms in small emerging economies adapt to global AI regulations and participate in knowledge markets. Our mixed-methods approach offers a new way to analyze the co-evolution of technology, institutions, and firm strategies. This study extends the MLP framework by integrating qualitative system dynamics to map policy-innovation feedback loops in small open economies. 
The findings provide suggestions for policymakers to enhance innovation capabilities and for firms to develop competitive AI business models. |
| 10:50 | Node Dynamics and Structural Trees in Technological Evolution: Diffusion Patterns in Term Networks PRESENTER: Mingli Ding ABSTRACT. Technological evolution tends to move toward increasing complexity, driven by internal contradictions within technological systems. However, innovation is not a linear process but a nonlinear one involving the interaction of multiple factors, exhibiting dynamic changes over time. Therefore, this study proposes an analytical method integrating node dynamics and spanning trees to model the diffusion patterns of evolving terminology. Specifically, we construct a temporal network of terms based on co-occurrence relationships, extract an update tree of emerging terms, and introduce the Hawkes process to model the diffusion dynamics of terms and their associations over time. The method then projects the future evolutionary trajectories of technical terms. Empirical analysis is conducted using 172,362 patents and 2,684,847 papers from the AI field, constructing patent and paper term networks, respectively. The results demonstrate that the proposed method performs robustly, particularly in node and edge prediction tasks within patent term networks. Compared with baseline methods, it achieves improvements of 0.008 and 0.0123 in Brier scores, and 0.3901 and 0.2593 in AUPRC for node and edge prediction, respectively. Overall, this study provides a novel analytical perspective for understanding technological evolution and offers valuable insights for technology foresight and innovation breakthroughs. |
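The Hawkes process invoked in the abstract above models self-exciting diffusion: each appearance of a term transiently raises the rate of future appearances. A minimal sketch of the conditional intensity with an exponential kernel (parameter values are illustrative, not the paper's):

```python
import math

def hawkes_intensity(t, events, mu=0.2, alpha=0.8, beta=1.0):
    """Conditional intensity of a univariate Hawkes process with an
    exponential kernel: lambda(t) = mu + sum_{t_i < t} alpha*exp(-beta*(t - t_i)).
    mu is the baseline rate; each past event at t_i adds an excitation
    that decays at rate beta, mimicking bursty term diffusion.
    """
    return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events if ti < t)

events = [1.0, 1.5, 4.0]                     # times at which a term (re)appears
lam_before = hawkes_intensity(0.9, events)   # no past events yet -> baseline only
lam_burst = hawkes_intensity(1.6, events)    # just after two close events -> elevated
```

Fitting (mu, alpha, beta) to observed term-appearance times by maximum likelihood is the standard next step; the sketch only shows the intensity itself.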
| 11:10 | Identification of Potential Knowledge Diffusion Pathways from Science to Technology ABSTRACT. The efficient transformation of scientific knowledge into technological innovation represents a critical challenge in the national innovation system. However, existing citation-based and text-mining approaches inadequately capture tacit knowledge connections between science and technology. This study proposes a novel framework integrating semantic representation and network structure to identify potential knowledge diffusion pathways. We develop a Dual-Channel Heterogeneous Graph Transformer (DC-HGT) model combined with cross-domain interaction mechanisms to align scientific and technological nodes within a unified semantic space. Subsequently, a link prediction model is employed to identify latent S-T associations that are not revealed by explicit citations or co-occurrence. A multidimensional index comprising topic relevance, tacit knowledge relevance, and transformation probability is established for the systematic evaluation of potential diffusion paths. An empirical analysis in the stem cell domain identified 25 scientific topics, 29 technological topics, and 14 high-potential knowledge diffusion paths. The results reveal diversified cross-disciplinary knowledge flow patterns. Notably, technological topic 26 (stem cell-mediated anti-tumor therapy) receives knowledge contributions from 11 distinct scientific topics, while scientific topic 7 (mechanisms of microenvironment-regulated cell behavior) radiates to 4 different technological topics, demonstrating both the broad radiating capacity of core basic research and the knowledge convergence effect in key application areas. This framework can provide actionable decision-making support for research funding agencies and innovation policymakers. |
| 11:30 | A Fine-Grained Main Path Analysis Method for Tracing Knowledge Flow in Citation Networks PRESENTER: Yuan Wang ABSTRACT. Main Path Analysis (MPA) is a widely used method for tracing knowledge flows in citation networks. Conventional MPA approaches treat documents as vertices and overlook the substantive content within the documents, which restricts a deeper understanding of knowledge evolution and reduces interpretability. To overcome this limitation, we propose a deep learning–augmented, entity-centered MPA framework that supports entity-based path discovery and enhances interpretability. Our method follows a four-step pipeline: (1) data preprocessing to structure the citation network; (2) knowledge entity extraction using a BERT–BiLSTM–CRF model; (3) extraction of multiple main paths via a semantic-aware main path method; and (4) identification of strongly associated entity pairs between citing and cited documents using an attention model with a three-level masking mechanism, which filters out irrelevant entity pairs and enables drilling down from document-level to entity-level representations, thereby generating fine-grained main paths. We validate the proposed approach through extensive experiments on a patent dataset from the thin-film head domain in computer hardware. Results demonstrate that our method reveals finer-grained knowledge flows across key subfields and improves the interpretability of candidate paths. |
| 11:50 | Forecasting Conceptual Diffusion in Science PRESENTER: Thomas Maillart ABSTRACT. Understanding and anticipating scientific change requires models that distinguish between endogenous consolidation and exogenous diffusion of scientific concepts. Using the quantum computing subtree of concepts in OpenAlex, we construct a temporally resolved concept co-occurrence network and track each concept pair through its upstream citation lineage and downstream diffusion. We train LightGBM models on distributional and diversity-aware features to predict four outcomes: endogenous reinforcement, exogenous diffusion, their ratio, and diffusion entropy. After controlling for overall publication growth of the scientific body, endogenous reinforcement proves largely unpredictable. In contrast, exogenous diffusion and entropy are strongly predictable (R2 up to 0.78) and are driven by upstream heterogeneity, citation breadth, and distributional dispersion, as shown by SHAP analyses. Case studies reveal that sharp entropy increases coincide with the opening of new conceptual frontiers, while entropy collapses signal technological convergence or paradigm displacement. These results demonstrate that conceptual diffusion is governed by stable structural regularities embedded in semantic and citation environments. By identifying early diversity-based signals of cross-domain uptake, the approach provides a scalable foundation for anticipatory scientometrics, technology foresight, and innovation-oriented policy analysis in rapidly evolving research fields. |
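The "diffusion entropy" outcome in the abstract above is, in the usual formulation, the Shannon entropy of where a concept pair's downstream uptake lands. A minimal sketch on toy field counts (assuming the standard definition; the paper may weight or normalize differently):

```python
import math

def diffusion_entropy(counts):
    """Shannon entropy (bits) of a downstream-uptake distribution,
    e.g. counts of citing papers per field. High entropy signals broad
    cross-domain diffusion; entropy near zero signals concentrated uptake.
    """
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in ps)

broad = diffusion_entropy([5, 5, 5, 5])    # spread evenly over four fields
narrow = diffusion_entropy([17, 1, 1, 1])  # dominated by a single field
```

On this reading, the "entropy collapses" the abstract associates with paradigm displacement correspond to uptake concentrating into few fields over time.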
| 13:40 | MT-KDF: A Multi-Teacher Knowledge Distillation Framework with Embedding Enhancement and Multi-Scale Feature Fusion for Chinese Scientific Entity Recognition PRESENTER: Chunjiang Liu ABSTRACT. To address challenges in Chinese scientific literature, such as severe term nesting, fuzzy boundaries, and long-tail uneven class distributions, this paper proposes a Multi-Teacher Knowledge Distillation Framework (MT-KDF) for entity recognition, integrating multi-scale feature enhancement and Low-Rank Adaptation (LoRA) fine-tuning. To overcome the bottleneck of single models in feature extraction, we first construct a specialized teacher architecture integrating "Character-Lexicon" dual embedding and an Adaptive Temporal Convolutional Network (ATCN). This architecture is designed to capture differentiated local dependency features—ranging from extremely short abbreviations to long descriptive terms—through parallel channels, thereby strengthening the perception of domain terminology. On this basis, a "Single-Entity Expert" strategy is employed to independently train multiple teacher networks, which generate consistent global supervision signals through multi-channel knowledge aggregation to guide student model training. Finally, LoRA technology and a hybrid loss function are introduced in the student model phase to achieve efficient transfer of domain knowledge while freezing the majority of backbone parameters. Experimental results on the SciCN and CMeEE datasets demonstrate that the proposed method achieves highly competitive performance in complex contexts via expert knowledge aggregation. Notably, MT-KDF surpasses mainstream pre-trained baselines and Large Language Models (LLMs) in key metrics while maintaining a superior trade-off between accuracy and computational efficiency through lightweight fine-tuning. |
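The multi-teacher aggregation step in MT-KDF can be illustrated with a common knowledge-distillation recipe: average the teachers' temperature-softened class distributions into one soft target. This is a generic sketch under that assumption; the paper's exact aggregation (and its hybrid loss) may differ:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax: higher T flattens the distribution."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def multi_teacher_soft_labels(teacher_logits, T=2.0):
    """Aggregate several single-entity 'expert' teachers into one global
    supervision signal by averaging their softened distributions.
    teacher_logits: list of per-teacher logit vectors over entity classes.
    """
    probs = [softmax(z, T) for z in teacher_logits]
    n = len(probs)
    return [sum(p[i] for p in probs) / n for i in range(len(probs[0]))]

# Two toy teachers scoring three hypothetical entity classes.
target = multi_teacher_soft_labels([[2.0, 0.5, 0.1], [1.5, 1.0, 0.2]])
```

The student is then trained against `target` (typically via KL divergence) alongside the hard-label loss; under LoRA only low-rank adapter weights are updated.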
| 13:53 | Tracking Interdisciplinary Knowledge Evolution with LLM-Augmented Semantic BERTopic: A Multidimensional Framework for Complex Research Domain Analysis PRESENTER: Hanbai Wang ABSTRACT. This study presents a multidimensional framework for tracking interdisciplinary knowledge evolution by integrating Large Language Model (LLM)-augmented semantic BERTopic modeling with advanced temporal modeling techniques. While conventional approaches, such as bibliometrics and tech mining, often struggle to capture the dynamic and evolving nature of interdisciplinary relationships due to their inability to handle large-scale data and complex semantic structures, our framework systematically models evolution through three replicable phases: (1) comprehensive data collection and structured information extraction; (2) LLM-augmented semantic representation and hierarchical organization of research topics through automated keyword generation, topic interpretation, and distance-based clustering; (3) extension of temporal pattern analysis with topic strength calculations and network metrics. By leveraging LLM to enhance semantic granularity, we reduce manual effort while improving the detection of interdisciplinary knowledge diffusion. Applying this multidimensional framework to Embodied AI (31,618 publications, 2000–2024), we identified eight interconnected research directions and quantified a four-stage evolutionary trajectory characterized by increasing interdisciplinary collaboration. These findings reveal the evolving nature of Embodied AI research, highlighting the growing importance of interdisciplinary collaboration in advancing this field. This work provides actionable insights for researchers and funding agencies, equipping them with a powerful new lens to visualize, understand, and navigate intricate interdisciplinary research landscapes. |
| 14:06 | Tesla Shock and Technological Structure Upgrading in China's New Energy Vehicle Industry: An LLM-based Semantic Embedding Analysis PRESENTER: Tan Yifan ABSTRACT. Industrial upgrading is reflected not only in the growth of innovation output but also in the deepening of innovative activities from the product end toward foundational technology domains. However, traditional IPC-based patent metrics lack the semantic granularity to determine whether a patent is closer to basic research or to product application. This paper combines large language model semantic embedding with causal inference to propose a new method for measuring this technology distance. Using product-related patents from China's patent-intensive product certification platform as a "product-end" reference, we employ the BGE large language model to convert each city patent into a text vector and compute its average similarity to the Top-k nearest product-end patents, thereby quantifying how close a city's innovation is to product application. A higher similarity indicates innovation closer to product application; a lower similarity indicates innovation biased toward basic research. Applying this method to China's new energy vehicle (NEV) industry, we use a difference-in-differences design with the 2018 approval of Tesla's wholly owned manufacturing plant as the exogenous shock. We find that: (1) from 2000 to 2023, the technology distance between urban NEV innovation and the product end gradually widened, reflecting the industry's upgrading path from "product-oriented imitation" toward "foundational technology breakthrough"; (2) Tesla's entry significantly widened this technology distance, indicating that external competitive pressure promoted industrial upgrading; (3) this effect operates primarily through technological deepening in core component sectors and R&D upgrading among enterprise innovators. 
These findings offer new micro-level evidence on how external competition promotes industrial upgrading and provide a replicable methodological paradigm for measuring patent technology distance. |
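The distance measure described in the abstract above reduces to an average cosine similarity against the k nearest product-end embeddings. A minimal sketch with toy 3-d vectors standing in for BGE sentence embeddings (names and data are illustrative):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def product_proximity(patent_vec, product_vecs, k=2):
    """Average cosine similarity to the k nearest product-end patent
    embeddings. Higher = closer to product application; the paper's
    'technology distance' widens as this score falls."""
    sims = sorted((cosine(patent_vec, p) for p in product_vecs), reverse=True)
    return sum(sims[:k]) / k

# Toy embeddings: a product-end cluster plus two city patents.
product_end = [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.0, 1.0, 0.0]]
applied = [0.95, 0.05, 0.0]   # near the product cluster
basic = [0.0, 0.1, 1.0]       # far from it, i.e. basic-research-leaning
score_applied = product_proximity(applied, product_end)
score_basic = product_proximity(basic, product_end)
```

At corpus scale one would batch-encode with the actual BGE model and use an approximate nearest-neighbour index rather than this exhaustive loop.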
| 14:19 | LLM-based SAO Semantic Extraction and Automatic Technology Topic Identification Method PRESENTER: Xuemei Yu ABSTRACT. This paper proposes an automated approach for technology theme identification based on patent semantic analysis and large language models (LLMs). Using patent texts as the research object, the method systematically extracts Subject–Action–Object (SAO) structures to represent technical knowledge in a structured form, enabling the automatic identification of technology themes. The approach consists of three key stages: first, patent texts are preprocessed and semantically parsed using LLMs to generate SAO triplets, capturing technical entities, functional actions, and target objects; second, the extracted SAOs are vectorized and clustered using unsupervised methods to identify semantically and functionally related technical units, abstracting from microscopic SAOs to mesoscopic technology themes; finally, LLMs are employed to summarize the semantics of each cluster and generate interpretable technology labels. This end-to-end workflow achieves structured representation of technical knowledge and intelligent clustering analysis. Experimental results demonstrate that the proposed method achieves high precision, coverage, and semantic interpretability, providing an effective tool for technology intelligence mining and technological evolution analysis. |
| 14:32 | LLM-Enhanced Graph Mining for Adaptive Technology Tree Building PRESENTER: Hui Zhang ABSTRACT. Technology trees serve as essential tools for systematically structuring technical knowledge and visualizing hierarchical relationships, identifying development gaps, and supporting strategic R&D decision-making. However, as technological systems grow increasingly complex, traditional expert-driven methods for constructing these trees suffer from inefficiency, subjectivity, and a lack of timely updates. To address these challenges, this study proposes an adaptive framework that integrates large language models (LLMs) with graph mining techniques. The proposed methodology proceeds in three main steps. First, the study employs LLM-based few-shot prompting to accurately extract technical entities and hierarchical relations from unstructured patent texts. Second, it utilizes SBERT embeddings to compute semantic similarities, constructing a network that captures latent associations among entities. Third, an improved community clustering algorithm combined with constrained Depth-First Search transforms this network into a structured, layered technology tree. An empirical case study on Electric Vehicle (EV) charging demonstrates the feasibility of our approach. The proposed automated method for technology tree construction significantly enhances objectivity and optimizes efficiency, and provides a novel perspective for technology forecasting and strategic planning. |
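The final step of the framework above, turning an entity network into a layered tree via constrained depth-first search, can be sketched as follows (toy EV-charging entity labels are hypothetical; the paper's community-clustering stage is omitted):

```python
def build_tree(root, children_of, max_depth=3):
    """Constrained depth-first traversal that flattens an entity network
    into a layered technology tree: each node is assigned the depth at
    which it is first reached, revisits are skipped, and expansion stops
    at max_depth. Returns {node: depth}.
    """
    layers, seen = {}, set()

    def dfs(node, depth):
        if node in seen or depth > max_depth:
            return
        seen.add(node)
        layers[node] = depth
        for child in children_of.get(node, []):
            dfs(child, depth + 1)

    dfs(root, 0)
    return layers

# Toy entity network with a cycle back to the root.
net = {
    "EV charging": ["wireless charging", "fast charging"],
    "fast charging": ["thermal management", "EV charging"],
}
tree = build_tree("EV charging", net)
```

The visited-set check is what makes the traversal "constrained": it breaks cycles in the similarity network so the output is a proper tree rather than a graph.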
| 14:45 | From Predicting the Future to Sensing the Future: AI-Enabled Paradigm Shift in Technology Foresight ABSTRACT. Technology foresight has long served as a key analytical tool for supporting science, technology, and innovation (STI) policy and strategic decision-making. Traditional foresight practices are predominantly grounded in a prediction-oriented paradigm, which assumes that future technological trajectories can be reasonably inferred through trend extrapolation, expert judgment, and structured forecasting methods. However, increasing technological uncertainty, non-linearity, and cross-domain convergence have progressively challenged the effectiveness of this paradigm, particularly in identifying early-stage and weak signals of potentially transformative technologies. Recent advances in artificial intelligence (AI), especially in large-scale data processing, pattern recognition, and knowledge representation, have introduced new possibilities for technology foresight. Rather than merely enhancing existing tools, AI is reshaping the underlying logic of foresight practices. This paper argues that AI is enabling a paradigm shift in technology foresight—from an emphasis on predicting future outcomes toward sensing emerging possibilities under conditions of deep uncertainty. Drawing on a methodological analysis of traditional foresight approaches and recent developments in AI-enabled foresight practices, this study conceptualizes the core features of this paradigm shift. Specifically, it examines changes in the object of foresight (from dominant trends to weak signals and possibility spaces), the operational mode (from episodic forecasting to continuous sensing), and the human–machine relationship (from expert-centered judgment to human–AI collaboration). The paper further clarifies that the proposed paradigm shift is methodological rather than Kuhnian in nature, reflecting a reconfiguration of foresight logic and practice rather than a scientific revolution. 
By articulating the conceptual foundations of AI-enabled “sensing-oriented” technology foresight, this study contributes to the theoretical understanding of foresight method evolution and provides a framework for analyzing emerging foresight practices in highly uncertain technological environments. |
| 13:40 | Scientific Relatedness Constrains Novelty in Global Sustainability Science PRESENTER: Mingze Zhang ABSTRACT. Scientific novelty plays a crucial role in expanding the frontiers of knowledge, driving innovation, and responding to urgent sustainability challenges. This research explores how a nation’s scientific relatedness, defined as the proximity of its existing knowledge base to a specific Sustainable Development Goal (SDG) domain, relates to the further production of novel scientific discoveries. Analyzing more than 4 million SDG-related publications from 165 countries between 2000 and 2023, we reveal a dual effect of scientific relatedness on future knowledge production. While it strongly promotes subsequent research productivity in a domain, it also suppresses scientific novelty. This “scientific relatedness penalty” is especially marked in the Global South. These findings make contributions by clarifying how nations can balance the innovation paradox, the inherent tension between leveraging existing competencies and pursuing novel directions. For policy makers, our outcomes highlight the need to enhance global scientific connectivity to support transformative sustainability science. |
| 14:00 | State of the Art of Novelty Indicators PRESENTER: Deyun Yin ABSTRACT. Novelty is a core value in scientific research, and its measurement has been of wide scholarly and practical interest. Numerous bibliometric indicators for novelty have been proposed, some shared in open repositories, which have facilitated empirical investigation into scientific novelty. However, there remains a fundamental limitation. That is, we have insufficient knowledge as to what these indicators truly measure because of limited efforts to examine measurement validity. This study addresses this gap by evaluating a range of novelty indicators, employing various operationalisation strategies, against self-reported novelty assessments obtained from our originally designed questionnaires covering multiple novelty dimensions. Our analyses examining the correlation between the self-assessed scores and bibliometric indicators offer several insights. First, while most indicators detect some aspects of novelty, a single indicator may not sufficiently capture all forms of novelty. Second, a cross-disciplinary comparison reveals that indicators’ performance varies across disciplines – some indicators demonstrate consistent correlations across disciplines while others show correlations only in limited disciplines. Third, as employing language models in novelty evaluation has become common, we compare the static language model and the contextual large language model, finding that indicators based on the latter outperform those based on the former. Fourth, we examine ex-post indicators, which require post-publication data (e.g., forward citation), and find that they offer no clear advantages over ex-ante indicators in detecting novelty. These findings highlight both the potential and limitations of existing indicators and offer implications for the future development and application of novelty indicators. |
| 14:20 | Interpretable Forecasting of Scientific Breakthroughs from Concept Network Dynamics PRESENTER: Thomas Maillart ABSTRACT. We introduce an interpretable machine-learning approach that forecasts emerging links between research concepts by modelling how OpenAlex concept networks in quantum computing evolved from 1990 to 2023. Using 59 semantic and topological features, a two-stage LightGBM model predicts both the formation and growth of concept pairs (AUC ≈ 0.95). Its regression performance degrades only gradually: RMSLE increases from 0.45 at one year to 0.6 at five years, meaning that predicted link strengths stay within roughly a factor of 2 despite exponential growth. Feature attribution shows that structural factors, particularly Adamic–Adar similarity and degree-based Hadamard measures, consistently drive forecasting accuracy. These patterns suggest that breakthroughs tend to emerge in tightly connected sub-networks where ideas recombine rapidly. Two expert-validated examples, quantum annealing and AI-enabled quantum architectures, illustrate how the model captures technological convergence as anticipated by experts. Building on these findings, we outline a three-layer decision architecture that connects automated detection, expert translation, and institutional integration, to support evidence-based research strategy and policy. The framework offers a reproducible foundation for transforming large-scale knowledge data into actionable intelligence for science and technology governance. |
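The Adamic–Adar similarity highlighted in the abstract above is a standard link-prediction feature: common neighbours count more when they are themselves rare. A minimal sketch on a toy concept graph (concept names are illustrative):

```python
import math

def adamic_adar(g, u, v):
    """Adamic-Adar index for a candidate concept pair (u, v):
    sum over common neighbours w of 1 / log(degree(w)).
    g: dict mapping node -> set of neighbour nodes.
    """
    common = g[u] & g[v]
    return sum(1.0 / math.log(len(g[w])) for w in common if len(g[w]) > 1)

# Toy co-occurrence graph: two concepts sharing two intermediaries.
g = {
    "qubit": {"error correction", "annealing"},
    "machine learning": {"error correction", "annealing"},
    "error correction": {"qubit", "machine learning", "annealing"},
    "annealing": {"qubit", "machine learning", "error correction"},
}
score = adamic_adar(g, "qubit", "machine learning")
```

In a pipeline like the one described, this score would be one of many features fed to the LightGBM stages, with SHAP values then attributing predictions back to it.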
| 14:40 | The Price of “Standing on the Shoulders of Giants”: Unpacking the Novelty–Impact Paradox in Science PRESENTER: Fan Zhang ABSTRACT. "Standing on the shoulders of giants" has long been recognized as a fundamental pathway for scientific innovation. However, a systematic analysis of 1.64 million papers quantifies the true cost of citing classic literature, revealing a pervasive Novelty–Impact Paradox: papers that cite highly innovative classics exhibit significantly higher linguistic novelty, whereas their explicit impact (measured by citation counts) suffers a significant innovation penalty (a decline of approximately 15.9%). This paradox is subject to triple moderation by disciplinary context, the type of cited classics, and the stage of knowledge evolution. We find that the paradox is more pronounced in the natural sciences and in same-discipline citation scenarios. The paradox onset point—the time at which the paradox first manifests—appears earlier for practical classics than for theoretical classics. Furthermore, traditional citation metrics primarily capture impact during the knowledge creation phase, creating a time blind spot regarding knowledge flows during the diffusion and maturity phases, thereby exacerbating the paradox. By introducing "novelty reuse" as a proxy for implicit impact, our study demonstrates that while citing classics may reduce explicit impact, it significantly fosters the implicit dissemination of knowledge and conceptual inheritance, thus partially mitigating the conflict between novelty and impact. These findings challenge monolithic and myopic impact evaluation paradigms. They provide a theoretical and empirical basis for constructing a more comprehensive and time-sensitive scientific evaluation system, while also offering strategic insights for researchers on how to critically inherit classics to achieve sustainable innovation in practice. |
| 13:40 | From Restrictions to Opportunities: A Text-Based Framework for Cross-Industry Technology Opportunity Analysis PRESENTER: Yutong Chuang ABSTRACT. This study proposes a comprehensive text-based framework for cross-industry technology opportunity analysis to transform technological restrictions into opportunities. Compared with existing research on technology opportunity analysis, we address the limitations of traditional patent-based approaches, which predominantly confine analysis to single technological domains and fail to capture cross-industry application potential under restriction conditions. We introduce an integrated analytical framework that systematically processes patent data and policy texts, extracts and expands technical vocabularies from CCL restrictions and SEI classifications through semantic modeling, and applies a weak signal detection methodology combining text dissimilarity analysis with KIM-KEM composite charting for weak-signal terms. We then demonstrate how this framework works by examining China's new energy vehicle industry under U.S. export controls. Through cross-industry technology mapping, we identified systematic correlations between restricted technologies and industry applications. Using weak signal detection, we derived key terms including "airflow," "green," and "cool control" from outlier patent documents. Through SAOX semantic analysis, we extracted multiple semantic structures revealing cross-industry applications such as intelligent thermal management systems, green energy ecosystem integration, and intelligent thermal control systems. We built a comprehensive opportunity transformation framework based on these results to explore promising cross-industry opportunities for organizations facing technological restrictions. We contribute to this research field by enabling organizations to leverage existing technological foundations to systematically identify cross-industry opportunities beyond traditional single-domain analytical constraints. |
| 14:00 | How Disruptive Traits of Highly Cited Patents Affect Technology Diffusion: Evidence from USPTO Patents PRESENTER: Hongye Zhao ABSTRACT. In corporate innovation decisions, the selection of foundational knowledge often involves a critical trade-off: prioritize highly cited patents widely recognized in the industry, or invest in disruptive technologies with breakthrough potential? However, existing studies tend to treat highly cited patents as a homogeneous group, ignoring how their disruptive traits affect technology dissemination differently. To fill this gap, this study employs a knowledge flow perspective to examine how the subsequent technology diffusion performance of patents differs after absorbing different types of foundational knowledge. Based on 8,032,090 utility patents granted by the United States Patent and Trademark Office (USPTO) from 1980 to 2024, we employ the PatentSBERTa patent text embedding model to assess semantic similarity between patents, define highly cited patents using five-year forward citations, and identify disruptive patents via the disruption index (DI). The study draws three main conclusions: First, the semantic similarity between patents and the technologies they cite has steadily declined, which confirms that technological innovation is shifting from a single knowledge base to diversified knowledge integration. Second, subsequent patents citing highly cited patents exhibit a long-term and stable advantage in technological diffusion. Third, disruptive patents exhibit lower semantic similarity with the subsequent patents that cite them, and they hinder the technological diffusion of those subsequent patents. Supported by large-scale data, this study offers a new empirical view on patent technology diffusion mechanisms. It also provides key practical guidance for enterprises to balance innovation certainty and breakthrough potential, and improve their knowledge base choices. |
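The disruption index (DI) referenced in this abstract is not defined in the program; assuming the widely used CD-index formulation over citation sets (the patent identifiers below are invented), a minimal sketch:

```python
def disruption_index(citers_of_focal, citers_of_refs):
    """Simplified disruption index for one focal patent.

    n_f: patents citing the focal patent but none of its references
    n_b: patents citing both the focal patent and at least one reference
    n_r: patents citing only the focal patent's references
    DI = (n_f - n_b) / (n_f + n_b + n_r); +1 = disruptive, -1 = consolidating.
    """
    citers_of_focal = set(citers_of_focal)
    citers_of_refs = set(citers_of_refs)
    n_b = len(citers_of_focal & citers_of_refs)
    n_f = len(citers_of_focal - citers_of_refs)
    n_r = len(citers_of_refs - citers_of_focal)
    total = n_f + n_b + n_r
    return (n_f - n_b) / total if total else 0.0


# Toy example: A and C cite only the focal patent, B bridges it to its
# references, D cites only the references -> DI = (2 - 1) / 4 = 0.25.
print(disruption_index({"A", "B", "C"}, {"B", "D"}))
```

The actual study may use a variant (e.g., excluding n_r or applying a citation window); this sketch only conveys the set-based logic.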
| 14:20 | A Research on Construction of Technological Innovation Space Based on Fitness Landscape PRESENTER: Yue Zhang ABSTRACT. To reveal the evolutionary mechanisms of innovation pathways in complex technological systems, this study introduces a fitness landscape approach to construct a model of technological innovation space. Using a patent-based NK modeling framework, the research identifies core technological elements and their interactions to reconstruct a performance “topography” of the design space. A local hill-climbing algorithm is applied to simulate the evolutionary process of technology combinations, analyzing issues of path dependence and local optima. Results indicate that the technological innovation space exhibits a distinct multi-peaked and highly rugged structure, and system complexity strongly influences the direction and efficiency of innovation searches. The findings provide a computable analytical tool for innovation forecasting, technology mapping, and intelligence monitoring, offering decision-making insights for complex industrial systems. |
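The NK landscape and local hill-climbing described above can be illustrated with a minimal sketch. The N and K values, random contribution tables, and one-bit-flip search below are illustrative assumptions, not the authors' patent-based implementation:

```python
import random

def make_tables(n, k, rng):
    # One random contribution table per locus, over all 2^(k+1) neighborhood states.
    tables = []
    for _ in range(n):
        t = {}
        for state in range(2 ** (k + 1)):
            bits = tuple((state >> b) & 1 for b in range(k + 1))
            t[bits] = rng.random()
        tables.append(t)
    return tables

def nk_fitness(config, k, tables, n):
    # Mean of per-locus contributions; each locus depends on itself and k neighbors.
    total = 0.0
    for i in range(n):
        neighborhood = tuple(config[(i + j) % n] for j in range(k + 1))
        total += tables[i][neighborhood]
    return total / n

def hill_climb(n, k, tables, start):
    # One-bit-flip local search until no neighbor improves: a local peak.
    current = list(start)
    best = nk_fitness(current, k, tables, n)
    improved = True
    while improved:
        improved = False
        for i in range(n):
            neighbor = current[:]
            neighbor[i] ^= 1
            f = nk_fitness(neighbor, k, tables, n)
            if f > best:
                current, best, improved = neighbor, f, True
                break
    return tuple(current), best

rng = random.Random(42)
n, k = 8, 2
tables = make_tables(n, k, rng)
peak, fitness = hill_climb(n, k, tables, [rng.randint(0, 1) for _ in range(n)])
```

Raising K increases epistatic interactions and ruggedness, so restarts from different configurations land on more distinct peaks, which is the path-dependence effect the talk analyzes.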
| 14:40 | A BERT-Driven Approach for Identifying Patent Risks in Export Control Contexts PRESENTER: Xiaoliang Zhang ABSTRACT. In the context of escalating global technological competition and tightening export control regulations, the efficient identification of controlled technologies within patent documents has become a critical challenge for enterprises engaged in cross-border technology transfer and global R&D collaboration. This study proposes an automated screening framework for quantum technology patent export control based on contrastive learning and semantic similarity calculation. The framework integrates a multi-stage data construction pipeline combining vector-based preliminary filtering, large language model-based relevance verification, and manual annotation to generate a high-quality patent–ECCN (Export Control Classification Number) paired dataset. Two contrastive learning objectives, CosineSimilarityLoss and MultipleNegativesRankingLoss, are used to fine-tune a Sentence-BERT encoder to learn unified semantic representations bridging patent texts and regulatory documents. Experimental results demonstrate that CosineSimilarityLoss significantly outperforms MultipleNegativesRankingLoss, achieving an AUC of 0.747 and a peak F1-score of 0.6467 at an optimal threshold of 0.35, with a Recall of 72.3% and Accuracy of 68.1%. The model exhibits a clear precision-recall trade-off, enabling flexible threshold selection based on application requirements. Although precision remains an area for improvement, the proposed method provides an efficient, interpretable, and scalable solution for preliminary export control screening, substantially reducing manual review workload while maintaining high recall. This study contributes a validated framework for semantic matching between technical and regulatory texts, with implications for intelligent compliance systems and technology risk management. |
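The reported precision-recall trade-off amounts to sweeping a similarity threshold over scored patent-ECCN pairs. A minimal sketch of that evaluation step (the scores and labels below are invented; this is not the authors' code):

```python
def metrics_at_threshold(scores, labels, threshold):
    """Classify a pair as 'controlled' when its similarity >= threshold,
    then compute precision, recall, F1, and accuracy against binary labels."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = (tp + tn) / len(labels)
    return precision, recall, f1, accuracy

def best_f1_threshold(scores, labels):
    # Sweep candidate thresholds over the observed scores to find the F1 peak.
    return max(sorted(set(scores)),
               key=lambda t: metrics_at_threshold(scores, labels, t)[2])
```

Lowering the threshold trades precision for recall, which is how a screening system can favor high recall to cut manual review while tolerating false positives.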
| 15:15 | High-Value Patent Recognition Model: Exploring Data Augmentation and Causal Mechanisms PRESENTER: Wei Cheng ABSTRACT. This study addresses the issue that current high-value patent identification models rely excessively on historical statistical data and lack causal explanatory power, in order to enhance the effectiveness and interpretability of high-value patent identification. To this end, a high-value patent recognition model based on counterfactual data augmentation is proposed. The model constructs heterogeneous graphs containing multiple types of nodes, captures relationships between nodes through an ensemble of graph attention networks (GAT) and graph convolutional networks (GCN), uses adversarial neural networks to generate counterfactual samples that perturb edge features, and combines incremental learning strategies to evaluate patent value and reveal causal mechanisms. The experimental results show that the model achieves accuracy, precision, recall, and F1 score of 84.27%, 84.51%, 85.17%, and 84.84%, respectively, on patent datasets in the field of artificial intelligence, outperforming the baseline models. This validates its effectiveness and provides a new research perspective for high-value patent recognition tasks. |
| 15:35 | Identifying Features and Evolutionary Mechanisms of Sleeping Beauty Patents in Semiconductor Field Based on a Hybrid Vector Method PRESENTER: Ningze Ma ABSTRACT. In technology-intensive fields, the “instant burst” and “delayed recognition” of patent citations constitute two distinct pathways for value realization. To decode the drivers of this divergence, this study constructs a comparative framework based on deep semantic understanding. We employ a multi-dimensional screening algorithm to isolate Sleeping Beauties (SBs) and Early Highly Cited (EHC) patents, utilizing a hybrid vector model—fusing Qwen3 embeddings with TF-IDF features—integrated with BERTopic to map evolutionary paths accurately. Empirical results reveal that SBs emerge on average 3.08 years prior to topic explosion, confirming their nature as "premature" innovations. Organizationally, SBs predominantly originate from equipment manufacturers securing next-generation reserves with broad claims, whereas EHC patents are driven by manufacturing giants addressing immediate bottlenecks. Ultimately, we identify the temporal misalignment between technological supply and the industrial ecosystem as the fundamental mechanism inducing dormancy, offering a micro-semantic perspective on non-linear value realization in high-tech industries. |
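The multi-dimensional screening of Sleeping Beauties is not detailed in this abstract; a toy dormancy-then-burst rule over yearly citation counts conveys the core idea (all thresholds below are invented, not the authors' criteria):

```python
def is_sleeping_beauty(yearly_citations, min_sleep=5, sleep_max=1, burst_min=10):
    """Flag a 'sleeping beauty' citation curve: a dormancy period of at least
    `min_sleep` years with <= `sleep_max` citations per year, followed by a
    year reaching >= `burst_min` citations. Thresholds are illustrative."""
    for awakening, c in enumerate(yearly_citations):
        if c >= burst_min:
            dormant = yearly_citations[:awakening]
            return (len(dormant) >= min_sleep
                    and all(x <= sleep_max for x in dormant))
    return False  # never bursts: neither SB nor early highly cited
```

An Early Highly Cited (EHC) patent, by contrast, would hit the burst threshold with little or no dormancy, so the same scan with `min_sleep=0` and an early `awakening` index separates the two groups.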
| 15:55 | Research on a Method for Identifying High-Value Patents by Integrating Text Semantic Information PRESENTER: Yaqi Nie ABSTRACT. Identifying high-value patents is of great significance for optimizing innovation resource allocation. To address the limitations of existing patent valuation methods—such as highly subjective indicator systems, insufficient utilization of textual features, and lack of feature integration—this study proposes a novel approach that identifies high-value patents by integrating semantic information with quantitative indicators. By constructing a patent value assessment indicator system covering four dimensions (technical, strategic, legal, and economic), combined with ModernBERT text feature extraction and UMAP dimensionality reduction, this study integrates quantitative patent indicators with textual semantic features and compares various traditional machine learning and deep learning models. The results show that deep learning models perform better, with the proposed CNN-BiLSTM fusion model significantly outperforming traditional methods in terms of accuracy, F1 score, and AUC. Ablation experiments further validate the effectiveness of the multi-feature fusion strategy, revealing the differentiated contributions of quantitative indicators and textual features to model performance. This study provides a scientific and effective method for high-value patent identification, which contributes to improving patent commercialization efficiency and to aligning technological innovation with economic benefit. |
| 16:15 | Research on Value Assessment and Recommendation of Lapsed Patents Based on Heterogeneous Graph Attention Network PRESENTER: Kai Song ABSTRACT. In the rapidly evolving field of smart elderly care health monitoring, a large number of patents lapse due to non-technical factors. However, the technical schemes contained within these patents still possess potential value for reuse. Traditional patent valuation methods often lack applicability to lapsed patents, and existing recommendation methods struggle to integrate patent value with enterprise technical preferences. To address these issues, this paper proposes a method for the quality assessment and precise recommendation of lapsed patents based on a Heterogeneous Graph Attention Network. This study focuses on the domain of smart elderly care health monitoring. It constructs a structured patent dataset and achieves technical topic identification based on deep semantic representation and topic modeling. On this basis, the study introduces indices for technological mutation and technological outlierness from the complementary perspectives of technological evolution and distribution. These indices are used to quantitatively assess the intrinsic technical value of lapsed patents, screening them to form a candidate set of high-value lapsed patents. Furthermore, the paper integrates enterprises, patents, and technical topics into a unified modeling framework to construct a Value-Aware Heterogeneous Graph Attention Network (VA-HGAT). A weighted loss mechanism is employed to guide the model to focus on high-value patent features, enabling the effective learning of technical matching scores between enterprises and lapsed patents. In the recommendation stage, a multi-factor fusion recommendation score is constructed by combining the technical matching degree, intrinsic patent value, and time decay factors. The recommendation results are then structurally optimized through technical topic quadrant analysis. Case study results indicate that this method can stably identify high-value lapsed patents that are highly compatible with the target enterprise's technological layout. The recommendation results demonstrate strong performance in terms of both technical relevance and value rationality. This research provides an interpretable and scalable methodological path for the systematic mining and precise allocation of lapsed patents as low-cost technical resources, offering valuable reference for enterprise technology decision-making and technology transfer practices. |
| 15:15 | Expected and Optimal Shares of International Co-publications PRESENTER: Rainer Frietsch ABSTRACT. International collaboration in science and innovation is crucial for scientific progress and the efficient use of scientific knowledge. Studies indicate that internationally co-authored papers are cited more frequently than national or non-collaborative publications. This phenomenon can be attributed to a larger readership and the higher scientific relevance of international projects. The distribution of international co-publications varies significantly among countries: larger nations tend to have lower shares, while smaller countries often achieve higher proportions. This discrepancy arises because larger countries are more likely to find national partners with complementary competencies, whereas smaller nations typically maintain more specialized science and innovation systems. Recent evidence, however, shows that the US has a lower share of international co-publications compared to many other countries. This study aims to investigate the expected and optimal levels of international co-publications for various countries, based on their size, profile, and research orientation. It assesses how effectively countries utilize their scientific resources to achieve visibility and excellence. Three methodological approaches are employed: a fixed-effects panel regression model to estimate expected co-publications, a shift-share analysis to differentiate global and national trends, and a Data Envelopment Analysis (DEA) to evaluate the efficiency of countries in using their resources for scientific output. Results from the regression analysis indicate that despite controlling for size, excellence, and research orientation, countries such as the US, China, South Korea, and Japan exhibit low levels of international co-publication activity, whereas countries like Singapore, Austria, France, and the UK are highly engaged in international collaboration. 
The shift-share analysis reveals negative trends for the US even after accounting for global effects. Finally, the DEA assesses the efficiency of research systems using internationally comparable indicators, such as the number of researchers and R&D expenditures. This part of the analysis is still in progress, and results are not yet available. Overall, the paper aims to enhance the understanding of the dynamics of international scientific collaboration and evaluate how different countries optimize their resource utilization to maximize scientific output performance. |
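Of the three methods, the shift-share decomposition is simple enough to sketch. Assuming a standard three-component formulation (world-growth, field-mix, and competitive effects; the field names and counts below are invented, not the study's data):

```python
def shift_share(base, current, base_world, current_world):
    """Decompose a country's change in co-publication counts per field into a
    national share effect (world growth), a field-mix effect, and a
    competitive (country-specific) effect. `base`/`current` map field ->
    country counts; `*_world` map field -> world counts for the same periods."""
    g_world = sum(current_world.values()) / sum(base_world.values()) - 1.0
    national = mix = competitive = 0.0
    for field, b in base.items():
        g_field = current_world[field] / base_world[field] - 1.0
        g_country = current[field] / b - 1.0
        national += b * g_world                  # what world growth alone implies
        mix += b * (g_field - g_world)           # field portfolio advantage
        competitive += b * (g_country - g_field) # country over/under-performance
    return national, mix, competitive
```

The three components sum exactly to the country's total change, so a negative competitive term (as reported here for the US) means underperformance even after the global trend is accounted for.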
| 15:35 | Tracing the Interactions between Academic, Technological and Policy Impacts of Research Outputs PRESENTER: Beibei Sun ABSTRACT. Research impact is increasingly recognized as a multidimensional phenomenon encompassing academic, technological, and policy dimensions, yet their dynamic interrelationships remain insufficiently explored. This study examines the interactions among these three types of impact using longitudinal data from the bioinformatics field. Based on 3,921 publications that generated academic, technological, and policy impacts, we construct an unbalanced panel dataset linking annual paper citations, patent citations, and policy document citations from 1991 to 2019. A Panel Vector Autoregression (PVAR) model estimated via system Generalized Method of Moments is employed, complemented by Granger causality tests, impulse response analysis, and variance decomposition. The results reveal strong self-reinforcing effects within each impact dimension and a bidirectional causal relationship between academic and policy impacts, indicating mutual reinforcement over time. In contrast, technological impact follows a largely independent trajectory. These findings highlight the importance of interaction-oriented approaches to research impact assessment. |
| 15:55 | Research on the Diffusion Mechanism of Core Innovative Ideas within a Domain from a Technology Convergence Perspective ABSTRACT. Technology convergence is a key driving force for promoting breakthrough innovations and discovering technological opportunities. Based on the core innovation ideas within a field, this study identifies potential technology convergence opportunities and constructs an analysis framework that integrates multi-source information, combining core innovation ideas with sub-domains, IPC classifications, and text features. On this basis, a technology integration prediction method based on graph convolutional networks and bidirectional long short-term memory networks provides a new perspective for related research. The model is verified using patent data from the medical robot field. The results show that the development of this field is highly dependent on the deep integration of robot technology, computer network control technology, medical surgical technology, and artificial intelligence technology, demonstrating a significant interdisciplinary technology convergence diffusion mechanism. This study fills a gap in existing research, which has focused on cross-domain technology convergence networks while neglecting convergence mechanisms within a domain, and provides theoretical references for innovation practice. |
| 16:15 | Empirical Orthogonal Function Based Analysis of Domain Knowledge Collaborative Evolution: Revealing Knowledge Teleconnection Phenomena PRESENTER: Kaiwen Shi ABSTRACT. Against the backdrop of scientific and technological development increasingly relying on knowledge linkage and restructuring, traditional science and technology management paradigms based on proximity assumptions struggle to reveal the cross-disciplinary, non-adjacent knowledge interaction mechanisms underlying major original innovations. They particularly overlook the implicit, long-range, and time-lagged co-evolutionary patterns within knowledge ecosystems. To address this, this study introduces the concept of knowledge teleconnection, aiming to develop a domain-based coevolutionary analysis method grounded in empirical orthogonal function (EOF) techniques. This approach seeks to detect and interpret synergistic relationships among knowledge units that are semantically distant within disciplinary spaces yet exhibit statistically correlated developmental trajectories. Using artificial intelligence as a case study, this research analyzes literature data from Web of Science spanning 2000–2024. By extracting key spatio-temporal modes through empirical orthogonal function analysis, it identifies distant correlation pairs within the field—such as between deep learning and traditional heuristic algorithms, or large language models and conventional machine learning—and constructs a distant correlation network to reveal AI's knowledge co-evolutionary structure. The findings not only theoretically expand analytical paradigms for knowledge co-evolution, offering new perspectives on understanding complex dynamic behaviors in innovation systems, but also provide actionable data-driven support for forward-looking science and technology management planning, interdisciplinary cultivation, and disruptive innovation anticipation. 
This advances the paradigm shift from static knowledge structure mapping to dynamic knowledge system comprehension. |
| 15:15 | Configurational Effects of Technology Convergence on Industrial Innovation from a Multidimensional Theoretical Perspective: A Tree-Model-Based Fuzzy-Set Qualitative Comparative Analysis PRESENTER: Jiapeng Han ABSTRACT. In the context of accelerating global technological and industrial transformation, this study aims to systematically reveal the multifaceted mechanisms through which technology convergence drives industrial innovation. From the perspective of technology convergence, this study develops a set of conditions shaping industrial innovation, drawing on three theoretical lenses: micro-level technology recombination theory, meso-level co-evolution theory, and macro-level convergence chain theory. Following the innovation ecosystem framework, the analysis first employs the SHAP (SHapley Additive exPlanations) model to identify core variables. Subsequently, fsQCA (fuzzy-set Qualitative Comparative Analysis) is applied to derive configuration pathways, and the XGBoost (eXtreme Gradient Boosting) model is used to construct decision-tree pathways incorporating all variables. Finally, the two approaches are integrated to identify specific patterns through which technology convergence promotes industrial innovation. An empirical analysis of the biopharmaceutical industry demonstrates that the configuration pathways derived from fsQCA are largely consistent with the decision-tree results produced by XGBoost. Four pathways are identified as enhancing industrial innovation performance: institutional coordination, endogenous network evolution, industry-driven innovation, and demand-oriented convergence. These findings indicate that industrial innovation is jointly driven by technology–industry coupling as the dominant factor, collaboration among innovation actors under varying external conditions, and the combined effects of structural and diffusion convergence within and across networks. |
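The fsQCA step rests on two standard set-theoretic measures, consistency and coverage, over calibrated membership scores. A minimal sketch assuming the usual fuzzy-set definitions (the membership values below are invented):

```python
def consistency(condition, outcome):
    """fsQCA consistency: sum(min(X_i, Y_i)) / sum(X_i).
    High values mean the condition is (nearly) a subset of the outcome."""
    num = sum(min(x, y) for x, y in zip(condition, outcome))
    den = sum(condition)
    return num / den if den else 0.0

def coverage(condition, outcome):
    """fsQCA coverage: sum(min(X_i, Y_i)) / sum(Y_i).
    High values mean the condition accounts for much of the outcome."""
    num = sum(min(x, y) for x, y in zip(condition, outcome))
    den = sum(outcome)
    return num / den if den else 0.0
```

A configuration of several conditions is itself a fuzzy set, obtained case-by-case as the minimum of its member conditions' scores, and is then assessed with the same two measures against a consistency cutoff (commonly around 0.8).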
| 15:35 | Technology fusion forecasting via temporal hypergraph link prediction, with application in cybersecurity PRESENTER: Wei Hu ABSTRACT. Technology fusion is a primary driver of modern innovation. Forecasting its emergence is strategically vital for identifying nascent technological opportunities and anticipating industrial transformations. However, capturing the higher-order dependencies and complex temporal evolution inherent in this process—which fundamentally differ from dyadic combinations—presents a unique challenge. To address this challenge, we construct a framework based on temporal hypergraph link prediction (THLP). Fine-grained technology domains are modeled as nodes, and their fusion events—the co-occurrence of multiple technologies in a single patent—are modeled as time-evolving hyperedges. We systematically design and extract a multi-dimensional feature framework, and a temporal hypergraph deep neural network (THG-DNN) model is then proposed to predict the formation of true and potential future fusions. We validate our framework against multiple baselines using patent data from cybersecurity, a critical and high-frequency area of emerging technology fusion. The superior predictive performance confirms the necessity of modeling higher-order dependencies within technology combinations. Beyond prediction, our framework reveals stable key drivers of technology fusion extracted from the hypergraph, including strong evidence for path dependency and cognitive proximity. It also uncovers U-shaped effects of recombination mode and inverted U-shaped effects of cooperation network structure, providing deeper insights into the fusion process. This research offers a powerful analytical tool for managers, policymakers, and investors to identify emerging technological opportunities and formulate effective and verifiable innovation strategies. |
| 15:55 | Analyzing the Landscape of Technology Convergence in AI4S Based on a "Technology-Scenario" Two-Mode Network PRESENTER: Jiaze Wang ABSTRACT. The field of AI for Science (AI4S) has garnered increasing global attention, making the analysis of its technological landscape crucial for strategic decision-making and innovation. A distinguishing feature of AI4S is the deep convergence between general AI technologies and specific scientific scenarios. Traditional single-mode analysis often fails to capture this dualistic coupling. To address this, this paper proposes a novel framework constructing a "Technology-Scenario" Two-Mode Network to systematically analyze the landscape of technology convergence in AI4S. Methodologically, the study first employs Large Language Models (LLMs) to identify AI4S patents from the database and extract fine-grained technical entities and application scenarios from titles and abstracts of patents, constructing a high-quality dataset. Subsequently, a quantitative evaluation framework is established to address three core objectives: (1) investigating the differential adaptation of diverse AI technologies to distinct scientific problems; (2) identifying the critical nexus bridging heterogeneous disciplines amidst diverse technological convergence; and (3) characterizing the dominant fusion paradigms emerging within the AI4S domain through Louvain community detection algorithms, which cluster tightly coupled technology-scenario combinations into functional patterns. The empirical results reveal a hierarchical "multi-path" convergence pattern in the current AI4S landscape. On one hand, general technologies such as GNN and CNN serve as broad infrastructure, enabling diverse disciplines like biomedicine and materials science. On the other hand, specialized technologies incorporating domain knowledge, such as Physics-Informed Neural Networks (PINN), demonstrate high adaptation in solving distinct hard problems in physics and mathematics. 
These findings provide a granular view of how AI technologies differentially penetrate various scientific paradigms. This study offers a new methodological tool for visualizing the complex interactions in cross-disciplinary innovation. It provides empirical evidence for research institutions and enterprises to identify high-potential "Technology-Scenario" combinations and optimize their R&D layouts. Future work will extend this framework to include dynamic trend analysis, further elucidating the evolutionary trajectory of AI4S. |
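The "Technology-Scenario" two-mode network described above can be sketched as a bipartite edge list plus a one-mode projection; the entity names below are illustrative, and the authors' LLM extraction and Louvain clustering steps are not reproduced:

```python
from collections import defaultdict
from itertools import combinations

def build_two_mode(patents):
    """patents: iterable of (technologies, scenarios) pairs, one per patent.
    Returns bipartite edge weights {(tech, scenario): co-occurrence count}."""
    edges = defaultdict(int)
    for techs, scenarios in patents:
        for t in techs:
            for s in scenarios:
                edges[(t, s)] += 1
    return dict(edges)

def project_technologies(edges):
    """One-mode projection: two technologies are linked by the number of
    scenarios they both serve, a crude proxy for convergence breadth."""
    scen_by_tech = defaultdict(set)
    for (t, s), w in edges.items():
        if w > 0:
            scen_by_tech[t].add(s)
    proj = {}
    for a, b in combinations(sorted(scen_by_tech), 2):
        shared = len(scen_by_tech[a] & scen_by_tech[b])
        if shared:
            proj[(a, b)] = shared
    return proj
```

On the projected graph, high-degree nodes correspond to broad "infrastructure" technologies like GNN and CNN, while nodes linked to few, specific scenarios correspond to specialized technologies like PINN.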
| 16:15 | Pattern Identification and Trajectory Characterization of Technology Convergence from a Dynamic Evolutionary Perspective PRESENTER: Yifei Yu ABSTRACT. Technology convergence is widely recognized as a key mechanism of technological innovation, yet existing studies often rely on static indicators and macro-level classifications, overlooking the dynamic evolution and heterogeneity of specific technology combinations. This study develops a dynamic, two-dimensional framework to characterize technology convergence from the perspectives of structural tightness and outcome balance. Based on patent co-classification data at the IPC main group level, we construct technology combinations as micro-level units of convergence. We measure structural tightness using co-occurrence intensity and structural adhesion, and assess outcome balance through semantic balance and citation-based knowledge inflows. To trace temporal dynamics, we incorporate a time-decay weighted index and apply polynomial trend fitting to identify convergence trajectories. An empirical analysis of additive manufacturing patents from 1986 to 2024 reveals substantial heterogeneity in convergence dynamics and identifies four convergence types: Loose–Balanced, Tight–Balanced, Loose–Unbalanced, and Tight–Unbalanced. Overall, this study provides a fine-grained perspective on technology convergence and offers methodological support for technology foresight and innovation policy. |
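The time-decay weighted index mentioned above is not specified in the abstract; one common choice is exponential decay with a half-life (the 5-year half-life and the event counts below are assumptions for illustration):

```python
def decayed_cooccurrence(events, t_now, half_life=5.0):
    """Time-decay weighted co-occurrence for one technology combination.
    Each (year, count) event contributes count * 0.5 ** ((t_now - year) / half_life),
    so recent convergence activity counts more than older activity."""
    return sum(c * 0.5 ** ((t_now - year) / half_life) for year, c in events)

# Example: 4 co-occurrences this year plus 4 exactly one half-life ago.
score = decayed_cooccurrence([(2024, 4), (2019, 4)], t_now=2024)
```

Computing this index per period for each IPC-main-group pair yields the time series to which the polynomial trend fitting described in the abstract could then be applied.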
| 16:40 | Research on Potential Disruptive Technology Measurement Based on Multi-Source Data: Taking the Field of Artificial Intelligence as an Example PRESENTER: Wenfang Tian ABSTRACT. Disruptive technology research and development has become a crucial starting point for technological innovation and is among a country's major strategic tasks. Accurately identifying and measuring potential disruptive technologies is of great significance for countries, enterprises, and universities seeking to seize technological development opportunities. To achieve this, we start by identifying a list of potentially disruptive technologies through expert analysis, utilizing improved Support Vector Machine (SVM) algorithms and a Latent Dirichlet Allocation (LDA) model. We then construct an indicator system for measuring disruptive technology across four dimensions. Finally, we validate the effectiveness and applicability of our model in the field of artificial intelligence. The systematic fusion model we propose for measuring potential disruptive technologies offers distinct advantages in identifying such technologies, and the research path from disruptive technology "identification" to "measurement" enables a more precise discovery of disruptive technologies within the subject area. |
| 16:53 | Identification of Opportunities and Risks for Technological Innovation Based on Explainable Artificial Intelligence Model PRESENTER: Yingqi Xu ABSTRACT. This article discusses an approach that employs an explainable artificial intelligence (XAI) framework to dissect opportunities and risks for technological innovation, that is, the patentability of technological topics. As samples for analysis, 10,579 patents related to lithium-ion battery research filed from 2021 to 2023 are collected. The Derwent titles of the patents are processed by a biterm topic model (BTM) to accurately extract technology topics from short texts and to reduce the dimensions of the classification models' inputs. Ten distinct types of machine learning and deep learning algorithms are deployed to categorize the patent documents. The main driving features, that is, the technology topics crucial for authorization or rejection of patent applications, are derived from the model with the best classification outcomes using Shapley additive explanations (SHAP). Opportunities for technological innovation are defined as the main driving features for authorization, while risks are defined as the main driving features for rejection. Several opportunities and risks for technological innovation in the field of lithium-ion batteries are discovered by the proposed approach. |
| 17:06 | Emerging Topics Detection Based on the Multilayer Semantic Network: an “Issues-Solutions-Effects” framework PRESENTER: Jingqian Gong ABSTRACT. Emerging topics detection plays a significant role in various research fields in the era of rapidly evolving innovation. However, traditional methods like co-word and co-citation analysis often lack content-level granularity and interpretability. To overcome these limitations, this study extracts Subject-Action-Object (SAO) triples from article abstracts and constructs a multilayer semantic network incorporating an Issues-Solutions-Effects framework. This study extracts multidimensional features and measurement indicators of multilayer semantic networks from macro, meso, and micro perspectives, and employs the Analytic Hierarchy Process (AHP) to integrate multiple indicators, thereby enabling the identification of emerging themes. Empirical validation on temperature and tactile-receptor research shows that the index effectively traces topical evolution: from 1995 to 2003, the focus was on clinical aspects of cardiovascular and neurological disorders; during 1995–2009, attention shifted to neuroanatomical fundamentals; after 2015, studies on cold hypersensitivity gained significant attention, correlating with the 2021 Nobel Prize. In conclusion, the emerging topic identification method based on multilayer semantic networks proposed in this study can promptly detect emerging topics and their semantic relationships, thereby deepening our understanding of approaches for identifying emerging themes. |
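The AHP step combines the macro/meso/micro indicators using weights derived from a pairwise comparison matrix. A minimal sketch using the row geometric-mean approximation (the comparison values below are invented, not the study's judgments):

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights from a reciprocal pairwise comparison
    matrix (entry [i][j] says how much more important criterion i is than j),
    using the row geometric-mean method and normalizing to sum to 1."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Two criteria, the first judged 3x as important as the second -> weights 0.75 / 0.25.
w2 = ahp_weights([[1, 3], [1 / 3, 1]])
```

The emergence score of a topic is then the weighted sum of its normalized indicator values under these weights; a full AHP implementation would also check the consistency ratio of the matrix.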
| 17:19 | Variational Graph Auto-Encoders-based Technology Opportunity Analysis with Fusion Technological Map: Evidence in Chinese Digital Publishing PRESENTER: Jing Xu ABSTRACT. Existing Technology Opportunity Analysis (TOA) methods often rely solely on retrospective patent data, which lack foresight, while similarity-based and machine-learning-based approaches tend to overlook global structural information. To address these limitations, this study proposes an innovative TOA framework that integrates patent and literature data with advanced graph-based link prediction. We combine Generative Topographic Mapping (GTM) for technology opportunity identification, using both patent and literature data, with a Variational Graph Auto-Encoder (GVAE) for link prediction to evaluate technology opportunities. The framework was empirically tested on China's digital publishing dataset (2005–2022), comprising 1,942 literature records and 4,708 patents. The GVAE outperformed 14 baselines, achieving 0.868 precision, 0.539 recall, 0.665 F1-score, and 0.806 AUC. It further evaluated 63 technological opportunities identified by GTM and predicted their practical development probability, which ranged from 0.666 to 0.892. Integrating literature with patents and applying GVAE-based link prediction enhances foresight and prediction accuracy, offering robust guidance for technology management. |
| 17:32 | Technology opportunity identification for idea generation by integrating generative topographic mapping and mining the gaps between science and technology ABSTRACT. Identifying technology opportunities for idea generation remains a central challenge in technology opportunity analysis (TOA). Existing patent mapping–based approaches can reveal patent vacuums, understood as data-level vacant regions identified through patent data modeling within patent landscapes, but often lack mechanisms to further transform such model outputs into information that is both interpretable and useful for innovation decision-making. To address this issue, this study proposes an idea-generation-oriented framework for technology opportunity identification. Starting from patent vacuums, the framework employs a multi-level transformation process to progressively convert model-level patent vacuum signals into technology gap points with explicit technological semantics, and further distills them into interpretable technology opportunity units. Methodologically, this study uses Generative Topographic Mapping (GTM) to identify patent vacuums in the patent system, and combines SAOX semantic structures enhanced by a technical keyword dictionary with hierarchical topic analysis to conduct semantic enrichment, scientific theme mapping, and technical problem extraction from gap information. In doing so, the framework provides the necessary scientific knowledge background for interpreting technology gap points and ultimately generates technology opportunity units that can assist experts in idea generation. Using scientific publications and patents in the field of 3D bioprinting as the empirical context, this study conducts an empirical analysis of the proposed technology opportunity identification framework. 
The results show that the approach can effectively transform dispersed and implicit gap information into structured technical descriptions and, without prespecifying concrete technological solutions, significantly enhance the expressive power of patent vacuum analysis in technology opportunity interpretation and idea generation. |
| 17:45 | Early discovering technology application opportunities in science and technology interactions by integrating weak signal analysis with SAO semantic bi-layer network PRESENTER: Yaochen Xin ABSTRACT. Discovering technology application opportunities (TAOs) is crucial for technological development and innovation. Existing studies predominantly discover new TAOs based on the existing technological knowledge units extracted from single patent data. However, these approaches overlook the intrinsic interactions between science and technology (S&T) knowledge and suffer a substantial time lag in discovering TAOs early. To address this gap, we propose a novel framework for discovering TAOs. First, an SAO semantic bi-layer network is constructed, where S (subject) nodes are drawn from papers, AO (action-object) nodes from patents, and inter-layer “AO-S” links are built through patent-to-paper mapping to characterize the “problem-solution” relationships of S&T. Second, an association similarity strategy is proposed to identify weakly associated subfields, which capture the science gaps and the TAOs from the perspective of network topology. Third, technical problem signals represented by weak-signal AO nodes are defined to capture the early features of TAOs. Fourth, predicted “AO-S” links, which represent candidate TAOs, are identified using a similarity-based link prediction method. Finally, based on the identified weakly associated subfields, the retained “AO-S” links are regarded as the final TAOs, which provides forward-looking support for technological innovation and applications. Empirical studies of two biomedical datasets validate the research framework. Overall, this study enriches the methodological foundation for early TAOs discovery and provides forward-looking support for technological innovation and application exploration. |
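One common form of similarity-based link prediction on such a bi-layer network can be sketched as follows: a candidate “AO-S” pair is scored by the Jaccard similarity between the target S node's AO neighborhood and that of any S node already linked to the AO. This is only a plausible sketch of the general technique the abstract names; the node labels and links are invented.

```python
# Jaccard-based scoring of candidate "AO-S" links on a toy
# bi-layer network (illustrative, not the authors' method).

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

# Bipartite layer links: S (paper subject) node -> set of AO nodes.
links = {
    "S1": {"AO1", "AO2", "AO3"},
    "S2": {"AO2", "AO3"},
    "S3": {"AO4"},
}

def candidate_scores(links):
    """Score each absent AO-S pair by the best Jaccard similarity
    between the target S node and any S node already linked to the AO."""
    all_ao = set().union(*links.values())
    scores = {}
    for s, aos in links.items():
        for ao in all_ao - aos:
            sims = (jaccard(aos, other)
                    for t, other in links.items()
                    if t != s and ao in other)
            scores[(ao, s)] = max(sims, default=0.0)
    return scores

scores = candidate_scores(links)
```

Here the candidate link (AO1, S2) scores highly because S2's neighborhood closely overlaps that of S1, which already links to AO1.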
| 17:58 | Cross-Lingual Semantic Bridging: Optimization Paths for Chinese-Russian Patent Translation by Large Language Models PRESENTER: Pengxuan Zhang ABSTRACT. With the rapid development of large language models (LLMs), their application in professional text translation has attracted increasing attention. Patent translation plays a critical role in Sino-Russian intellectual property protection, yet remains challenging due to dense terminology, complex syntactic structures, and significant linguistic divergence between Chinese and Russian. This study investigates Chinese–Russian patent translation using Qwen3-8B and Meta-Llama-3-8B-Instruct as baseline models. A trilingual (Chinese–English–Russian) patent corpus is constructed, and LoRA-based instruction fine-tuning is applied. Two translation paths are compared: direct Chinese–Russian translation and English-mediated translation. Experimental results show that domain-specific fine-tuning significantly improves translation quality under both BLEU and chrF metrics. Moreover, introducing English as an intermediate language consistently outperforms direct translation, particularly in terminology consistency and syntactic standardization. The results demonstrate that English mediation provides an effective semantic bridge between Chinese and Russian in patent translation tasks. This study integrates intermediate-language translation with LLM fine-tuning and offers a practical paradigm for multilingual technical translation. |
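For readers unfamiliar with the chrF metric used above, a simplified character n-gram F-score can be sketched as follows. This is an assumption-laden toy version: the real chrF averages over higher n, handles whitespace and word n-grams differently, and is available in the sacreBLEU package.

```python
# Toy chrF-style score: average character n-gram precision and recall
# combined into an F-beta score (simplified from the real metric).
from collections import Counter

def char_ngrams(text, n):
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf_like(hypothesis, reference, max_n=3, beta=2.0):
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        h, r = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        overlap = sum((h & r).values())  # clipped n-gram matches
        precisions.append(overlap / max(sum(h.values()), 1))
        recalls.append(overlap / max(sum(r.values()), 1))
    p = sum(precisions) / max_n
    r = sum(recalls) / max_n
    if p + r == 0:
        return 0.0
    # beta > 1 weights recall more heavily, as in chrF (beta = 2).
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)
```

A perfect hypothesis scores 1.0 and a fully disjoint one scores 0.0, which is the behavior the fine-tuning comparison relies on.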
| 16:40 | Scientific Collaborator Recommendation Integrating Academic Potential, Collaborative Potential, and Research Interests PRESENTER: Lin Zhu ABSTRACT. Recommending scientific collaborators with high academic potential, strong collaborative potential, and targeted relevance can facilitate knowledge sharing, integrate innovative resources, and enhance research quality and efficiency. Existing studies often neglect scholars' academic potential, which limits the full realization of researchers' capabilities during collaboration and impedes the development of high-quality scientific cooperation. To address these limitations, this study proposes a hybrid scientific collaborator recommendation method (HSCR) that integrates multiple dimensions—academic potential, collaborative potential, and research interests—enabling the simultaneous measurement of recommended collaborators' academic potential and the prediction of collaborative potential while ensuring targeted recommendations. An empirical study was conducted in the hydrogen fuel cell domain to validate the proposed method. The results demonstrate that the HSCR method effectively identifies potential collaborators characterized by high academic potential, strong collaborative compatibility, and targeted relevance, thereby better accommodating personalized needs in scientific collaborator recommendation. |
| 16:53 | Are Large Language Models Reliable Reviewer Assistants? A Three-Dimensional Evaluation on Real Conference Submissions PRESENTER: Qian Tang ABSTRACT. Large language models (LLMs) are increasingly used as reviewer assistants, yet their reliability in real peer-review settings remains insufficiently characterized. Using publicly available OpenReview records from ICLR 2017–2019, we construct a stratified benchmark (N=280 submissions) of real submissions and review artifacts for evaluating LLM-assisted reviewing. We define a protocolized task with fixed inputs (Title/Abstract/Keywords) to reflect a triage setting and standardized outputs (recommendation, overall score, self-reported confidence, and rationale), enabling controlled cross-model comparisons. We introduce a three-dimensional evaluation that separates validity, reliability, and robustness, measuring alignment with human outcomes and ranking signals, stability across repeated runs, and drift under information-preserving perturbations. LLMs provide informative decision and ranking signals, but stability is uneven: instability concentrates near the decision boundary, with elevated flip risk, and confidence is only partially predictive of unstable cases. Variability arises not only from run-to-run stochasticity but also from subtle, information-preserving changes in input presentation and instruction framing. We therefore position LLM outputs as calibratable process signals for triage and targeted human review rather than substitutes for final acceptance decisions and discuss implications for human–AI reviewing workflows that balance efficiency, quality, and accountability. |
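The run-to-run stability analysis described above can be illustrated with a small helper: repeat the LLM recommendation several times per submission and measure the flip rate, i.e. the fraction of runs disagreeing with the modal decision. The run outputs below are fabricated for illustration; the paper's protocol also covers perturbation-induced drift, which this sketch omits.

```python
# Flip rate across repeated runs: share of runs that disagree
# with the majority recommendation. Toy run outputs only.
from collections import Counter

def flip_rate(runs):
    """Fraction of repeated runs disagreeing with the modal decision."""
    _, count = Counter(runs).most_common(1)[0]
    return 1.0 - count / len(runs)

# A borderline submission flips often; a clear case does not.
borderline = ["accept", "reject", "accept", "reject", "accept"]
clear_case = ["reject"] * 5
```

Instability concentrating near the decision boundary shows up as high flip rates for borderline cases and near-zero rates for clear ones.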
| 17:06 | Generative AI Skills as Human Capital: An Empirical Study on Its Interplay with Scientific Collaboration Networks PRESENTER: Houda Adan ABSTRACT. Generative artificial intelligence (GenAI) has been increasingly penetrating the scientific community, emerging as a valuable asset in a researcher’s individual human capital. From early pioneers to late adopters, researchers have adopted it at different stages of scientific research. Drawing on the Theory of Scientific and Technical Human Capital (STHC), we argue that a researcher’s social capital influences the adoption of GenAI, which in turn reshapes the researcher’s subsequent collaboration. To test this relationship, we propose an LLM-assisted approach to identify a cohort of domain scientists who first adopted GenAI in their research between 2021 and 2023, and we then track their research collaborations before and after adoption. We measure generative AI skills by first-use and re-use, while social capital is measured by multi-dimensional collaboration metrics: collaborative network centralities, average citations per collaboration, institutional diversity, and the number of prior co-authors who have already published an AI paper. First, we employ panel logistic regression to test the effect of social capital on GenAI adoption. Second, we estimate a dynamic treatment-effect DiD model and a two-way fixed-effects model to test the effects of GenAI first-use and re-use, respectively, on researchers’ social capital development. Data on researchers and their publications are drawn from OpenAlex, and we focus on the fields of molecular biology and mechanical engineering to capture domains with differing levels of modularity. The results of this empirical study are expected to demonstrate the interplay between researchers’ human capital and social capital in the context of GenAI adoption, thereby offering actionable insights for researchers’ career strategies and institutional policies aimed at fostering AI4Science. |
| 17:19 | A Dynamic Heterogeneous Graph Learning Framework for Scientific Collaboration Recommendation PRESENTER: Xiaoyu Liu ABSTRACT. Effective scientific collaborator recommendation is crucial for fostering academic partnerships, accelerating knowledge dissemination, and promoting interdisciplinary research. This study tackles the core challenge of capturing dynamic evolution patterns in scholarly collaboration networks. Scholarly collaboration networks are inherently heterogeneous, comprising multiple types of nodes and edges, while their topological structures evolve dynamically over time due to shifting research interests, emerging trends, and evolving collaboration patterns. Existing methods primarily focus on static homogeneous networks, which struggle to address the temporal evolution and multi-source heterogeneity inherent in dynamic heterogeneous networks, particularly failing to adapt to the dynamic nature of academic ecosystems where researchers' expertise evolves, new collaborations form, and existing partnerships strengthen or dissolve over time. To resolve this issue, we propose DynHGN, a novel dynamic heterogeneous network embedding model. DynHGN integrates hierarchical attention mechanisms to learn heterogeneous information and combines recurrent neural networks (RNNs) with temporal attention mechanisms to capture dynamic evolution patterns in collaboration networks. This dynamic modeling capability addresses the critical need for time-sensitive recommendations that reflect researchers' evolving expertise and current collaboration trends, thereby significantly improving recommendation precision. Experimental results demonstrate that DynHGN achieves improvements of 2.17%, 2.99%, and 3.2% in F1, MRR, and nDCG metrics, respectively, compared to the best baseline methods. Our model provides a novel framework for dynamic academic network analysis and can be extended to applications such as social networks and knowledge graphs. |
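The ranking metrics reported above (MRR, nDCG) are standard and can be sketched directly; the implementations below use binary relevance, and the query/ranking data are invented for illustration.

```python
# Minimal MRR and nDCG (binary relevance) for recommendation evaluation.
import math

def mrr(rankings, relevant):
    """Mean reciprocal rank of the first relevant item per query."""
    total = 0.0
    for qid, ranking in rankings.items():
        for i, item in enumerate(ranking, start=1):
            if item in relevant[qid]:
                total += 1.0 / i
                break
    return total / len(rankings)

def ndcg(ranking, relevant, k=None):
    """nDCG@k with binary relevance (gain 1 for relevant items)."""
    k = k or len(ranking)
    dcg = sum(1.0 / math.log2(i + 1)
              for i, item in enumerate(ranking[:k], start=1)
              if item in relevant)
    ideal = sum(1.0 / math.log2(i + 1)
                for i in range(1, min(k, len(relevant)) + 1))
    return dcg / ideal if ideal else 0.0

rankings = {"q1": ["b", "a", "c"], "q2": ["a", "b", "c"]}
relevant = {"q1": {"a"}, "q2": {"a"}}
```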
| 17:32 | Early Identifying of Firm-year Technological Breakthroughs in LLM Firms: A Learning-Enhanced Knowledge–Technology–Industry Coupling Framework PRESENTER: Ding Ma ABSTRACT. Early identification of technological breakthroughs is crucial for monitoring future industries, screening innovation actors, and supporting resource-allocation decisions. Existing studies mainly identify breakthrough inventions at the patent level or rely on one-dimensional proxies such as highly cited patents and breakthrough patent counts, which are insufficient for determining whether a firm has entered a substantive breakthrough state in a given year. Focusing on firms in the large language model industry, this study formulates breakthrough identification as a firm-year event prediction problem and proposes a learning-enhanced framework grounded in knowledge–technology–industry coupling. We integrate patent mining with multiple industrialization-related signals to construct firm-year representations of knowledge novelty and influence, structural technological salience, industry realization, and cross-layer coupling. Based on these multi-source features, the framework first generates high-confidence breakthrough candidates and then applies a learning-based identification model to improve event discrimination, candidate ranking, and early detection of breakthrough timing. Empirically, we draw on a multi-source technology-mining dataset consisting of 9,828 raw patent-related records, 7,222 mapped patents, 5,722 software registrations, 4,325 bidding records, 1,680 certification records, and 949 honor or platform-recognition records, which are consolidated into a 712-observation firm-year panel over 2014–2025. Our model outperforms rule-only, patent-count, citation-based, and single-layer baselines in firm-year breakthrough-event prediction, yielding better AUC, precision, and recall, as well as stronger one-year-ahead detection of breakthrough timing. 
By shifting the analytical focus from breakthrough patents to breakthrough years and embedding multi-source signals into a learning-enhanced identification framework, this study provides an operational tool for detecting potential breakthrough firms and identifying the timing of their breakthrough emergence. |
| 17:45 | Mapping the Science of Science: Topic Evolution and Knowledge Flows across Six Subfields PRESENTER: Yufan Xiao ABSTRACT. The science of science has emerged as a major interdisciplinary field for analyzing the structure, dynamics, and social functions of scientific research. However, its intellectual organization and patterns of knowledge integration across subfields remain insufficiently systematized. This study provides a quantitative and comparative mapping of the major subfields of the science of science based on journal-level publication and citation data. Using articles collected from representative international journals, we apply bibliometric and scientometric techniques combined with knowledge graph visualization to analyze research topics, contributing actors, and citation linkages. Topic co-occurrence networks and inter-journal citation networks are constructed to identify the thematic structure of each subfield and to trace knowledge flows within and between them. The results show that all subfields exhibit a heterogeneous and evolving topic structure, in which established research themes coexist with rapidly emerging ones and form distinct evolutionary trajectories. Moreover, knowledge flows are highly concentrated among conceptually proximate subfields, while cross-domain exchanges remain comparatively limited, resulting in a modular but interconnected knowledge system. These findings provide a systematic empirical basis for understanding the interdisciplinary configuration and developmental dynamics of the science of science. |
| 17:58 | Research on Relationship Recognition and Prediction Based on the Integration of Association Rules and Graph Neural Networks PRESENTER: Ziyan Niu ABSTRACT. This study builds models to improve the predictive effectiveness of technology convergence and provides innovative perspectives and methods for technology convergence research. First, the Apriori algorithm is used to model co-occurrence relationships, effectively identifying diverse and directed technology fusion patterns and evolution paths. Second, link prediction models based on five graph neural network algorithms are constructed to extract node and topological features of the technology fusion network, enabling the prediction of fusion relationships. Using artificial intelligence technologies as empirical examples, the link prediction model based on GraphSAGE-GCN achieved an AUC of 0.84, and the consistency between the prediction results and actual data reached 0.7, thereby identifying emerging, strengthening, and declining technology links. Limitations remain: the data and feature dimensions need improvement, high-order association mining is insufficient, and dynamic modeling needs further refinement. Overall, the GraphSAGE-GCN algorithm achieved the best performance in this link prediction task, demonstrating its advantage in uncovering high-value potential fusion opportunities. |
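The Apriori co-occurrence step can be sketched for the pairwise case: count item and pair frequencies across transactions, then keep directed rules A → B that pass support and confidence thresholds. This is a first-pass sketch restricted to pairs (full Apriori iterates to larger itemsets); the technology labels and transactions are invented.

```python
# First Apriori pass, pairs only: mine directed rules A -> B
# with minimum support and confidence. Toy co-occurrence data.
from itertools import combinations

def pair_rules(transactions, min_support=0.4, min_confidence=0.6):
    """Return {(antecedent, consequent): (support, confidence)}."""
    n = len(transactions)
    item_count, pair_count = {}, {}
    for t in transactions:
        for item in t:
            item_count[item] = item_count.get(item, 0) + 1
        for a, b in combinations(sorted(t), 2):
            pair_count[(a, b)] = pair_count.get((a, b), 0) + 1
    rules = {}
    for (a, b), c in pair_count.items():
        if c / n < min_support:
            continue  # prune infrequent pairs
        for x, y in ((a, b), (b, a)):
            conf = c / item_count[x]
            if conf >= min_confidence:
                rules[(x, y)] = (c / n, conf)
    return rules

# Toy technology co-occurrence transactions (illustrative labels).
transactions = [{"CV", "NLP"}, {"CV", "NLP"}, {"CV", "Robotics"}, {"NLP"}]
rules = pair_rules(transactions)
```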
Algorithmic Decision-making in Digital Governance: A Framework of Socio-Technical–Governance Co-Adaptation PRESENTER: Meishan Yang ABSTRACT. At the macro level, the tension between the demands of public governance in post-industrial societies and the logic of technological rationality has become increasingly pronounced. This tension manifests in three structural contradictions: the mismatch between escalating societal complexity and the lagging linear decision-making paradigm; the supply–demand conflict arising from the expanding scale of public services and the growing scarcity of governance-related technical resources; and the compatibility dilemma between heterogeneous discretionary technologies and the homogeneity inherent in traditional bureaucratic systems. Collectively, these dynamics signal that modern governance is approaching a critical threshold of “decision-making crisis.” To address this crisis, algorithmic decision-making (ADM) systems—organized through a closed-loop coupling of data, models, and computational power—have emerged. At the micro level, the deluge of big data serves as the foundational fuel of algorithmic systems and releases administrative value; large-scale models function as the central arena for distributed inference, contextualized discretion, and the enactment of technological rationality; and ultra-computing capacity operates as the driving engine that powers big-data enablement and large-model operations, thereby breaking through performance bottlenecks. From a meso-level perspective, algorithmic decision-making systems constitute the pivotal technological support underpinning the transformation and upgrading of public governance in the digital society. Technological rationality provides the basis for governance alignment; technological embedding enables the empowerment of public governance in the digital era; and technological leaps drive a qualitative shift toward data-intensive discretion in public administration. |
Extending Technological Main Paths Combining SAO Semantic Analysis and Function-oriented Search PRESENTER: Xuan Wu ABSTRACT. To address the issue of unidimensional and incomplete technological main paths caused by time lags in patent citations, this study proposes an integrated framework for extending technological main paths and identifying multi-category technology innovation opportunities. Multi-dimensional technological main paths are first extracted by combining community detection with the SPC algorithm. Then a technical-efficacy matrix based on SAO semantic analysis is mapped to acquire the hot technical efficacies of each main path. Finally, we apply function-oriented search (FOS) to retrieve the latest scientific papers from similar technical domains matching the hot efficacies, and then use technological novelty and similarity indicators to screen and identify technology innovation categories, thereby extending the original main paths. In summary, five main paths and three types of innovation opportunities in the field of ASSLIBs are identified. |
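The SPC (search path count) weighting at the heart of main path extraction can be sketched as follows: each citation edge is weighted by the number of source-to-sink paths traversing it, computed as (paths from any source into the edge's tail) × (paths from its head to any sink). This is a generic textbook sketch of SPC, not the authors' code; the five-edge citation DAG is invented.

```python
# SPC weights on a citation DAG given as (tail, head) edge pairs.

def spc_weights(edges):
    """Edge -> number of source-to-sink paths passing through it."""
    nodes = {n for e in edges for n in e}
    succ = {n: [] for n in nodes}
    pred = {n: [] for n in nodes}
    for a, b in edges:
        succ[a].append(b)
        pred[b].append(a)

    def path_counts(step):
        # Memoized count of paths from each node to the boundary
        # (sinks when step=succ, sources when step=pred).
        memo = {}
        def paths(n):
            if n not in memo:
                memo[n] = sum(paths(m) for m in step[n]) if step[n] else 1
            return memo[n]
        for n in nodes:
            paths(n)
        return memo

    to_sink = path_counts(succ)
    from_source = path_counts(pred)
    return {(a, b): from_source[a] * to_sink[b] for a, b in edges}

# Diamond A->{B,C}->D plus a shared tail edge D->E.
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D"), ("D", "E")]
weights = spc_weights(edges)
```

The shared edge (D, E) lies on both source-to-sink paths and so receives weight 2; main path extraction then greedily follows the highest-weight edges.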
Research Trends in SDGs Interlinkages and Network Construction PRESENTER: Dan Chen ABSTRACT. Against the backdrop of a global sustainability transition, synergies and trade-offs among the Sustainable Development Goals (SDGs) have become a central scientific problem for policy integration and systemic governance. Focusing on the construction of an SDG–SDG interlinkage network, this study integrates bibliometric analysis and rule-based information extraction. Specifically, based on the abstracts of 3,475 publications retrieved from the Web of Science (WoS) Core Collection and the full texts of 195 sampled papers, we (i) use tools such as VOSviewer to analyze countries, journals, and keyword co-occurrence patterns to depict the knowledge structure and the evolution of research hotspots; and (ii) extract sentence-level “SDG–relation–SDG” statements to construct a directed, weighted SDG–SDG network, quantify network density, reciprocity and centrality, and conduct community detection. The results show that synergy accounts for 56.1% of extracted relations, whereas trade-offs account for 43.9%. The network exhibits a hub-dominated structure led by a small set of goals, and four cross-module coupled communities can be identified. This study provides structural evidence for identifying synergy corridors and trade-off hotspots, and for optimizing multi-objective policy portfolios and risk governance. |
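The network-level indicators quantified above (density, reciprocity) have simple definitions for a directed graph, sketched below on a toy edge list; the goal pairs are illustrative, not results from the study.

```python
# Density and reciprocity of a directed network, as reported for
# the SDG-SDG interlinkage network. Toy edge list only.

def density(n_nodes, edges):
    """Observed directed edges over n*(n-1) possible ordered pairs."""
    return len(edges) / (n_nodes * (n_nodes - 1))

def reciprocity(edges):
    """Share of directed edges whose reverse edge also exists."""
    edge_set = set(edges)
    mutual = sum(1 for a, b in edge_set if (b, a) in edge_set)
    return mutual / len(edge_set)

edges = [("SDG7", "SDG13"), ("SDG13", "SDG7"), ("SDG7", "SDG9")]
```

On this toy graph of three goals, density is 3/6 and reciprocity 2/3, since only the SDG7–SDG13 relation runs in both directions.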
An AI-Assisted Framework for Identifying Core Theories in Interdisciplinary Research: A Case Study of Information Resources Management PRESENTER: Mengqiu Zhao ABSTRACT. Interdisciplinary research often faces challenges due to fragmented theoretical systems, hindering systematic knowledge integration and development. Using Information Resources Management as a case study, this research proposes an AI-assisted framework to identify, classify, and visually represent core theories. The framework employs a five-stage hybrid paradigm that combines theoretical foundation building, automated literature processing, multidimensional classification, expert validation, and knowledge graph construction. By integrating natural language processing, large language models, and human expert input, the study establishes a scalable theoretical maturity assessment system and uncovers relational structures among theories. This approach improves the objectivity, efficiency, and systematization of theory extraction in interdisciplinary fields. The framework is designed to be transferable and offers a methodological foundation for theory-driven knowledge discovery. Future work will focus on refining the classification system through expert feedback and extending the methodology to domains such as innovation management. |
Extending Knowledge Organization toward Reasoning Organization: The Layered Framework for Accountable Reasoning in AI Systems ABSTRACT. AI systems are increasingly deployed in high-risk decision-making contexts, where the credibility of AI-assisted judgments depends not only on outcome correctness but on whether reasoning processes can be examined, audited, and attributed, revealing a structural mismatch between knowledge-oriented organizational frameworks and procedural, temporally unfolding reasoning. Accordingly, this paper introduces Reasoning Organization (RO) as an organizational and governance-level interface that treats reasoning processes as explicit objects of organization, specifying how AI-assisted reasoning should be structured, constrained, and recorded, rather than describing internal model computations or algorithmic implementations. The RO framework operates through two cross-cutting dimensions—Semantic Alignment, which ensures consistent interpretation of concepts, evidence, and states throughout reasoning, and Human–AI Collaboration, which allocates responsibilities across the reasoning workflow. At its core lies the Reasoning Object Stream O(t), which represents the judgment carrier that is progressively reconstructed over the course of reasoning. Reasoning Organization is realized through a layered architecture in which the Rule Layer defines admissible state transitions through explicit rule objects that constrain the Execution Layer, the Execution Layer reconstructs O(t) and produces execution traces, and the Accountability Layer audits O(t) and its traces by reorganizing execution records and feeding governance back to the Rule Layer through rule revision. 
From this layered architecture, three structural properties emerge as necessary consequences: inspectability, whereby rules, states, and transitions are explicitly identifiable and examinable; verifiability, whereby execution traces enable reasoning paths to be replayed and checked for consistency; and governability, whereby reasoning outcomes can be audited and corrected through rule revision and responsibility reallocation. These properties form a logical dependency chain from structured to executable and ultimately accountable reasoning. As a Research-in-Progress study, this paper contributes a conceptual and organizational framework for reasoning governance, while domain-specific instantiation and system-level implementation are reserved for future research. |
How Do User Discourses Shape Brand Community? PRESENTER: Zhipeng Chen ABSTRACT. This study conceptualizes brand communities as information systems composed of heterogeneous user interactions. Using large-scale Reddit data and a theory-driven LLM framework, community discourse is structured into brand competitiveness assessment and brand value co-creation processes. The results reveal distinct emotional dynamics across discourse types, highlighting how information structures shape collective sentiment in brand communities. |
Patent Analytics for Mapping SDGs Interlinkages and Identifying Critical Technologies PRESENTER: Qingyun Liao ABSTRACT. Purpose: This study aims to systematically examine the role of technological innovation in advancing the United Nations Sustainable Development Goals (SDGs), identifying both general-purpose and goal-specific technologies that drive progress. Design/methodology/approach: Drawing on patent data from the PatentSight database (2015–2024), we construct an SDG–technology mapping matrix covering 100 technology categories across 13 SDGs. We develop and apply the Technology Contribution Index (TCI), which integrates coverage, patent volume, diversity, and contribution intensity. Additionally, cosine similarity is applied to capture interlinkages among SDGs. Findings: The results reveal marked heterogeneity in technological engagement across the SDGs. SDG 09 (Industry, Innovation and Infrastructure), SDG 07 (Affordable and Clean Energy), and SDG 13 (Climate Action) dominate in portfolio size and form a tightly interconnected cluster, whereas socially oriented goals such as SDG 05 (Gender Equality) and SDG 01 (No Poverty) remain weakly supported by patents. Large-scale, general-purpose technologies (e.g., Advanced Manufacturing, Blockchain, Internet of Things) provide broad systemic support across multiple SDGs, while specialized technologies (e.g., Clean Cooking, Sex-Disaggregated Data Management) play indispensable roles in individual goals. These findings highlight the coexistence of systemic platforms and niche drivers in the global innovation ecosystem. Research limitations: Patent data may underrepresent non-technological contributions to socially oriented SDGs (e.g., equity, education, institutional reform), which rely more heavily on policy, cultural, or behavioral interventions. 
Practical implications: The TCI framework offers a tool for policymakers to prioritize investment in enabling technologies with broad spillover effects, while also promoting targeted support for niche innovations to address technology gaps in under-supported SDGs. Originality/value: By integrating patent analytics with the SDGs framework, this study introduces a novel quantitative approach—the TCI—for systematically identifying key enabling and goal-specific technologies. The findings enrich the literature on sustainable development and innovation policy, and provide actionable insights for aligning science, technology, and innovation (STI) strategies with the 2030 Agenda. |
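The cosine-similarity step used to capture SDG interlinkages can be sketched directly: each SDG is represented as a row of the SDG–technology patent-count matrix, and similarity between goals is the cosine of the angle between their rows. The count vectors below are invented for illustration, not the study's data.

```python
# Cosine similarity between SDG rows of a toy SDG x technology
# patent-count matrix (illustrative values only).
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical patent counts over four technology categories.
sdg07 = [120, 80, 0, 5]   # clean energy
sdg13 = [100, 60, 10, 0]  # climate action
sdg05 = [0, 0, 3, 40]     # gender equality
```

On these toy vectors SDG 07 and SDG 13 are far more similar than SDG 07 and SDG 05, mirroring the tightly interlinked energy–climate cluster the findings describe.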
Who to Collaborate in “AI for Science”? How Authors’ Knowledge Composition Shapes Network Dynamics PRESENTER: Yangyang Jia ABSTRACT. The “AI for Science” (AI4Science) revolution driven by AI-empowered scientific research is transforming modes of knowledge production and innovation. This paper empirically examines how the knowledge composition of scientists influences the formation and evolution of collaboration networks in the context of AI-empowered scientific research. We first propose an LLM-based approach to identify AI4Science publications. Using AI4Science publications in the field of materials chemistry, we construct collaboration networks with researchers as nodes and co-authorship ties as edges. Stochastic Actor-Oriented Models (SAOMs) are applied to model the network dynamics across different periods, enabling a longitudinal analysis of collaboration patterns. The findings indicate that domain experience, AI experience, and AI4Science experience play distinct roles: scientists with more AI or AI4Science experience are more inclined to establish AI4Science partnerships, while those with more domain experience show the opposite tendency. |
Dual Transitions under De-Globalization: Evolutionary Pathways of Emerging Economies' Future Industrial Innovation Ecosystems PRESENTER: Manting Luo ABSTRACT. In the context of de-globalization, emerging economies face the dual challenge of sustaining leadership in advantaged industries while enabling breakthroughs in disadvantaged ones. Drawing on innovation ecosystem theory, this paper proposes a "dual transition" framework to explain divergent industrial pathways under changing global conditions. Using multi-source data and comparative case analysis of the new energy vehicle and biopharmaceutical industries, the study identifies two distinct innovation ecosystem trajectories. Advantaged industries undergo an "outward transition" driven by technological upgrading and international expansion, whereas disadvantaged industries follow an "inward transition" centered on import substitution and indigenous innovation. Based on these findings, the paper develops an "outward expansion–inward breakthrough" framework, highlighting the need to balance global competitiveness with indigenous innovation to support industrial upgrading in emerging economies. |
Reconfiguring Technology Governance under Deglobalization and Techno-nationalism: Evidence from the Global Semiconductor Value Chain PRESENTER: Siyu Pan ABSTRACT. Under conditions of deglobalization and techno-nationalism, critical technologies are increasingly governed through security-oriented policy interventions rather than efficiency-driven market coordination. Focusing on the global semiconductor value chain, this study examines how such shifts translate into concrete governance outcomes. The paper develops an analytical framework linking structural drivers, policy instruments, operative mechanisms, and governance outcomes, and applies comparative policy analysis and process tracing to three cases: overseas fabrication investment, export controls on advanced manufacturing equipment, and competition in advanced chips. The analysis suggests that security-oriented instruments—such as export controls, industrial subsidies, investment screening, and alliance-based coordination—reconfigure value chain dynamics by reshaping technological diffusion, production location, and rule alignment. Rather than producing uniform decoupling, these mechanisms generate differentiated and fragmented forms of technology governance, increasing regulatory complexity for multinational firms. The study contributes to debates on global technology governance under geopolitical uncertainty by unpacking the mechanisms through which techno-nationalist policies reshape value chain governance. |
A Hybrid Topic Modeling Framework Integrating Graph-Based Clustering and Large Language Models PRESENTER: Tao Zhang ABSTRACT. Existing topic modeling approaches face two principal limitations: clustering-based methods (e.g., BERTopic) rely heavily on dimensionality reduction, resulting in inevitable information loss, while methods based on large language models (LLMs) (e.g., TopicGPT) often fail to capture the intrinsic structure of document collections. To address these issues, this paper introduces a novel hybrid topic modeling framework that integrates graph-based clustering and LLMs, aiming to ensure both the robustness of topic structure discovery and the integrity of topic semantics. Its core design abandons the traditional dimensionality-reduction step and computes directly in the original high-dimensional semantic space. Specifically, the proposed framework first employs an embedding model to obtain document vectors and constructs a K-nearest-neighbor graph (K-NNG). The Leiden algorithm is then applied for community detection, forming initial document clusters. To enhance cluster purity, a semantic-similarity-based document filtering mechanism is introduced. Finally, LLMs automatically transform semantically coherent document clusters into interpretable topic labels and detailed descriptions. Experiments on the Bills and Wiki datasets demonstrate that the proposed framework outperforms mainstream baselines such as BERTopic on key evaluation metrics, including topic coherence and alignment. Human evaluations further confirm its superior interpretability. Notably, our analysis reveals that the choice of embedding model has limited impact on final topic quality, offering practical guidance for model selection in resource-constrained scenarios. 
Overall, this research contributes an innovative hybrid framework that effectively combines graph-based clustering and LLMs for topic modeling, experimentally validates its superiority, and provides a novel and practical solution to the field. |
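The pipeline this abstract describes — embed documents, build a K-nearest-neighbor graph in the original high-dimensional space, then detect communities — can be illustrated with a minimal K-NNG construction. This is a toy sketch using hand-made 2-D "embeddings" and cosine similarity; in the actual framework a neural embedding model supplies the vectors and the Leiden algorithm performs community detection, neither of which is reproduced here.

```python
from math import sqrt

def cosine(u, v):
    # Cosine similarity between two dense vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(a * a for a in v))
    return dot / (nu * nv) if nu and nv else 0.0

def knn_graph(vectors, k=2):
    # Edge list of the K-nearest-neighbor graph, built in the
    # ORIGINAL embedding space -- no dimensionality reduction.
    edges = set()
    for i, vi in enumerate(vectors):
        sims = sorted(
            ((cosine(vi, vj), j) for j, vj in enumerate(vectors) if j != i),
            reverse=True,
        )
        for _, j in sims[:k]:
            edges.add((min(i, j), max(i, j)))  # undirected edge
    return sorted(edges)

# Toy "document embeddings": two obvious clusters of three documents.
docs = [(1.0, 0.1), (0.9, 0.2), (0.95, 0.0),
        (0.1, 1.0), (0.0, 0.9), (0.2, 0.95)]
print(knn_graph(docs, k=2))
```

On this toy graph the connected components already separate the two clusters; on real data the graph is much denser and the Leiden step partitions it into the initial document clusters that the LLM then labels.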
A Study on the Convergence Mechanism Between Artificial Intelligence and Carbon Neutrality Technologies: A Patent Semantic Network Approach PRESENTER: Yanbing Li ABSTRACT. Understanding how artificial intelligence (AI) enables carbon neutrality (CN) technologies is critical for advancing low-carbon innovation. Existing studies on technology convergence largely rely on citation-based or co-occurrence-based relationships, and often treat convergence as a symmetric process, limiting their ability to capture early-stage and enabling interactions. This study proposes a semantic-based framework to examine AI–carbon neutrality convergence using patent data. An AI-agent approach is employed to match patents with relevant technology keywords. A heterogeneous patent-keyword graph is constructed, and a graph convolutional network (GCN) is used to jointly learn semantic representations of patents and technology keywords. Based on the learned vectors, we develop multidimensional convergence indicators capturing directionality, integration depth, temporal dynamics, and quality. The proposed framework provides a flexible tool for identifying AI-enabled carbon-neutral innovations and contributes to technology convergence research and climate-related policy analysis. |
Demand Modeling–Driven Technology Supply–Demand Matching: A Case Study in the Biopharmaceutical Domain PRESENTER: Jiwen Liang ABSTRACT. Accurate matching between enterprise technology demands and academic supplies is crucial for technology transfer. Traditional keyword methods struggle with fine-grained semantic relations. This study presents a demand-driven matching framework combining LLM-based semantic structuring and task-adaptive embeddings. Demands are decomposed into structured components, and similarity is computed via lexical and embedding models. Contrastive learning and multi-model ranking enhance retrieval. Experiments in the biopharmaceutical domain show significant gains in Precision, Recall, and F1, confirming the approach’s effectiveness. |
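The hybrid scoring idea underlying this matching framework — combining lexical and embedding-based similarity before ranking — can be sketched as follows. This is a minimal illustration with a toy Jaccard lexical score, hand-made vectors, and a fixed mixing weight `alpha`; the paper's contrastive learning and multi-model re-ranking stages are not reproduced, and all texts and vectors are invented.

```python
def jaccard(a, b):
    # Lexical overlap between whitespace-token sets.
    a, b = set(a.split()), set(b.split())
    return len(a & b) / len(a | b) if a | b else 0.0

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    nu = sum(x * x for x in u) ** 0.5
    nv = sum(x * x for x in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def hybrid_rank(demand_text, demand_vec, supplies, alpha=0.5):
    # supplies: list of (text, vec). Score is a weighted mix of
    # lexical (Jaccard) and semantic (cosine) similarity.
    scored = [
        (alpha * jaccard(demand_text, t) + (1 - alpha) * cosine(demand_vec, v), t)
        for t, v in supplies
    ]
    return [t for _, t in sorted(scored, reverse=True)]

demand = ("antibody purification process", [1.0, 0.0])
supplies = [
    ("antibody purification membrane", [0.9, 0.1]),
    ("battery electrode coating", [0.0, 1.0]),
]
print(hybrid_rank(*demand, supplies))
```

Mixing the two signals lets a lexically weak but semantically close supply still rank highly, which is the motivation for pairing lexical models with task-adaptive embeddings.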
How Big Scientific Facilities Contribute to SDG Science PRESENTER: Xinyao Wang ABSTRACT. Big science facilities are core infrastructures for scientific advancement, playing a vital role in discovering novel solutions to achieve the Sustainable Development Goals (SDGs). We constructed a dedicated dataset to systematically compare the SDG contributions made by publications supported by big scientific facilities, the publication profiles of different types of facilities in both SDG and non-SDG domains, and the distribution and topical landscape of big scientific facilities in SDG-related research, thereby revealing the association between large scientific facilities and SDG research from a macro perspective. Results show a rising focus on big scientific facilities in SDG-related research, which is oriented towards technological applications rather than social systems; synchrotron light sources (SLSs) stand out as the most frequently discussed facility type. This study expands the research landscape of big scientific facilities and SDGs, revealing the macro-level distribution of big scientific facilities in SDG research. |
A Cross-Modal Tech Mining Framework for Forecasting Innovation: Evidence from Additive Manufacturing PRESENTER: Jing Bian ABSTRACT. In this paper, we propose a cross-modal tech mining framework that integrates patent schematics and open-source code, aiming to uncover innovation patterns missed by text-only analysis. Traditional tech mining relies on patent and literature text, overlooking rich information in patent drawings and code repositories. By embedding patent figures and code documentation together via vision-language models (e.g., CLIP), we bridge the “implementation gap” between design (patents) and execution (code) in additive manufacturing. A pilot case using shape-memory-alloy and self-healing-polymer technologies demonstrates hidden links and innovation trajectories that text-based methods cannot reveal. |
Research on Patent Technology Topic Evolution Identification Based on VSM and D-S Evidence Theory PRESENTER: You Li ABSTRACT. The analysis of patent technology evolution is a crucial method for identifying technological trajectories and forecasting frontier directions. However, extant approaches face significant challenges in dynamic environments, particularly in robustly handling conflicting evidence from multi-source data and in capturing the temporal dynamics of relational weights. To address these limitations, this study proposes a novel patent technology topic evolution identification model integrating the Vector Space Model (VSM) and Dempster-Shafer (D-S) evidence theory. The model employs VSM for feature quantification and similarity calculation within small-sample time windows, generating confusion matrices to dynamically derive fusion weights for D-S evidence theory. This design enhances robustness in high-conflict scenarios and reveals the differential integration of multivariate relationships—textual co-occurrence (MB), citation coupling (MC), and applicant coupling (MP)—across technological stages. An empirical study in the graphene sensing technology domain validates the framework. The findings indicate a substantial degree of complementarity among the three heterogeneous networks. Compared to the static entropy weight method, the proposed dynamic weighting strategy effectively overcomes the early-stage data sparsity caused by independent time-window segmentation, particularly by restoring latent cross-temporal MP associations. The constructed temporal topic networks and subsequent CONCOR clustering delineate the field's evolution from fundamental material analysis towards diversified, integrated, and application-oriented research themes. |
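The evidence-fusion core of such a model is Dempster's rule of combination: the fused mass of a hypothesis A is the (normalized) sum of products m1(B)·m2(C) over all intersecting focal elements B ∩ C = A, with the conflict mass K (products landing on the empty set) removed by dividing by 1 − K. The sketch below shows only this rule on two toy mass functions; the paper's contribution — deriving the fusion weights dynamically from VSM confusion matrices per time window — sits on top of this and is not reproduced, so the focal elements and masses here are purely illustrative.

```python
from itertools import product

def combine(m1, m2):
    # Dempster's rule of combination: fuse two mass functions whose
    # focal elements are frozensets of hypotheses.
    fused, conflict = {}, 0.0
    for (b, p), (c, q) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            fused[inter] = fused.get(inter, 0.0) + p * q
        else:
            conflict += p * q  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    # Normalize by 1 - K so the fused masses sum to one.
    return {a: v / (1.0 - conflict) for a, v in fused.items()}

# Toy example: two evidence sources (think: co-occurrence vs. citation
# networks) over hypotheses {A, B} about a topic link.
A, B = frozenset("A"), frozenset("B")
m_text = {A: 0.6, B: 0.3, A | B: 0.1}
m_cite = {A: 0.5, B: 0.4, A | B: 0.1}
m = combine(m_text, m_cite)
print(round(m[A], 3))
```

Note how the conflict mass K = 0.39 here is redistributed proportionally, which is exactly the high-conflict behavior that motivates weighting the sources before combining them.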
From Disruption to Reconstruction: Reconfiguring the Innovation Ecosystem of Future Industries under De-globalization: A Case Study of Quantum Technology PRESENTER: Bowen Tian ABSTRACT. Under the combined pressures of de-globalization, techno-nationalism, and an increasingly volatile external environment, future industries can no longer rely on single organizations to integrate critical innovation resources. Drawing on innovation ecosystem theory, this paper argues that the coordinated development of a "dual-cycle framework" constitutes the core mechanism for reconstructing innovation ecosystems in future industries. Taking quantum technology as a case study, this research employs multi-source data and comparative case analysis to examine how technological barriers imposed by major economies, particularly the United States, have fragmented traditional global value chains. It further investigates China's practices of domestically driven independent innovation under the internal cycle, alongside regionally oriented external-cycle collaboration with the European Union and other partners. The findings indicate that ecosystem adaptability is fundamental to the functioning of the dual-cycle framework, while proactive government intervention in collaborative technological R&D significantly enhances systemic resilience. This study provides theoretical insights and policy-relevant implications for innovation ecosystem reconstruction under de-globalization. |
Divergent Paths in Corporate Basic Research: A Comparative Case Study of State Grid and Huawei PRESENTER: Liyang Liu ABSTRACT. This comparative case study analyzes the divergent models of corporate basic research between State Grid (a state-owned enterprise) and Huawei (a private firm). Driven by national energy security mandates, State Grid employs a mission-driven, hierarchical model characterized by centralized governance and high-volume, engineering-oriented outputs focused on international standards and systemic risk mitigation. In contrast, Huawei adopts a capability-driven, decentralized model to navigate global technological uncertainty, utilizing its "2012 Laboratories" to foster exploratory research that yields high-impact publications and foundational patents. The findings suggest that ownership and industry contexts fundamentally shape research logic, indicating that policy evaluations should be tailored to these distinct organizational structures—prioritizing system impact for SOEs and incentivizing high-risk exploration for private firms. |
Tracing Conceptual Stabilization in Policy Discourse: A Triadic Closure Approach PRESENTER: Yan Liu ABSTRACT. Policy-making relies on conceptual structures, yet the evolution and stabilization of conceptual relationships within policy discourse remain a central, under-measured problem for understanding institutional change. The relational stabilization of policy concepts within strategic discourse is examined through a computational analysis of China's Five-Year Plans (1953–present). Specifically, a hybrid methodology is developed by integrating large language models, network analysis, and statistical testing to trace the evolution of indirect conceptual associations into structured triadic patterns over time. Rather than equating conceptual prominence with institutionalization, the analytical focus is deliberately placed on the dynamic formation and closure of conceptual triads across sentence, paragraph, and chapter levels. Methodologically, triadic closure is operationalized as a measurable indicator of relational stabilization, thereby reflecting the gradual embedding of concepts into background ideational structures. Subsequently, significant variation in closure rates is anticipated across policy domains and textual granularities, with higher stability expected in established policy areas and at broader textual levels. Furthermore, a theoretical contribution is made to discursive institutionalism by computationally operationalizing relational mechanisms, while simultaneously demonstrating how LLM-enabled frameworks can bridge theoretical claims and empirical text analysis. Ultimately, this research offers a replicable analytical pipeline for examining conceptual evolution in large-scale policy corpora, thereby providing both a methodological framework and substantive insights into the processes of discursive institutionalization. |
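Triadic closure as a measurable indicator can be operationalized as the global clustering coefficient of a concept co-occurrence network: the share of connected triples (a concept linked to two others) that are closed into triangles. Below is a minimal sketch on an invented toy network; the paper's LLM-based extraction, multi-level granularity, and statistical testing are not shown, and the concept labels are illustrative.

```python
from itertools import combinations

def closure_rate(edges):
    # Fraction of connected triples (open triads centred on a node)
    # that are closed into triangles. Each triangle is counted once
    # per centre node, so this equals 3 * triangles / triples --
    # the standard global clustering coefficient.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    triples = closed = 0
    for node, nbrs in adj.items():
        for a, b in combinations(sorted(nbrs), 2):
            triples += 1
            if b in adj[a]:
                closed += 1
    return closed / triples if triples else 0.0

# Toy concept co-occurrence network: "market"-"reform"-"opening"
# form a closed triad, while "planning" hangs off one concept.
edges = [("market", "reform"), ("reform", "opening"),
         ("market", "opening"), ("reform", "planning")]
print(closure_rate(edges))
```

Tracking this rate per time window and per textual level (sentence, paragraph, chapter) would then yield the closure trajectories the abstract proposes as evidence of relational stabilization.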