Tutorial: Neural Network Design and Large Language Models (NASL2M)
ABSTRACT. Neural Architecture Search (NAS) has proven to be essential for generating neural network architectures to solve image classification, segmentation and language translation problems. With the rapid development of the field of large language models (LLMs), a synergistic relationship has developed between NAS and LLMs: NAS has been effective in developing efficient architectures that ease the deployment of LLMs, while LLMs have in turn been used to drive NAS. This tutorial examines this synergistic relationship.
The tutorial first gives an overview of NAS, covering its purpose, the approaches used, performance evaluation (including performance estimation with proxies, surrogates and predictors), efficient NAS (ENAS), and NAS benchmarks. It then provides an overview of LLMs, describing the different LLMs and their related challenges. The use of NAS for the design of LLMs, including LLM distillation, LLM compression, hardware-efficient LLMs and fair LLMs, will be presented. Finally, the tutorial will examine how LLMs can be used to improve NAS, covering architecture generation, parameter tuning, knowledge transfer, performance prediction and LLM hybrids.
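To make the proxy/predictor idea from the overview concrete, the following is a minimal self-contained sketch of two-stage, prediction-based NAS. The search space, the cost model in true_accuracy, and the 1-NN predictor are all illustrative assumptions, not material from the tutorial itself:

import random

# Hypothetical toy search space: each architecture is (depth, width, kernel).
SPACE = [(d, w, k) for d in (2, 4, 8) for w in (16, 32, 64) for k in (3, 5)]

def true_accuracy(arch):
    """Stand-in for an expensive full training run (simulated here)."""
    d, w, k = arch
    return 0.5 + 0.04 * d + 0.002 * w - 0.01 * k + random.gauss(0, 0.01)

def proxy_score(arch):
    """Cheap zero-cost proxy: a crude capacity estimate, no training needed."""
    d, w, k = arch
    return d * w * k

def fit_predictor(history):
    """Fit a trivial 1-NN 'performance predictor' on (arch, accuracy) pairs."""
    def predict(arch):
        nearest = min(history,
                      key=lambda h: sum((a - b) ** 2 for a, b in zip(h[0], arch)))
        return nearest[1]
    return predict

random.seed(0)
# Stage 1: rank the whole space with the cheap proxy, keep a shortlist.
shortlist = sorted(SPACE, key=proxy_score, reverse=True)[:10]
# Stage 2: fully evaluate a few candidates, fit a predictor, rank the rest.
history = [(a, true_accuracy(a)) for a in shortlist[:3]]
predict = fit_predictor(history)
best = max(shortlist, key=predict)
print("selected architecture:", best, "predicted accuracy:", round(predict(best), 3))

The design point is that only three architectures ever pay the full training cost; the proxy and the predictor absorb the rest of the search, which is exactly the efficiency argument behind ENAS-style methods.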
ABSTRACT. This tutorial provides a comprehensive overview of the emerging fifth paradigm of scientific discovery, driven by artificial intelligence. We will cover the landscape from foundational AI methodologies—including geometric deep learning, self-supervised learning, and generative models—to their application across the scientific workflow. The tutorial will detail how AI is used to generate hypotheses, design and steer experiments, and interpret vast datasets. State-of-the-art breakthroughs will be showcased through case studies in materials science, drug discovery, and climate science. We will also address grand challenges such as data quality, model generalizability, and causality. Attendees will gain a principled understanding of the opportunities and pitfalls of AI for Science.
ABSTRACT. This hands-on tutorial on Adaptive Machine Learning (AML) introduces real-time, incremental learning techniques for streaming and continually evolving data. Using CapyMOA, an open-source Python library, participants will explore practical tools and algorithms that adapt to changing data distributions, enabling robust, low-latency learning in dynamic environments. It is ideal for researchers and practitioners aiming to build scalable, adaptive solutions.
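To make the streaming setting concrete, here is a minimal pure-Python sketch of the test-then-train (prequential) loop that stream-learning libraries such as CapyMOA build on. The synthetic stream, the drift point and the MajorityClassLearner are hypothetical stand-ins for illustration, not CapyMOA's API:

import random

class MajorityClassLearner:
    """Trivial incremental baseline: predicts the most frequent label so far."""
    def __init__(self):
        self.counts = {}
    def predict(self, x):
        return max(self.counts, key=self.counts.get) if self.counts else 0
    def train(self, x, y):
        self.counts[y] = self.counts.get(y, 0) + 1

def synthetic_stream(n=10_000, drift_at=5_000):
    """Binary stream whose label distribution flips mid-way (concept drift)."""
    for i in range(n):
        p = 0.8 if i < drift_at else 0.2  # distribution changes at drift_at
        yield i, int(random.random() < p)

random.seed(42)
learner, correct, seen = MajorityClassLearner(), 0, 0
for x, y in synthetic_stream():
    correct += int(learner.predict(x) == y)  # 1) test on the new instance
    learner.train(x, y)                      # 2) then train on it
    seen += 1
print(f"prequential accuracy: {correct / seen:.3f}")

Each instance is used for evaluation before it is used for training, so accuracy is measured on genuinely unseen data; the drop after the drift point is what adaptive learners (unlike this fixed baseline) are designed to recover from.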
[Invited talk] Retrieval–Reasoning Enhanced Generation for Radiology Reports: Experience from the NTCIR-18 Hidden-RAD Task
ABSTRACT. This talk presents the experience of our team in the NTCIR-18 Hidden-RAD Task, which focused on generating causality-based diagnostic inferences from radiology reports. In Subtask 1, we developed a cost-efficient API-driven inference pipeline that integrates few-shot in-context learning, retrieval-enhanced prompting, and strict candidate selection with an evaluation checklist. By dynamically enriching prompts with retrieved similar cases, this approach achieved 1st place in the official evaluation. In Subtask 2, we introduced PRISMA-Guided Causal Explanation, a structured prompt-based reasoning method that improved interpretability and secured 2nd place. We also explored fine-tuning with domain-specific prompting, which, while not included in the final ranking, demonstrated promise for improving adaptability and interpretability.
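As an illustration of the retrieval-enhanced prompting pattern described above (a generic sketch, not the team's actual Hidden-RAD pipeline), the snippet below retrieves similar past cases and splices them into a few-shot prompt. The in-memory CASE_STORE, the string-similarity retriever and the prompt wording are hypothetical placeholders; a real system would use an embedding index and an LLM API call on the resulting prompt:

from difflib import SequenceMatcher

# Hypothetical in-memory case store; a real system would use a vector index.
CASE_STORE = [
    {"report": "Consolidation in right lower lobe.",
     "inference": "Findings consistent with pneumonia."},
    {"report": "Cardiomegaly with pulmonary edema.",
     "inference": "Suggests decompensated heart failure."},
]

def retrieve_similar(report, k=2):
    """Rank stored cases by cheap string similarity (stand-in for embeddings)."""
    return sorted(CASE_STORE,
                  key=lambda c: SequenceMatcher(None, c["report"], report).ratio(),
                  reverse=True)[:k]

def build_prompt(report):
    """Dynamically enrich a few-shot prompt with retrieved similar cases."""
    shots = "\n\n".join(f"Report: {c['report']}\nInference: {c['inference']}"
                        for c in retrieve_similar(report))
    return (f"{shots}\n\nReport: {report}\n"
            "Inference (state the causal chain linking findings to diagnosis):")

print(build_prompt("Patchy consolidation in the left lower lobe."))
# The generated prompt would be sent to an LLM; candidate answers would then
# be screened against an evaluation checklist before the final selection.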
Building on these results, the talk will further explore advances toward reasoning-enhanced methods and test-time adaptation, including dynamic retrieval strategies, hybrid symbolic–neural reasoning frameworks, and lightweight inference-time tuning. These approaches aim to strengthen explainable AI in radiology, bridging the gap between automated diagnostic inference and human expert decision-making.