ABSTRACT. To achieve ultra-low-power (µW-level), low-latency (µs-ms-level), small-footprint (< a few mm), and private inference at the edge of the edge for time-critical automotive systems, sensor manufacturers now integrate custom processing cores directly within the sensor die. In this talk, we discuss the architecture and software tools for sensors with a built-in machine learning core, finite state machine, and intelligent sensor processing unit, useful in secure automotive, industrial, medical, and consumer applications. We introduce ST AIoT Craft, a secure, containerized sensor-gateway-cloud framework for realizing AIoT systems containing millions of these sensors without writing any code. The platform enables secure sensor data management, data processing, data visualization, automatic on-sensor tiny machine learning, deployment, and life cycle management.
ABSTRACT. Adaptive Cruise Control (ACC) is a widely used driver assistance technology for maintaining a desired speed and a safe distance to the leading vehicle. In this talk, I will introduce our recent paper that evaluates the security of deep neural network (DNN) based ACC systems under runtime stealthy perception attacks that strategically inject perturbations into camera data to cause forward collisions. We present a context-aware strategy for selecting the most critical times to trigger the attacks and a novel optimization-based method for adaptively generating image perturbations at runtime. We evaluate the effectiveness of the proposed attack using an actual vehicle, a publicly available driving dataset, and a realistic simulation platform with the control software from a production ACC system, a physical-world driving simulator, and interventions by the human driver and safety features such as the Advanced Emergency Braking System (AEBS). Experimental results show that the proposed attack achieves a 142.9× higher success rate in causing hazards and an 82.6% higher evasion rate than baselines, while remaining stealthy and robust to real-world factors and dynamic changes in the environment. This study also highlights the role of human drivers and basic safety mechanisms in preventing such attacks.
Towards Understanding User Privacy Concerns of Internet of Things Sensor Data
ABSTRACT. Despite the wide adoption of Internet of Things (IoT) devices in people’s lives, their privacy implications remain unclear to many users. Although privacy policies are the major mechanism for delivering this information, and tools have been developed to help users interpret these policies, there is still a gap between users’ perceived privacy and the actual privacy risks they face. This is especially true for indirect sensing modalities, where users’ comprehension of and concerns about their privacy remain unknown. We present PrivacyVis, a novel visualization tool that provides an informative and expressive visual representation of the sensors, data processing workflows, and associated privacy risks of IoT devices. Designed to be user-friendly, the tool aims to enhance users’ understanding and empower them to make informed decisions about their privacy. PrivacyVis also allows us to conduct efficient user surveys to understand user concerns and perceived privacy for IoT devices.
Intermittent Power, Continuous Protection: Security and Privacy for Batteryless Devices in IoT
ABSTRACT. Batteryless devices are gaining widespread adoption because they eliminate the need for batteries through energy harvesting. This not only reduces electronic waste but also facilitates sensing and monitoring in physically inaccessible or challenging environments, such as implantable devices, smart agriculture, and forests. Consequently, batteryless devices are increasingly being integrated into Internet of Things (IoT) environments, such as smart homes, healthcare facilities, and agricultural systems, as well as cyber-physical systems (CPS), such as industrial plants and wearable technologies. In these contexts, batteryless devices often handle privacy-sensitive information and must interact seamlessly and securely with both other batteryless devices and traditional battery-powered devices to share data and perform tasks.
In this paper, we first identify the unique security and privacy challenges associated with batteryless devices. We then analyze the existing security and privacy solutions within the IoT and CPS domains, highlighting their limitations when applied to batteryless devices. Lastly, we propose new research directions at the intersection of sensing, IoT, security, and privacy to address the operational requirements of batteryless devices. This paper is an important step forward in protecting the security and privacy of batteryless devices as well as their integration into IoT systems and CPS.
5G Connectivity roadmap and challenges for edge devices to cloud through 5G/6G channels
ABSTRACT. The future of computing will be based on connectivity between edge devices and cloud-based AI data centers and applications such as AI agents. With Moore’s Law reaching its limits, the semiconductor industry is moving towards Gate-All-Around (GAA) devices and 2.5D/3D heterogeneous integration. Key challenges include overcoming propagation losses, signal blockage, and energy management, especially in RF/mm-wave systems. The IEEE Heterogeneous Integration Roadmap (HIR) highlights future trends driven by AI/ML for silicon and by 5G/6G for III-V materials. Challenges in mm-wave technologies are addressed partly by phased array systems. Tight integration of antennas, RF transceivers, and processors is key for future AI-enabled applications. Are millimeter-wave 5G and sub-THz 6G required? This talk will cover microelectronics trends, the HIR roadmap, market drivers, and the impact of IMT-2030 on the microwave field.
5G security and AI/ML - Opportunities and Challenges
ABSTRACT. Next-generation cellular networks such as 5G and 6G promise to support emerging applications such as enhanced mobile broadband, mission-critical applications for first responders, remote surgery, and industrial IoT, among others. While Network Function Virtualization and Software Defined Networking open the door to programmable networks and rapid service creation, they also offer security opportunities while introducing additional challenges and complexities. The talk focuses on various security challenges and opportunities introduced by 5G enablers such as the hypervisor, Virtual Network Functions (VNFs), the SDN controller, the orchestrator, network slicing, cloud RAN, edge cloud, and virtual security functions. It introduces a threat taxonomy for 5G security from an end-to-end system perspective, covering interfaces, protocols, potential threats introduced by these enablers, and associated mitigation techniques. Additionally, the talk highlights how AI/ML can help enhance the security features of these networks and elaborates on some adverse effects of AI/ML. Finally, it introduces some of the ongoing activities within various standards communities, including open-source consortiums and large-scale NSF testbeds, and illustrates a few deployment use-case scenarios.
Local Ratio based Real-time Job Offloading and Resource Allocation in Mobile Edge Computing
ABSTRACT. Mobile Edge Computing (MEC) has emerged as a promising paradigm enabling vehicles to handle computation-intensive and time-sensitive applications for intelligent transportation. Due to the limited resources in MEC, effective resource management is crucial for improving system performance. While existing studies mostly focus on the job offloading problem and assume that job resource demands are fixed and given a priori, the joint consideration of job offloading (selecting the edge server for each job) and resource allocation (determining the bandwidth and computation resources for offloading and processing) remains underexplored. This paper addresses the joint problem for deadline-constrained jobs in MEC with both communication and computation resource constraints, aiming to maximize the total utility gained from jobs. To tackle this problem, we propose an approximation algorithm, IDAssign, with an approximation bound of 1/6, and experimentally evaluate its performance against state-of-the-art heuristics using a real-world taxi trace and object detection applications.
NAMP: A Network-Aware Model Partitioning Framework for Constrained Devices
ABSTRACT. Despite the growing computing capacity of sensors, enabling them to execute Machine Learning (ML) models without requiring cloud resources remains a challenge given their limited computational power, memory, and energy. Current ML frameworks for developing, deploying, and executing ML models typically overlook these limitations and assume monolithic models that run on a single device. This paper proposes the Network-Aware Model Partitioning (NAMP) framework, which breaks neural networks into smaller submodels according to the resources available in the system. The submodels are deployed as services on a network of constrained devices to provide distributed inference. Two emulated scenarios with 15 memory-limited devices show NAMP deploying a distributed LeNet-based model that would be unfeasible if deployed monolithically. Comparative analysis shows the NAMP-derived solution minimises communication overhead, increasing inference efficiency.
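To make the partitioning idea concrete, here is a deliberately naive sketch of splitting a layered network into submodels that respect per-device memory budgets. The greedy strategy and the function name are illustrative assumptions, not NAMP's actual algorithm, which also weighs communication overhead between devices.

```python
def partition_layers(layer_mem, device_mem):
    """Greedily pack consecutive layers into submodels, one per device.

    layer_mem:  per-layer memory footprint of the network, in order
    device_mem: memory budget of each device, in deployment order
    Returns a list of (start, end) inclusive layer ranges, one per device used.
    Raises ValueError if the network cannot fit.
    """
    parts, start, dev, used = [], 0, 0, 0.0
    for i, m in enumerate(layer_mem):
        if dev >= len(device_mem) or m > device_mem[dev]:
            raise ValueError("network does not fit on the available devices")
        if used + m > device_mem[dev]:
            # Close the current submodel and move to the next device.
            parts.append((start, i - 1))
            start, dev, used = i, dev + 1, 0.0
            if dev >= len(device_mem) or m > device_mem[dev]:
                raise ValueError("network does not fit on the remaining devices")
        used += m
    parts.append((start, len(layer_mem) - 1))
    return parts
```

For example, four layers of sizes [4, 3, 5, 2] on devices with 8 units of memory each split into two submodels: layers 0-1 and layers 2-3.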
ABSTRACT. A brief opening to welcome participants, introduce the workshop on computationally aware algorithmic design for cyber-physical systems, and outline the day's agenda.
Scaling Safe Control Synthesis with Logical Specifications and Neuro-Symbolic Methods
ABSTRACT. Temporal logic has become an increasingly popular formalism for expressing safety and performance specifications of autonomous cyber-physical systems such as aerial and ground robots. Designing controllers for such applications is increasingly done in a data-driven fashion, where instead of obtaining a closed-form symbolic representation of the system dynamics, we either design controllers directly based on system simulations or from surrogate models trained on data obtained from the system. Furthermore, to handle the nonlinear and uncertain environment, controllers are often modeled as neural networks as well. Training a controller is often done using stochastic gradient descent techniques. In such a paradigm, it is important to effectively model the temporal logic-based specification in a way that is amenable to gradient-based computation. We will discuss how we can encode discrete-time Signal Temporal Logic (STL) specifications exactly using a neural network. We will show how our method is computationally efficient while preserving the semantics of the temporal logic specifications. We will discuss application of this technique to training controllers and verifying the closed-loop safety properties for the system.
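As a taste of why such encodings are gradient-friendly, discrete-time STL robustness for "always" and "eventually" can be written entirely with ReLU-style operations; this generic min/max sketch is my own illustration, not necessarily the speakers' exact neural-network construction.

```python
def relu(x):
    return max(x, 0.0)

def smin(a, b):
    # min expressed via ReLU: min(a, b) = a - relu(a - b)
    return a - relu(a - b)

def smax(a, b):
    # max expressed via ReLU: max(a, b) = a + relu(b - a)
    return a + relu(b - a)

def always(rho, lo, hi):
    """Robustness of G_[lo,hi] phi, given per-step robustness values rho."""
    out = rho[lo]
    for t in range(lo + 1, hi + 1):
        out = smin(out, rho[t])
    return out

def eventually(rho, lo, hi):
    """Robustness of F_[lo,hi] phi, given per-step robustness values rho."""
    out = rho[lo]
    for t in range(lo + 1, hi + 1):
        out = smax(out, rho[t])
    return out

# Predicate "x > 1.0" over a sampled trajectory: rho_t = x_t - 1.0
rho = [v - 1.0 for v in [0.5, 1.2, 1.8, 0.9]]
```

Because every operation above is a composition of sums and ReLUs, the whole robustness computation is exactly the kind of network that automatic differentiation handles, which is what makes specification-guided training via stochastic gradient descent workable.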
Perception, Control, and Planning under Communication and Computation Resource Constraints for Intelligent CPS
ABSTRACT. Future autonomous agents will operate collaboratively in large numbers to achieve redundancy and resiliency. Practical constraints require these agents to be small with limited computational and communication resources. However, current algorithms that emulate "human-like" decision-making and perception are data-intensive and computationally demanding. We argue that the prevailing data-intensive paradigm for autonomous CPS is neither relevant nor necessary for high-performance intelligent agents operating in complex, uncertain environments at high speeds. The human brain efficiently processes vast amounts of data by focusing computational resources on relevant aspects in a top-down, task-specific manner. In this talk, we will present our work on generating task-specific abstractions for joint perception/planning problems in robotics, minimizing cognitive overload by focusing on salient perceptual cues. We will demonstrate how these abstractions enable communication-aware planning for resource-limited robot teams. Finally, we will introduce an approach for communication- and computation-aware design in linear quadratic and multi-agent control problems using quantized measurements.
A Model-Driven Approach for Safety-Security Co-Analysis blending Formal Methods and Generative AI
ABSTRACT. When it comes to designing complex systems of systems, model-driven engineering is one of the best approaches, as it enables engineers to manage abstraction and perform formal reasoning on requirements, often through model transformations into formalisms that support model checking (e.g., timed automata or Petri nets). However, these engineering and verification processes come with a high entry cost for engineers and involve both time-intensive steps (such as modeling and iterative model modifications) and computationally expensive procedures (such as iterative verification). This talk will present a security-oriented model-driven engineering process that integrates SysML modeling, formal verification, optimized model-checking algorithms, and generative AI assistance to enhance its applicability in real-world engineering contexts. It will also discuss lessons learned from the research efforts behind the definition of this process, including theoretical contributions and tooling developments.
ABSTRACT. The growing interest in personalized medicine is set to transform conventional healthcare, offering new avenues for predictive analytics and tailored treatment approaches. In this talk, I will present our advancements in developing wearable biosensors for non-invasive molecular analysis. These wearables autonomously access and sample body fluids, such as sweat, wound exudate, and exhaled breath condensate, continuously monitoring a wide array of analytes—including metabolites, nutrients, hormones, proteins, and drugs—during various daily activities. To enable large-scale, cost-effective manufacturing of these high-performance nanomaterial-based sensors, we leverage techniques such as laser engraving, inkjet printing, and 3D printing. Our wearable systems' clinical applications are evaluated through human trials in areas like human performance monitoring, stress response and mental health assessment, precision nutrition, chronic disease management, and drug personalization. Furthermore, I will explore our efforts in energy harvesting from both the human body and the environment, paving the way for battery-free, wireless biosensing devices. This integration of wearable technologies has the potential to revolutionize personalized healthcare, spanning diagnostics, real-time monitoring, and therapeutic innovations.
MobiVital: Self-supervised Quality Estimation for UWB-based Contactless Respiration Monitoring
ABSTRACT. Respiration waveforms are increasingly recognized as important biomarkers, offering insights beyond simple respiration rates, such as detecting breathing irregularities for disease diagnosis or monitoring breath patterns to guide rehabilitation training. Previous works in wireless respiration monitoring have primarily focused on estimating respiration rate, where the breath waveforms are often generated as a by-product. As a result, issues such as waveform deformation and inversion have largely been overlooked, reducing the signal's utility for applications requiring breathing waveforms. To address this problem, we present a novel approach, MobiVital, that improves the quality of respiration waveforms obtained from ultra-wideband (UWB) radar data. MobiVital combines a self-supervised autoregressive model for breathing waveform extraction with a biology-informed algorithm to detect and correct waveform inversions. To encourage reproducible research efforts for developing wireless vital signal monitoring systems, we also release a 12-person, 24-hour UWB radar vital signal dataset, with time-synchronized ground truth obtained from wearable sensors. Our results show that the respiration waveforms produced by our system exhibit a 7-34% increase in fidelity to the ground truth compared to the baselines and can benefit downstream tasks such as respiration rate estimation.
Through-dressing Wound Monitoring Based on the mmWave Sensor
ABSTRACT. Wound assessment is crucial for monitoring healing, but traditional methods require removing gauze, which can disrupt healing and increase infection risk. We introduce WavelyVision, an over-gauze wound assessment system based on a mmWave sensor. It detects skin moisture, a key indicator of wound condition, by analyzing how mmWave signals change with moisture levels. To improve accuracy, WavelyVision uses a denoised imaging algorithm to reduce motion noise and separate skin signals from environmental interference. A physical model further enhances moisture estimation. Experiments show WavelyVision achieves high accuracy, with a moisture error of about 0.5% and an SSIM of about 0.9. These results demonstrate its potential for non-invasive wound monitoring.
Fine-grained Heartbeat Waveform Monitoring with RFID: A Latent Diffusion Model
ABSTRACT. Cardiovascular disease remains a leading cause of mortality, necessitating continuous heart rate monitoring for early detection and prevention. Contactless technologies have gained significant attention due to their noninvasive and user-friendly nature. This paper introduces a contactless heartbeat waveform monitoring system utilizing commercial off-the-shelf (COTS) radio frequency identification (RFID) devices. The system extracts heartbeat signals by calculating the phase difference between two RFID tags placed on the body. To separate overlapping heartbeat and respiratory signals, empirical mode decomposition (EMD) is employed. More importantly, we propose a latent diffusion model that combines a variational autoencoder (VAE) and a denoising diffusion probabilistic model (DDPM) to recover the fine-grained heartbeat waveform. Experimental results show that our system recovers heartbeat waveforms with a cosine similarity of 0.88 compared to the electrocardiogram (ECG) ground truth.
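To give a sense of the first processing step, differential phase between two tags can be computed as below. This is a simplified sketch assuming already-sampled per-tag phase streams; the EMD stage and the diffusion model are beyond this snippet, and the function names are my own.

```python
import math

def unwrap(phases, period=2 * math.pi):
    """Remove modulo-2*pi jumps so the phase sequence is continuous."""
    out = [phases[0]]
    for p in phases[1:]:
        d = p - out[-1]
        d -= period * round(d / period)  # fold the jump back into (-pi, pi]
        out.append(out[-1] + d)
    return out

def phase_difference(tag_a, tag_b):
    """Differential phase between two tags on the body.

    Subtracting the two unwrapped phase streams cancels common-mode
    effects (carrier drift, reader-side variation), while motion that
    displaces the tags differently, such as the heartbeat, remains.
    """
    a, b = unwrap(tag_a), unwrap(tag_b)
    return [x - y for x, y in zip(a, b)]
```

In practice, the phase readings would come from the RFID reader's per-tag channel reports; the subtraction is what makes the scheme robust to reader and environment drift that affects both tags equally.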
Keynote: Bringing AI Up to Speed – Designing for Pushing the Limits
ABSTRACT. Pushing autonomous systems to their operational limits reveals critical insights into the intersection of AI, real-time decision-making, and high-performance control. In this talk, I will discuss the design, architecture, and operational challenges behind Cavalier Autonomous Racing's world-record-holding autonomous Indy race car, which competes at the highest levels of AI-driven motorsport. We will explore the critical engineering decisions that enable autonomy at the edge of performance, balancing perception, planning, and control under extreme dynamic constraints. From high-fidelity simulation to real-world deployment, I will highlight the iterative co-design of software and hardware that allows an autonomous vehicle to operate at speeds exceeding 170 mph while maintaining safety and precision. The talk will also cover the unique challenges of system integration, from sensor fusion and real-time computing to vehicle dynamics and predictive control. Through this lens, autonomous racing offers insights that extend beyond the track, informing the design of next-generation AI-driven mobility solutions.
Simulation vs. Hallucination: Assessing Vision-Language Model Question Answering Capabilities in Engineering Simulations
ABSTRACT. Engineering simulations generate complex multimodal data that are crucial for design iteration and validation. The interpretation of these simulations traditionally requires significant domain expertise and cognitive effort. Recently, vision-language models (VLMs) have demonstrated impressive capabilities in general-domain multimodal reasoning tasks, offering the potential for automating simulation data interpretation. However, the effectiveness of these models in specialized engineering contexts remains largely unexplored. This paper presents an initial comparative evaluation of state-of-the-art VLMs on question answering tasks involving structural and fluid dynamics simulation data across three modalities: text, images, and videos. In doing so, we introduce a domain-specific benchmark dataset comprising true/false questions testing comprehension for engineering simulations. Unlike general-purpose multimodal benchmarks, our evaluation focuses specifically on the technical interpretation of engineering simulation outputs, requiring specialized domain knowledge and physical reasoning that is absent from broader multimedia assessments. Our results demonstrate that text modality yields substantially higher performance (up to 69.2% accuracy with GPT-4o, 66.3% with LLaVA) than visual inputs (52.9-55.8%), with GPT-4o, LLaVA, and Phi-3 exhibiting the strongest capabilities for text comprehension. Models on average performed better on structural analysis tasks than fluid dynamics problems, with minimal advantage observed for native video processing over batched image approaches. However, reliability is still far below that needed for engineering applications, highlighting significant challenges in applying current VLMs to the interpretation of engineering simulations.
Empirical Assessment of Graph Neural Network Convolution Operators for AC-OPF Learning
ABSTRACT. The AC Optimal Power Flow (AC-OPF) problem plays a crucial role in optimizing power generation dispatch to meet customer demand and also serves as a foundation for several downstream applications. However, the complexity and non-convex nature of the problem makes it a challenge for traditional physics-based methods to solve the AC-OPF problem in a timely manner. Machine learning models, particularly Graph Neural Networks (GNNs), have emerged as a promising solution, demonstrating superior performance across varying electricity transmission network topologies. Despite the availability of many GNN convolution operators, there is limited research comparing their effectiveness in solving the AC-OPF problem. This paper conducts an empirical evaluation of seven homogeneous GNN convolution operators, spanning a variety of graph data processing techniques. The operators are compared on the basis of losses (mean squared error and constraint violation loss), convergence rate, and computational cost (training time). The findings reveal that GINConv strikes the best balance between losses and computational cost, while the inclusion of edge features in the convolution operation generally improves model convergence behavior.
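The practical difference among such convolution operators largely comes down to their aggregation function. The weight-free, scalar-feature sketch below (my own illustration; a real comparison would use learned weights and a library such as PyTorch Geometric) contrasts mean aggregation with GIN's sum aggregation:

```python
def neighbors(edges, n):
    """Build an undirected adjacency list for n nodes."""
    adj = {i: [] for i in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    return adj

def mean_conv(x, edges):
    """GCN-flavoured layer: each node takes the mean of its neighbours."""
    adj = neighbors(edges, len(x))
    return [sum(x[j] for j in adj[i]) / max(len(adj[i]), 1)
            for i in range(len(x))]

def gin_conv(x, edges, eps=0.0):
    """GIN-flavoured layer: (1 + eps) * own feature + sum of neighbours.

    Sum aggregation preserves multiset information (e.g. node degree)
    that mean aggregation discards, which is one reason GIN-style
    operators can be more expressive on graph-structured data.
    """
    adj = neighbors(edges, len(x))
    return [(1 + eps) * x[i] + sum(x[j] for j in adj[i])
            for i in range(len(x))]
```

On a 3-node path graph with features [1, 2, 3], mean aggregation maps every node to 2, erasing the degree information, while the GIN-style sum keeps the nodes distinguishable.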
Keynote: Arguing at the Compliance Interface: Dispatches from Industrial Experience
ABSTRACT. Aircraft certification is based on behavioral predictability consistent with the safety intent of applicable regulations. To establish and agree on that requisite predictability, applicants show, and approvers find, compliance with recognized standards and additional negotiated means and methods. Thus, compliance is the context in which a safety assertion, and its often implicit supporting argument, is framed and evaluated in practice. Where an emerging assurance method, or an emerging capability, is not accounted for in the existing compliance context, new arguments must be brought forth, and their integration into the compliance context must be worked out, raising numerous technical, regulatory, and practical considerations. This talk offers observations collected from industrial experience in creating and negotiating these arguments that enable both the assurance of new capabilities, and the use of new methods of compliance in associated approvals.
The Power Struggle: Balancing Peripherals & Programmer Effort in Batteryless Sensing
ABSTRACT. Batteryless, energy-harvesting devices enable deeply embedded sensing and
computing deployments without the size, weight, or maintenance constraints of
batteries. The smallest of these devices harvest energy into a capacitor
bank to support short bursts of execution at a time, with power failures in
between. Peripheral sensors and radios are essential to batteryless device
deployments: they allow a device to sense and report information about its
surroundings. However, peripherals trigger concurrent accesses to memory and
account for a large percentage of the total device energy budget. The challenge
for programmers is that a tight coupling emerges between device hardware
characteristics and application performance. A software developer needs to have
an understanding of both to write programs that behave as expected. This talk
will discuss strategies to reduce the burden of integrating peripherals into
batteryless applications by managing the shared state between peripherals and
the primary microcontroller.
The transition to Software-Defined Vehicles (SDVs) necessitates a shift from distributed to more centralized, high-performance zonal E/E architectures, driven by the requirements for continuous updates, better user experience, and advanced functionalities. This transformation complicates the integration of safety-critical applications due to the rapidly increasing demands on software and hardware. Traditional OEMs, relying on various suppliers, face challenges in deployment with frequent updates, resource optimization, and managing different ECU/vehicle variants. Many existing solutions lack flexibility, leading to increased testing effort, unreliable critical functions, and timeline delays. Addressing these issues requires a formal description of software functions and communication, automation in deployment, the separation of safe and unsafe software, predictable resource utilization, and dynamic configuration of communication networks with end-to-end guarantees. This event invites presentations aiming at solutions to these challenges, as well as to broader issues that safety-critical and real-time systems encounter in the automotive industry.
Keynote: Middleware for Autonomous Driving Systems
ABSTRACT. As the automotive industry accelerates toward fully autonomous driving, the role of middleware becomes increasingly critical. This keynote explores the evolving landscape of automotive middleware as the foundational software layer that enables communication, scheduling, and computation across diverse vehicle subsystems. We will examine the architectural challenges in supporting real-time data exchange, safety compliance, and heterogeneous hardware integration in autonomous vehicles. The presentation highlights key industry standards, recent advancements in middleware platforms, and their impact on system reliability, scalability, and interoperability.
Managing MPSoC Memory Interference on SDV Architectures
ABSTRACT. The next generation of zonal and central automotive computing units requires the adoption of high-performance, low-power heterogeneous multiprocessor systems-on-chip (MPSoCs). Through the integration of CPUs, GPUs, AI accelerators, and real-time cores, MPSoCs provide unprecedented computation capabilities at efficient power levels. Unsurprisingly, therefore, they are becoming ubiquitous in safety-critical domains such as software-defined vehicles (SDVs) and autonomous systems. MPSoCs (e.g., AMD Ultrascale+ ZCU102) rely on a shared memory hierarchy to maximize efficiency and performance, but this introduces contention among processing elements. Such contention increases the timing variability of real-time tasks that must compete at the cache, interconnect, and DRAM levels. These effects endanger mixed-criticality systems, where timing guarantees for safety-critical applications can be affected by aggressive best-effort workloads. In addition, interference might violate the separation properties that are the foundation of many certification processes. To manage the complexity of modern MPSoCs, hardware vendors have introduced several quality-of-service (QoS) mechanisms (e.g., Intel RDT, Arm MPAM) that allow system integrators to monitor and optimize performance. However, as highlighted in recent studies, correctly configuring these hardware regulation mechanisms is a major challenge due to their complexity, interactions, lack of widespread adoption, and insufficient documentation. Instead, several studies have leveraged the monitoring capabilities provided by MPSoCs (PMUs) to develop software-based regulation strategies that have proven effective in mitigating certain types of memory interference. Nonetheless, given the number of control knobs and their complex interactions, finding the best configurations to achieve isolation and performance remains a major challenge. This talk addresses solutions to these challenges.
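The software-based regulation strategies mentioned above typically follow a MemGuard-style pattern: replenish a per-core memory-access budget at each regulation period and stall the core once the budget is exhausted. The toy simulation below illustrates the idea for a single core (budget and period values are illustrative, and this is a model of the policy, not a real PMU-driven implementation):

```python
def regulate(accesses, budget, period):
    """Simulate per-period memory-access budgeting for one core.

    accesses: memory accesses the core would issue in each time slot
    budget:   accesses allowed per regulation period
    period:   number of time slots per regulation period
    Returns (serviced, throttled): accesses actually issued per slot,
    and the indices of slots where the core was stalled.
    """
    serviced, throttled = [], []
    remaining = budget
    for t, a in enumerate(accesses):
        if t % period == 0:
            remaining = budget  # replenish at the period boundary
        grant = min(a, remaining)  # only issue what the budget allows
        remaining -= grant
        serviced.append(grant)
        if grant < a:
            throttled.append(t)  # core stalls for the rest of the period
    return serviced, throttled
```

A real implementation would program a PMU counter overflow interrupt to detect budget exhaustion and park the offending core until the next period, bounding the interference it can inflict on critical tasks.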
This tutorial introduces Playground, an open-source "safe" operating system (OS) abstraction for buildings that enables the execution of untrusted, multi-tenant applications in modern buildings. Playground is integrated with the Brick representation of the underlying buildings and features flexible and extensible access control and resource isolation mechanisms. This tutorial will provide a detailed walkthrough of the system design of Playground and relevant background with multiple hands-on exercises.
The overall theme of this tutorial is on designing formal verification and control algorithms for learning-enabled cyber-physical systems (LE-CPSs) with practical safety guarantees by using conformal prediction.
The 4th student design competition on Networked Computing on the Edge. This competition invites student teams of all levels to develop and demonstrate innovative projects on the topic of networked computing for edge applications. Projects integrating computing, control, and communication components on ground, underwater, or aerial mobile platforms are welcome. Topics of interest include, but are not limited to, unmanned aerial vehicle (UAV) networks, urban aerial mobility, autonomous driving, edge computing, and human-machine interfaces. Projects on the development of UAV applications are especially encouraged.
In this first session, “Making your code do the talking,” participants will learn to use tools for managing and documenting their code, running it effectively, improving its readability, simplifying its setup, and preparing it for public release.
ABSTRACT. CPS code and data are, by nature, tightly coupled to specific software and hardware. This tutorial aims to address key challenges in CPS, including: (1) sharing and executing code and handling data across diverse software and hardware environments, and (2) accelerating testing and validation processes using virtual testbeds, eliminating the need for physical setups. In this session, participants will review the state of reproducibility and learn how to organize and document their code repositories.
Toward Foundation Models for Online Complex Event Detection in CPS-IoT: A Case Study
ABSTRACT. Complex events (CEs) play a crucial role in CPS-IoT applications, enabling high-level decision-making in domains such as smart monitoring and autonomous systems. However, most existing models focus on short-span perception tasks, lacking the long-term reasoning required for CE detection. CEs consist of sequences of short-time atomic events (AEs) governed by spatiotemporal dependencies. Detecting them is difficult due to long, noisy sensor data and the challenge of filtering out irrelevant AEs while capturing meaningful patterns. This work explores CE detection as a case study for CPS-IoT foundation models capable of long-term reasoning. We evaluate three approaches: (1) leveraging large language models (LLMs), (2) employing various neural architectures that learn CE rules from data, and (3) adopting a neurosymbolic approach that integrates neural models with symbolic engines embedding human knowledge. Our results show that the state-space model, Mamba, which belongs to the second category, outperforms all methods in accuracy and generalization to longer, unseen sensor traces. These findings suggest that state-space models could be a strong backbone for CPS-IoT foundation models for long-span reasoning tasks.
Towards Zero-shot Question Answering in CPS-IoT: Large Language Models and Knowledge Graphs
ABSTRACT. Natural language provides an intuitive interface for querying data, yet its unstructured nature often makes precise retrieval of information challenging. Knowledge graphs (KGs), with their structured and relational representations, offer a powerful solution to structuring knowledge, while large language models (LLMs) are capable of interpreting user intent through language. This combination of KGs and LLMs has been explored extensively for Knowledge Graph Question Answering (KGQA), though most approaches focus on open-domain or encyclopedic knowledge. However, domain-specific KGQA presents significant opportunities for Cyber-Physical Systems (CPS) and the Internet of Things (IoT), where extraction of structured metadata is essential for automation and scalability of control and analytic applications.
In this work, we evaluate and improve AutoKGQA, a domain-independent KGQA framework that utilizes LLMs to generate structured queries. Through a case study on KGs of sensor data from buildings, we assess its ability to retrieve time series identifiers, which are a prerequisite for extracting time series data from large sensory databases. Our results demonstrate that, while AutoKGQA performs well in certain cases, its domain-agnostic approach leads to systematic failures, particularly in complex queries requiring implicit knowledge. We show that domain-specific prompting significantly enhances query accuracy, allowing even smaller LLMs to perform on par with larger ones. These findings highlight the impact of domain-aware prompting in KGQA (DA-KGQA) and suggest a path toward more efficient, scalable, and interpretable AI-driven metadata retrieval for CPS-IoT applications.
Exploring the Capabilities of LLMs for IMU-based Fine-grained Human Activity Understanding
ABSTRACT. Human activity recognition (HAR) using inertial measurement units (IMUs) increasingly leverages large language models (LLMs), yet existing approaches focus on coarse activities like walking or running. Our preliminary study indicates that pretrained LLMs fail catastrophically on fine-grained HAR tasks such as air-written letter recognition, achieving only near-random guessing accuracy. In this work, we first bridge this gap for flat-surface writing scenarios: by fine-tuning LLMs with a self-collected dataset and few-shot learning, we achieved up to a 129x improvement on 2D data. To extend this to 3D scenarios, we designed an encoder-based pipeline that maps 3D data into 2D equivalents, preserving the spatiotemporal information for robust letter prediction. Our end-to-end pipeline achieves 78% accuracy on word recognition with up to 5 letters in mid-air writing scenarios, establishing LLMs as viable tools for fine-grained HAR.
Towards Trustworthy XR: Safety, security, and privacy concerns in XR in the era of AI
ABSTRACT. The convergence of artificial intelligence (AI) and extended reality (XR) technologies (AI XR) holds great promise for innovative applications in many domains. However, the sensitive nature of data used by these AI XR applications introduces significant security and privacy concerns. In this talk, I will present the safety, security, and privacy challenges in AI XR applications and principles from our recent efforts on how to address these challenges. I will conclude with a discussion of the open problems and opportunities and outline areas for defensive research in the future.
Toward Mobile AI Systems with Physical World Perception
ABSTRACT. AI tools like ChatGPT have sparked a shift in how we envision personalized AI agents—systems capable of assisting humans across both digital and physical domains. Realizing this vision on mobile and embedded platforms introduces several challenges: (1) accurate perception and understanding of the physical world, (2) efficient execution on resource-constrained devices, and (3) responsible handling of user data and interaction context.
In this talk, I will discuss these challenges through the lens of mobile AI and sensing systems, particularly focusing on our work using smartphones for human behavior understanding. I will present practical approaches for perception-driven AI, lightweight model design, and federated learning in sensing, offering insights into building intelligent, context-aware systems that operate reliably in the real world.
Attacking mmWave Sensing with Meta-material-enhanced Tags
ABSTRACT. As mmWave sensing becomes a cornerstone of robotics, autonomous vehicles and smart infrastructure, its security vulnerabilities remain largely unaddressed. In this talk, I will present MetaWave—a stealthy, low-cost attack framework that uses metamaterial-enhanced tags to deceive mmWave sensors by either hiding real objects or generating fake ones. Unlike traditional spoofing techniques that rely on costly RF equipment, MetaWave leverages easily-fabricated, passive materials to perform Vanish and Ghost attacks with remarkable effectiveness. Achieving over 90% success rates in range, angle, and speed spoofing, MetaWave exposes a critical and overlooked weakness at the physical layer. This talk will uncover the mechanics of the attack, its broader implications for safety and privacy, and potential strategies for defense.
mmVanish: Extending the Vanish Attack for Multi-Radar Exploitation of mmWave Sensing with Meta-material Tags
ABSTRACT. Millimeter-wave (mmWave) radar systems are widely used for object detection and tracking in critical applications, but recent research has exposed their vulnerability to vanish attacks. This paper extends the vanish attack – previously demonstrated on a single radar – to multiple radar systems operating at 24GHz, 66GHz, and 77GHz. We utilize meta-material-enhanced tags to interfere with radar signals passively, effectively rendering real objects undetectable. We describe a methodology for designing and deploying these low-cost meta-material tags and evaluate the attack’s effectiveness across different radar frequencies. Experimental results show that the meta-material tags successfully cause objects to “disappear” from all three radar systems, drastically reducing detection accuracy at each frequency. The study’s contributions include demonstrating the generalizability of the vanish attack across diverse mmWave radar bands, analyzing performance differences by frequency, and discussing countermeasures. Our findings confirm that meta-material tag attacks pose a serious threat to mmWave sensing, underscoring the need for improved radar security and multi-sensor defenses.
Invited talk: Trusted Time for Untrusted Edge Systems
ABSTRACT. Time provides a network of computing devices in edge systems with the capability to timestamp, schedule, coordinate, and order events. This pervasive use of time makes it a lucrative target that offers attackers plentiful incentives. Even secure applications within shielded execution environments can be manipulated without a trustworthy clock source. This paper addresses the security of a fundamental primitive, a notion of time, that is essential for the safe and correct operation of all hardware, software, and networked systems. In the era of zero-trust environments, where privileged entities such as an operating system lie outside the trusted computing base, we note that all components in a time stack are vulnerable, which widens the attack surface for time-based services and gives rise to novel timing attacks. This research provides a comprehensive design of a secure, precise, resource-efficient, and extensible timing architecture for edge systems in the presence of new timing vulnerabilities. This requires new system design, computation paradigms, algorithms, and network protocols to achieve the overarching vision for a secure time architecture.
Invited talk: AI for Cyber-Physical Systems: Identifying the Valuable Gaps
ABSTRACT. AI is rapidly changing the way we interact with and reason about digital data, but it is making far slower advances in how we interact with the physical world. Using AI to reason about cyber-physical systems helps to close this gap, but simply transferring existing models and techniques into real-time systems will, at best, yield underwhelming results. This talk will define several incompatibilities between current mainstream AI workflows and the requirements of cyber-physical systems, including system interfaces, data availability, and real-time requirements. Using factory automation systems as a recurring example, this talk will explore the feasibility of different approaches and propose several research questions that would improve the usability of AI for real-time, edge systems.
ABSTRACT. Virtualizing I/O devices presents unique challenges compared to other system resources. Traditional approaches rely on software-based abstraction layers, which can be particularly complex to develop and often fail to provide guests with efficient access to the underlying hardware. The most common solutions involve either dedicating an I/O device exclusively to a single guest or modifying/patching the guest software to utilize hardware-level I/O multiplexing. This paper introduces I/O Softwareless Nano-Virtualization (IO-SNV), a novel approach that achieves efficient and transparent I/O virtualization with minimal overhead and no software modifications. IO-SNV operates entirely at the hardware level, leveraging programmable logic to dynamically virtualize I/O devices while maintaining high performance. We present the conceptual model, a proof-of-concept implementation, and an evaluation of its feasibility. Our promising results demonstrate that IO-SNV can provide seamless I/O virtualization while preserving device access efficiency, making it a compelling alternative to existing software-centric solutions.
Adaptive Intrusion Mitigation in Software-Defined Vehicles Using Deep Reinforcement Learning
ABSTRACT. Software-defined vehicles (SDVs) leverage vehicle-to-everything (V2X) communication to enable advanced connectivity and autonomous driving capabilities. However, this increased interconnectivity also exposes them to cyber threats such as spoofing, denial-of-service attacks, and data manipulation, making intrusion detection systems (IDS) essential for ensuring SDV security and reliability. In this work, we propose a novel intrusion mitigation approach that integrates Advantage Actor-Critic (A2C) reinforcement learning with a Long Short-Term Memory (LSTM) network to detect anomalies and intrusions in V2X communications. The LSTM component captures temporal dependencies in V2X data, enhancing the model’s ability to identify emerging attack patterns, while the A2C framework dynamically adjusts defensive actions, including flagging, blocking or monitoring traffic, based on evolving threat levels. Experimental results demonstrate the model’s effectiveness, achieving high detection accuracy and sensitivity. Additionally, we analyze how the system adapts over time, becoming more confident in its decision-making and optimizing security enforcement. This work enhances SDV cybersecurity by introducing a learning-based adaptive intrusion response system aiming at mitigating threats in highly dynamic vehicular networks.
ABSTRACT. Recent groundbreaking advances in computer vision, AI, and control theory have revolutionized the capabilities of autonomous robots in accomplishing diverse tasks in unknown environments, such as package delivery, mapping of underground environments, surveillance, and environmental monitoring. However, with the rise of AI-enabled autonomous systems, new challenges regarding safety and reliability have surfaced. This talk particularly focuses on the safe integration of AI-enabled components for perception and natural language processing in autonomous systems.
In the first part of the talk, I will present a safe perception-based mission planning algorithm designed for teams of mobile robots, equipped with AI-enabled perception systems, that operate in uncertain semantic environments. This algorithm enables robots to complete high-level semantic tasks (e.g., surveillance, delivery, etc.), expressed using formal languages, with user-specified probability, by actively mitigating environmental uncertainty stemming from imperfect perception. In the second part of the talk, I will discuss how this planner can be extended to account for tasks described in natural language (NL). Specifically, I will present a translation algorithm that uses pre-trained Large Language Models (LLMs) to convert NL instructions into formal specifications, which can then be used by existing planners. To enhance reliability, our approach employs statistical tools to quantify uncertainty of LLMs, providing probabilistic guarantees on translation correctness. Several case studies that include aerial, wheeled, and legged robots will be presented to demonstrate the proposed algorithms.
Towards Verified Visual Autonomy: Perception Contracts and Abstract Rendering
ABSTRACT. We address the challenge of verifying vision-based cyber-physical systems, where uncertainty in perception has long hindered formal guarantees. These systems combine modules for perception, decision-making, and control, and verifying their correctness requires tracking how sets of states propagate through the system while preserving certificates such as invariants or Lyapunov functions. While formal methods like reachability analysis can rigorously handle control and dynamics, perception components—especially image classification and rendering—have resisted formal treatment. We present perception contracts, a framework that combines formal and statistical reasoning to enable verification across sensing and perception layers. Applications in lane-keeping and autonomous landing demonstrate its effectiveness. We further introduce abstract rendering, a method for uncertainty propagation in neural scene representations (e.g., NeRFs, Gaussian splats) via compositional linear bound propagation. This approach enables end-to-end formal verification of tasks such as classification, pose estimation, and visual control. Together, we believe that these advances pave the way for a computational approach for certifiable visual autonomy.
Adapting linear Hopf reachability analysis for scalable analysis and control synthesis for nonlinear differential games
ABSTRACT. Hamilton-Jacobi (HJ) Reachability analysis is a powerful tool for solving differential games with bounded inputs; it can provide safety and liveness guarantees for each player and the corresponding optimal control law. However, control theoretic approaches to solving nonlinear differential games struggle with the “curse of dimensionality.”
Recently, the applied math community has been exploring the use of the Hopf formula for efficiently solving linear differential games with bounded inputs. We will show how we can lift a nonlinear game to a linear space wherein we can bound linearization error. We can then treat this error as an adversary in a linear game solvable by the Hopf formula, with results that can map back to the original space for conservative guarantees on the true game.
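As background, one standard statement of the Hopf formula (valid for convex initial data J and a state-independent Hamiltonian H; the talk's exact formulation may differ) reduces the HJ solve to a pointwise finite-dimensional optimization:

```latex
% HJ equation: \partial_t \varphi + H(\nabla_x \varphi) = 0, \quad \varphi(x,0) = J(x)
% Hopf formula (J convex, J^* its Legendre--Fenchel conjugate):
\varphi(x,t) = \sup_{p \in \mathbb{R}^n} \big\{ \langle x, p \rangle - J^*(p) - t\,H(p) \big\}
```

Because the optimization runs over the costate p rather than a grid on the state space, the cost scales far more gently with dimension, which is what makes bounding the linearization error and treating it as an adversarial input in the lifted linear game attractive.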
Time-permitting, we will also touch on recent work to reduce learning errors in physics-informed neural networks (PINNs) for solving HJB equations. This work was recently nominated for the Best Paper Award at the Learning for Dynamics and Control (L4DC) conference.
Adversarial Sample Generation for Anomaly Detection in Industrial Control Systems
ABSTRACT. Machine learning (ML)-based intrusion detection systems (IDS) are vulnerable to adversarial attacks. It is crucial for an IDS to learn to recognize adversarial examples before malicious entities exploit them. In this paper, we generated adversarial samples using the Jacobian Saliency Map Attack (JSMA). We validate the generalization and scalability of the adversarial samples to tackle a broad range of real attacks on Industrial Control Systems (ICS). We evaluated the impact by assessing multiple attacks generated using the proposed method. The model trained with adversarial samples detected attacks with 95% accuracy on real-world attack data not used during training. The study was conducted using an operational secure water treatment (SWaT) testbed.
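To make the JSMA step described above concrete, here is a minimal NumPy sketch (not the paper's implementation; the toy Jacobian values and the perturbation size theta are purely illustrative):

```python
import numpy as np

def jsma_step(jacobian, x, target, theta=0.1):
    """One Jacobian Saliency Map Attack step (sketch): perturb the single
    feature with the highest saliency toward the target class.

    jacobian: (n_classes, n_features) matrix of dF_j/dx_i evaluated at x.
    """
    grad_t = jacobian[target]                    # dF_target/dx_i
    grad_rest = jacobian.sum(axis=0) - grad_t    # summed over all other classes
    # A feature is salient only if it pushes the target class up while
    # pushing the combined remaining classes down.
    saliency = np.where((grad_t > 0) & (grad_rest < 0),
                        grad_t * np.abs(grad_rest), 0.0)
    i = int(np.argmax(saliency))
    x_adv = x.copy()
    x_adv[i] = np.clip(x_adv[i] + theta, 0.0, 1.0)  # keep feature in [0, 1]
    return x_adv, i

# Toy 3-class, 4-feature Jacobian (illustrative values only).
J = np.array([[ 0.2, -0.1,  0.0,  0.3],
              [ 0.1,  0.4,  0.9, -0.2],
              [-0.3, -0.5, -0.6,  0.1]])
x_adv, picked = jsma_step(J, x=np.zeros(4), target=1)
```

In a full attack, this step would be iterated until the ICS classifier flips or a distortion budget is exhausted.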
ABSTRACT. Mitigating side-channel vulnerabilities in software is becoming increasingly difficult. The current state of the art in side-channel resistance is speculative constant-time compliance. However, real-world compilers do not yet provide sufficient means to achieve this. Research has mostly focused on preserving this property during compilation, and on type-level secrecy to automatically generate compliant machine code. Meanwhile, recent developments in processors, such as data-dependent prefetchers, continue to introduce secret-dependent behavior independent of the algorithm. Unlike most previous side channels, some of these features can be temporarily disabled through environmental mitigations. At the same time, as embedded processors increasingly implement optimizations such as speculative execution, concerns previously limited to server and workstation hardware now affect the cyber-physical and Internet of Things domains. In this paper, we propose to augment existing secret-type systems with secret-memory semantics, and with secrecy code sections to automatically apply environmental mitigations. Based on current research on side-channel vulnerabilities, we conclude that OS support must be leveraged to manage secret memory and to apply these mitigations. Applying them only on demand allows secret-independent code to take advantage of novel hardware optimizations, thus avoiding a general security-performance trade-off.
Activity Recognition using RF and IMU Sensor Data Fusion
ABSTRACT. In this paper, we evaluate the potential of radio frequency data present in Bluetooth Low Energy (BLE) wireless signals to complement and improve Human Activity Recognition (HAR). HAR is prevalent in several applications catering to health, fitness, and well-being, driving the demand for robust, low-cost, and efficient sensing systems. We utilized on-body wireless sensors to leverage the received signal strength indicator (RSSI) and inertial measurement unit (IMU) information to gain deeper insight into fine-grained relative positioning that reflects hard-to-distinguish motion patterns. Our analysis shows a detection accuracy of 91.91% with IMU data alone for up to 13 different health and fitness activities, which improves to 94.54% when combining RF data with the IMU data. The data was collected from 42 participants wearing 5 wireless sensors.
Unsupervised Deep Clustering for Human Behavior Understanding
ABSTRACT. We propose Compressed-Pseudo-Temporal Enhanced Representation Learning (C-PTER), a novel unsupervised clustering framework for human-centered behavior analysis. With the growing prevalence of wearables, smartphones, and IoT devices, vast amounts of human activity data are collected in real-world settings, yet traditional supervised learning approaches require extensive manual labeling, making them impractical for large-scale deployment. Existing deep clustering methods, such as autoencoder-based approaches, often fail to capture temporal dependencies and struggle with noisy sensor readings, leading to suboptimal clustering performance. In contrast, C-PTER integrates pseudo-temporal feature extraction with a parallel CNN-LSTM autoencoder, enabling robust spatial-temporal representation learning. By leveraging compressed feature extraction, our method enhances cluster compactness and inter-cluster separation, significantly improving clustering performance on real-world human activity datasets. We demonstrate that C-PTER outperforms both classical (k-means) and deep clustering baselines (DSC) across three Inertial Measurement Unit (IMU) benchmark datasets (MUser, UCI HAR, MHEALTH), achieving up to 30% improvement in normalized mutual information (NMI) and 21% in accuracy (ACC). These results validate C-PTER as a scalable and effective solution for unsupervised clustering of human behavior.
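As a reference point for the reported metric, NMI can be computed directly from two label assignments; below is a plain-NumPy sketch using the common 2·I(U;V)/(H(U)+H(V)) normalization (the paper may use a different variant):

```python
import numpy as np

def nmi(labels_true, labels_pred):
    """Normalized mutual information between two clusterings:
    NMI = 2*I(U;V) / (H(U) + H(V)); equals 1.0 for identical partitions
    (up to relabeling) and 0.0 for independent ones."""
    u = np.asarray(labels_true)
    v = np.asarray(labels_pred)
    cu, cv = np.unique(u), np.unique(v)
    # Joint distribution over (true cluster, predicted cluster).
    p = np.array([[np.mean((u == a) & (v == b)) for b in cv] for a in cu])
    pu, pv = p.sum(axis=1), p.sum(axis=0)
    entropy = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))
    mask = p > 0
    mi = np.sum(p[mask] * np.log(p[mask] / np.outer(pu, pv)[mask]))
    return 2.0 * mi / (entropy(pu) + entropy(pv))
```

Note that NMI is invariant to cluster relabeling, which is why it is a standard score for unsupervised methods.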
Human-Centered Gait Balance Estimation Using Footstep-Induced Floor Vibrations
ABSTRACT. Gait balance is a critical indicator of neurological and neuromuscular conditions, such as stroke and Parkinson’s disease. Estimating multiple balance features (e.g., the left/right pressure balance in the forefoot, rearfoot, and entire foot) often provides clinicians with insights that facilitate more strategic and personalized treatment plans. Existing approaches that primarily rely on pressure mats, wearable devices, camera-based systems, or direct clinical observation often impose constraints on user experience, such as discomfort, the need for active participation, and potential privacy issues. In contrast, recent advances have demonstrated that floor vibrations captured by geophone sensors can effectively reflect gait dynamics.
In this paper, we introduce a human-centered sensing method for estimating multiple gait balance features using floor vibrations generated by human walking. Our approach is non-intrusive and easily deployable, offering significant user experience advantages compared to traditional methods. The key research challenges in developing this method include: (1) the difficulty of capturing the complex relationships between floor vibrations and gait characteristics, and (2) task interference (i.e., conflicting parameter updates when simultaneously estimating multiple gait balance features). To address the first challenge, we leverage human walking biomechanics to develop a dual-level self-attention network that captures features representing various gait phases. To mitigate task conflicts in the multi-task model, we design both shared layers and task-specific branches in our neural network model, enabling it to extract common features while preserving task-specific characteristics. We evaluated our system through real-world walking experiments with 14 participants. Our system achieved a gait balance classification accuracy of 98.1% and improved total pressure estimation by about 16% over baseline methods. This demonstrates that our human-centered balance estimation system provides reliable results while being user-friendly.
Wireless Sensing of Gait for Neurodegenerative Disease Assessment: A Scoping Review
ABSTRACT. Early diagnosis of neurodegenerative diseases is a significant public health challenge, yet crucial for better prognoses. Gait analysis accomplishes this task by detecting abnormal motion or declining motor control, which are strong early indicators of neurodegeneration. A new healthcare paradigm, shifting from clinical settings to home-based approaches, promotes less intrusive and privacy-preserving monitoring solutions. In this context, wireless gait analysis methods, such as radars and commercial cameras, are well-suited for home-based neurodegenerative disease assessment. These non-contact sensing technologies, which do not require physical markers or direct interaction with the subject or environment, offer a less cumbersome alternative to traditional methods while maintaining reliability. We conducted this scoping review to examine wireless sensing solutions for gait analysis in neurodegeneration assessment. In accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses for Scoping Reviews (PRISMA-ScR) guidelines, searches were conducted on the Scopus, PubMed, IEEE, and ACM databases. Of the 139 articles retrieved, 16 were included in this review and evaluated for sensor choices, gait features, means of analysis, and the investigated condition. Studies indicated that RGB/depth cameras and radars were effective means of capturing real-time gait data. Step length and speed were found to be the most accurate discriminators among the gait parameters considered, regardless of the sensing approach chosen. Various descriptive statistical methods and data-driven models have been explored as analytical tools for assessing neurodegeneration-affected gait. By evaluating these approaches to wireless sensing-based gait analysis, we discuss and highlight challenges and opportunities for future research.
Mitigating Sensor Data Bias from User Operational Variability via Causal Intervention
ABSTRACT. Tactile sensing is essential for human monitoring because people unconsciously touch ambient surfaces all the time. Recent development in multiplexed pressure sensor matrices enables scalable tactile sensing platforms. However, these multiplex designs can introduce sensor data biases when people interact at different locations or orientations, significantly limiting scalable, real-world deployments.
This work addresses this limitation through a causal inference framework that systematically mitigates this data bias without requiring extensive calibration or specialized hardware. By formulating the placement location as a confounding factor, our backdoor-adjustment-based solution learns a debiased representation of the sensor data, enabling accurate human interaction inference across diverse user behaviors. We demonstrate the efficacy of our approach through a case study on a pressure sensor matrix-based sensing system. The proposed solution achieves over 20% improvement in F1 score compared to baseline methods, while remaining lightweight enough to enable fully on-edge continuous model adaptation.
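The backdoor adjustment mentioned above has a simple computational form: P(y | do(x)) = Σ_z P(y | x, z) P(z), where z is the confounding placement location. A small NumPy sketch with hypothetical counts (not the paper's data) shows how it differs from naive conditioning:

```python
import numpy as np

# Hypothetical joint counts over (placement location z, binned reading x,
# interaction label y); z plays the confounder role from the abstract.
counts = np.array([[[30., 10.],
                    [ 5., 25.]],   # z = 0
                   [[ 8., 22.],
                    [20., 10.]]])  # z = 1

p_z = counts.sum(axis=(1, 2)) / counts.sum()               # P(z)
p_y_given_xz = counts / counts.sum(axis=2, keepdims=True)  # P(y | x, z)

def p_y_do_x(x, y):
    """Backdoor adjustment: P(y | do(x)) = sum_z P(y | x, z) P(z)."""
    return float(np.sum(p_y_given_xz[:, x, y] * p_z))

# Naive conditional P(y | x), which ignores the confounder entirely.
xy = counts.sum(axis=0)
p_y_given_x = xy / xy.sum(axis=1, keepdims=True)
```

With these counts the adjusted and naive probabilities disagree, which is exactly the bias the paper's debiased representation is meant to remove.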
Touchless Restroom Monitoring: A Privacy-Preserving System for Patient Care
ABSTRACT. With the growing demand for continuous home monitoring in healthcare, developing an effective reminder system has become crucial. Restroom monitoring is a vital application for logging urination and defecation to prevent excretion issues while maintaining privacy and convenience. We introduce a touchless restroom monitoring system designed to preserve privacy and enhance patient care. Our solution is scalable, non-intrusive, and equipped with strong privacy protection, while supporting multiple communication protocols, timely reminders, and adaptability to various restroom layouts. To assess its performance, we conducted a preliminary experiment demonstrating that our system can accurately infer real-time activities and recognize different movements, making it a valuable tool for managing restroom-related tasks. Additionally, the touchless sensing system offers a versatile solution for other monitoring and reminder systems that focus on tracking presence and activity while prioritizing privacy preservation. The system design is available online: https://github.com/utsanpb2-234/Reminder-IoT.
Invited talk: Signal Temporal Logic-based Motion planning for Multi-Robot Systems with Complex Objectives
ABSTRACT. Safe planning and control of multi-robot systems performing complex tasks has been a challenging problem. Methods that offer guarantees on safety and mission satisfaction generally do not scale well. On the other hand, more computationally tractable approaches do not offer much in terms of safety guarantees. In this talk, I will present a family of robust and predictive motion planning and control methods that overcome these limitations for a wide variety of task objectives, represented using Signal Temporal Logic (STL). Starting from the given STL specification, we formulate a non-convex optimization problem, which can be efficiently solved to local optimality in both centralized and decentralized manners. We also formulate constraints that result in trajectories that can be tracked near-perfectly by off-the-shelf lower-level controllers. The performance and scalability of the methods will be demonstrated through multi-robot simulation studies and experiments on quadrotor aerial robots and non-holonomic ground robots. Finally, I will present ongoing work on extending these methods to systems with partially known dynamics.
Finding Unknown Unknowns using Cyber-Physical System Simulators
ABSTRACT. Simulation-based approaches are among the most practical means to search for safety violations, bugs, and other unexpected events in cyber-physical systems (CPS). Where existing approaches search for simulations violating a formal specification or maximizing a notion of coverage, in this work we propose a new goal for testing: to discover unknown rare behaviors by examining discrete mode sequences. We assume a CPS simulator outputs mode information, and strive to explore the sequences of modes produced by varying the initial state or time-varying uncertainties. We hypothesize that rare mode sequences are often the most interesting to a designer, and we develop two accelerated sampling algorithms that speed up the process of finding such sequences. We evaluate our approach on several benchmarks, ranging from synthetic examples to Simulink diagrams of a CPS, demonstrating in some cases a speedup of over 100x compared with a random sampling strategy.
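The mode-sequence exploration idea above can be sketched in a few lines. The following is a toy illustration, not the paper's algorithm: the simulator, the rarity-biased refinement rule, and all parameters are hypothetical stand-ins:

```python
import random
from collections import Counter

def simulate(x0):
    """Toy hybrid-system stand-in (hypothetical): maps an initial state
    x0 in [0, 1] to the discrete mode sequence of one simulation run.
    A narrow band of initial states triggers a rare extra mode."""
    seq = ['A', 'B' if x0 < 0.5 else 'C']
    if 0.49 < x0 < 0.51:              # rare switching behavior
        seq.append('RARE')
    return tuple(seq)

def explore_mode_sequences(n_random=200, n_refine=200, eps=0.05, seed=0):
    """Random exploration, then biased re-sampling near the initial
    states whose mode sequence was observed least often."""
    rng = random.Random(seed)
    runs = []
    for _ in range(n_random):
        x = rng.random()
        runs.append((x, simulate(x)))
    freq = Counter(seq for _, seq in runs)
    rarest = min(freq, key=freq.get)
    seeds = [x for x, seq in runs if seq == rarest]
    for _ in range(n_refine):         # concentrate samples near rare seeds
        x = rng.choice(seeds) + rng.uniform(-eps, eps)
        freq[simulate(min(1.0, max(0.0, x)))] += 1
    return freq

freq = explore_mode_sequences()
```

The refinement loop is the crude analogue of the paper's accelerated sampling: once a rare sequence is seen, nearby initial states are disproportionately likely to reproduce it.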
Autonomous Cybersecurity Testbed for Operational Technology Networks
ABSTRACT. In the ongoing era of the Fourth Industrial Revolution (4IR), in which manufacturing and industrial processes are being integrated with information technologies, including Artificial Intelligence (AI), advanced testbeds are a must to ensure an error-free, scalable, and gradual transition. The network-integrated Operational Technology (OT) used in manufacturing industries also requires advanced testbeds to test and evaluate cybersecurity agents deployed to mitigate adversarial attacks. In this paper, we describe a testbed designed for testing and evaluating attack-resilient network agents, also known as blue agents. Using open-source software stacks and network protocols, we develop an easy-to-deploy, OpenStack-based network emulator that closely emulates industrial use cases. We also demonstrate the utility of this testbed by initiating various attack types that lead to OT plant instability. The various stages of the experiments leading to this instability can be detected and recorded using the measurement tools available within the testbed.
From Toy to Target: Investigating Representation Transfer for Reinforcement Learning with Implications for Cyber-Physical Systems
ABSTRACT. As Cyber-Physical Systems (CPS) become more common and more complex, training reinforcement learning (RL) agents to perform well in these large-scale environments remains both challenging and computationally expensive. This paper proposes the Toy Transfer Method (TTM) as a potential approach for leveraging knowledge acquired in small-scale toy environments to expedite agent learning in larger environments. The key idea is that an RL agent trained in a well-structured toy environment may learn useful representations that can be transferred to a more complex target environment, expediting training and improving agent efficacy. The Toy Transfer Method is evaluated using OpenAI's Taxi environment as a case study, transferring knowledge from a 5x5 grid world to multiple 7x7 grid worlds with roughly twice the state space. The results demonstrate that the TTM-enhanced Deep Q-Network (DQN) agent consistently outperforms a baseline DQN agent trained from scratch, achieving faster convergence and higher average rewards. Furthermore, the TTM-enhanced agent often converges in cases where the baseline agent fails to converge to a successful policy. These results suggest that environment abstraction and transfer learning may be viable strategies for improving RL efficiency in CPS, especially when the toy and target environments are structurally similar.
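A minimal sketch of the transfer idea follows. The paper's exact architecture and state encoding are not specified here, so the fixed-length four-feature encoding, the layer sizes, and the choice of which layers to transfer are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_net(n_in, n_hidden, n_out):
    """Tiny two-layer Q-network; W1 is the representation layer."""
    return {'W1': rng.normal(0.0, 0.1, (n_hidden, n_in)),
            'W2': rng.normal(0.0, 0.1, (n_out, n_hidden))}

def transfer_representation(toy, target):
    """TTM-style transfer sketch (assumed mechanics): reuse the toy
    network's representation layer; keep the target's freshly
    initialized head, which is then fine-tuned in the target environment."""
    target['W1'] = toy['W1'].copy()
    return target

# Assumption: both grids share a fixed-length state encoding, e.g.
# normalized (taxi row, taxi col, passenger, destination), so W1 transfers.
toy = init_net(n_in=4, n_hidden=32, n_out=6)     # "trained" on the 5x5 grid
target = init_net(n_in=4, n_hidden=32, n_out=6)  # to be trained on 7x7
target = transfer_representation(toy, target)
```

The design choice this illustrates is that transfer only helps when the representation layer's input encoding is shared between toy and target, mirroring the paper's caveat about structural similarity.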
ABSTRACT. Formal arguments for CPS certification depend on modeling the CPS and formally verifying it. The correctness properties that we are interested in formally verifying are typically quite diverse (e.g., prove correct timing, prove that no message is lost, prove that assertions in source code are true). Consequently, it is typically not possible to state a single model of the system, express all correctness properties in a single language, and then perform the verification. Instead, it is typically necessary to create different verification models for different correctness properties, and then for each verification model invoke a formal verification tool that proves the correctness properties related to this verification model. When doing so, however, there may be dependencies between different formal verification methods/tools; the guarantee provided by one formal verification method/tool may be an assumption for another. Thus, there may be dependencies between analyses, and such dependencies may be circular. In this paper, we show that for such formal verification with circular dependencies, the outcome may depend on initial assumptions, hence illustrating the importance of starting in a good place.
ABSTRACT. Assurance of evolving large cyber-physical systems (CPS) is time-consuming, and usually a bottleneck for deploying them with confidence. Several factors contribute to this problem, including the lack of effective reuse of assurance results, the difficulty of integrating multiple analyses for multiple subsystems, and the lack of explicit consideration of the different levels of trust that different analyses provide. In this paper, we present an approach to assuring large CPS that aims to overcome these barriers.
Assurance Cost Reduction Through Architectural Design
ABSTRACT. An approach is proposed for designing a system to reduce the cost of building an assurance case. The key idea is to design an architecture to separate and minimize components that are responsible for establishing a critical system requirement. The approach is illustrated using an example involving a radiation therapy system, along with a discussion of possible research directions to enable a design methodology for assurance cost reduction.
Reverse Engineering the ESP32-C3 Wi-Fi Drivers for Static Worst-Case Analysis of Intermittently-Powered Systems
ABSTRACT. The Internet of Batteryless Things revolutionizes sustainable communication as it operates on harvested energy. This harvested energy depends on unpredictable environmental conditions; therefore, device operations, including those of the networking stack, must be resilient to power failures. Reactive intermittent computing provides an approach to this problem via notifications of impending power failures, implemented by monitoring the harvested energy buffered in a capacitor. However, to use this power-failure notification and guarantee forward progress, systems must break down tasks into atomic transactions that can be predictably finished before the energy runs out. Thus, static program-code analysis must determine the worst-case energy consumption (WCEC) of all transactions. In Wi-Fi-capable devices, drivers are often closed-source, which prevents the determination of WCEC bounds for transactions, since static analysis requires all code along with its semantics.
In this work, we integrate an energy-aware networking stack with reverse-engineered Wi-Fi drivers to enable full-stack WCEC analysis for physical transmission and reception of packets. Further, we extended a static worst-case analysis tool with a resource-consumption model of our Wi-Fi driver. Our evaluation on the RISC-V-based ESP32-C3 platform yields worst-case bounds from our static analysis for the transactions of the full communication stack, thereby showing that Wi-Fi-based reactive intermittent computing is feasible.
Reception Window Selection in EDT+ for Energy Efficient NB-IoT
ABSTRACT. Existing optimizations such as EDT and EDT+ improve NB-IoT energy efficiency significantly. These improvements support battery-free or energy-harvesting NB-IoT devices and offer a promising path toward more energy-efficient deployments. In this work we consider real-world challenges that have so far not been taken into account: specifically, when an EDT+ transmission fails, normal NB-IoT procedures are used as a fallback, which in turn reduces energy efficiency. We present and analyse methods for dimensioning and dynamic adaptation of the BS reception window based on node clock drift and transmission schedules. Dynamic adaptation of the reception window ensures that EDT+ energy efficiency gains can be maintained while optimising channel utilisation.
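As a rough illustration of drift-based window dimensioning (the concrete rule and numbers below are assumptions, not taken from the paper): the worst-case clock offset grows linearly with the time since the last synchronization, so the reception window must cover that drift in both directions plus a guard interval.

```python
def reception_window_s(elapsed_s, drift_ppm, guard_s=0.005):
    """Worst-case clock drift grows linearly with time since the last
    synchronization; the window must cover drift in both directions."""
    drift_s = elapsed_s * drift_ppm * 1e-6
    return 2 * drift_s + guard_s

# A node with a 20 ppm crystal transmitting once per hour:
w = reception_window_s(3600, 20)
# drift = 3600 * 20e-6 = 0.072 s each way -> window = 0.149 s
assert abs(w - 0.149) < 1e-9
```

A dynamic scheme would additionally shrink or grow the window as the base station observes each node's actual arrival offsets.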
ABSTRACT. This paper explores leveraging a special structure design of sensor enclosures to reduce PM2.5 sensing energy consumption. Fine-grained air quality monitoring is essential for public health and climate. However, existing air quality measurements are limited. For example, precise measurements from the regulatory station have very limited spatial coverage. The low-cost citizen science sensor network lacks coverage in socio-economically disadvantaged regions. Mobile platforms, such as vehicles or drones, provide more efficient sampling for air quality information. However, complicated vehicle power integration/retrofit and high cost make it inaccessible to the general public. Therefore, we propose SEEES, a Structure Enabled Energy-Efficient Sensing solution for PM2.5 sensing that leverages a self-adaptive elastic valve that replaces air pumps or fans to regulate the airflow input. As a proof-of-concept, SEEES has been put through preliminary simulation to verify the feasibility of battery-free PM2.5 sensing.
Modeling and Prototyping of IoT-based Long Range Self-powered Image Sensing System
ABSTRACT. This work introduces a long-range self-powered image sensing system for smart civil infrastructure, which addresses the challenges of high energy consumption and the need for frequent battery replacements. Existing camera solutions for long-range image transmission are energy intensive, while conventional self-powered systems operate in different contexts and do not fully meet the requirements of long-range image transmission. This paper examines how to enable battery-less cameras by presenting a model that quantifies energy consumption based on factors such as image resolution, transmission distance, and network selection. Using this model, a prototype design is optimized for energy-efficient long-range image transmission. The prototype achieves an energy consumption of 0.13 mJ per pixel per image transmission, making it 2.3x more efficient than current state-of-the-art solutions. By bridging theoretical modeling with practical deployment, this work enables scalable, real-time visual data collection for smart infrastructure monitoring in remote or hard-to-reach locations.
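Using the reported 0.13 mJ-per-pixel figure, a back-of-the-envelope cost for one transmission can be sketched as follows. Note this simplification uses only resolution, whereas the paper's model also accounts for transmission distance and network selection; the example resolution is an assumption.

```python
def transmission_energy_mj(width, height, energy_per_pixel_mj=0.13):
    """Total energy to transmit one image, using the per-pixel figure
    reported for the prototype (0.13 mJ per pixel per transmission)."""
    return width * height * energy_per_pixel_mj

# A hypothetical 160x120 image at the prototype's reported efficiency:
e = transmission_energy_mj(160, 120)
assert abs(e - 2496.0) < 1e-6  # about 2.5 J per image
```

Such a model lets a designer trade resolution against harvested-energy budget before committing to hardware.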
Unseen Risks in Container Adoption for SDVs: Navigating Security Challenges in Automotive Architectures
ABSTRACT. The automotive industry is increasingly adopting containerization as a means to streamline software deployment, especially with the rise of Software-Defined Vehicles (SDVs). Containers offer significant advantages in terms of flexibility, scalability, and efficient resource utilization. However, their widespread adoption also introduces a range of unseen security risks that can undermine the integrity of critical automotive systems.
The presentation will explore the hidden risks associated with using containers as isolation technologies in automotive environments, focusing on the challenges unique to SDVs.
These risks include overprivileged container configurations, unauthorized or malicious programs, and container escape. Additionally, it will outline strategies to address these challenges, such as container security practices, limiting privileges, ensuring image integrity, isolating networks, controlling resources, and monitoring activities/behaviors. By proactively addressing these risks, the automotive industry can better ensure the safety and resilience of containerized systems within the evolving landscape of SDVs.
Keynote 2: Making the SDV a reality – changing the paradigm of automotive software development
ABSTRACT. The massive amount of software going into new vehicles means traditional automotive software architecture and development methodologies must dramatically change. This is what the Scalable Open Architecture For the Embedded Edge (SOAFEE) is solving. SOAFEE allows developers to leverage new and innovative virtual prototyping capabilities that ease functional software validation (software-in-the-loop, SiL) and regulatory approval processes long before in-vehicle hardware is available. Combined with hardware and firmware standardization through initiatives such as SystemReady, SOAFEE is enabling a “shift-left” time saving of up to two years against traditional development methods. Join us in this short talk to learn more about the initiatives and how to get involved!
Solving the Challenge of Software Interoperability in Autonomous and ADAS Systems
ABSTRACT. The automotive industry is shifting rapidly from hardware-defined to software-defined vehicles (SDVs), which use software to control vehicle operations. By 2030, analysts predict most vehicles will be software-defined.
A wide array of complex software systems are integral to these vehicles, overseeing both basic engine processes and advanced driver assistance systems (ADAS). This diversity creates significant interoperability challenges, as each component often uses different protocols, standards, and data formats. Interoperability issues are costly and can significantly delay development timelines.
System interoperability requires the timely, accurate and reliable exchange of data between software components. Ineffective software system communication can result in suboptimal performance, or worse, safety-critical failures.
Data-centric communication architectures, particularly those using DDS, are quickly becoming the preferred method for achieving interoperability in vehicle software systems. Data Distribution Service (DDS) is an international open standard that enables data interoperability through a databus that abstracts away the physical connectivity details, allowing software components to communicate regardless of the underlying hardware. This approach decouples the data from the applications, providing flexibility, portability and scalability. It also reduces latency and increases data throughput through its zero-copy capability that eliminates the need to copy data between memory locations during data transmission.
This session will discuss the challenges of software system interoperability and introduce a standards-based data-oriented communication architecture that supports the ability for software components to be developed independently, updated incrementally, and can be sourced from a supplier ecosystem. It will include case studies that show how the approach can enhance the functionality, safety, and efficiency of ADAS systems.
This tutorial introduces Playground, an open-source "safe" operating system (OS) abstraction for buildings that enables the execution of untrusted, multi-tenant applications in modern buildings. Playground is integrated with the Brick representation of the underlying buildings and features flexible and extensible access control and resource isolation mechanisms. This tutorial will provide a detailed walkthrough of the system design of Playground and relevant background with multiple hands-on exercises.
The overall theme of this tutorial is on designing formal verification and control algorithms for learning-enabled cyber-physical systems (LE-CPSs) with practical safety guarantees by using conformal prediction.
This session, titled “Creating an accessible and active resource,” is where participants will learn to structure and present their data in ways that maximize accessibility, transparency, and usability with exemplar demonstrations from the Tennessee Department of Transportation's I-24 Mobility Technology Interstate Observation Network (MOTION) and the Leveraging Advanced Data to Deliver Multimodal Safety (LADDMS) initiative.
Demonstration of posting various types of resources with the CPS Virtual Organization
ABSTRACT. Platforms supporting open resources like the Cyber-Physical Systems Virtual Organization (CPS-VO.org) can accelerate CPS discovery by providing equal access to shared testbeds, datasets and codebases to the community. This ensures that researchers, regardless of resources, can contribute to and validate findings, fostering inclusivity and accelerating innovation across the community. Participants will learn to structure and present their data in ways that maximize accessibility, transparency, and usability.
CPS open data sharing practice: I-24 MOTION testbed demonstration
ABSTRACT. The Tennessee Department of Transportation's I-24 Mobility Technology Interstate Observation Network (MOTION) is a four-mile section of I-24 in the Nashville-Davidson County Metropolitan area with 294 ultra-high definition cameras. Those images are converted into a digital model of how every vehicle behaves with unparalleled detail. This is all done anonymously using Artificial Intelligence (AI) trajectory algorithms developed by Vanderbilt University. Vehicle trajectory data allows us to uncover new insights into how traffic flow influences individual vehicle behavior. This groundbreaking understanding of traffic is more important than ever due to the increasing automation capability of individual vehicles, which are beginning to influence traffic flow through their interactions with conventional vehicles. By unlocking a new understanding of how these vehicles influence traffic, vehicle and infrastructure design can be optimized to reduce traffic concerns in the future to improve safety, air quality, and fuel efficiency.
CPS open data sharing practice: LADDMS demonstration
ABSTRACT. The Leveraging Advanced Data to Deliver Multimodal Safety (LADDMS) initiative represents a new approach to enhancing the safety of our streets, sidewalks, and bike lanes through cutting-edge technology. Our focus is on protecting the most vulnerable road users—pedestrians and cyclists. By deploying LiDAR—light detection and ranging technology—we are able to spot and fix safety issues that may not be caught in traditional reports. This data-driven strategy allows us to pinpoint areas of concern and implement targeted solutions to enhance safety for all. The LADDMS team is excited to provide a data snippet and analytics tutorial for the project.
IoTCloak: Practical Integrity Checks of Machine Learning Inference Code and Models on Tiny IoT Devices
ABSTRACT. IoT devices comprising single-core ARM Cortex-M micro-controllers were earlier used in simple interfacing boards for different sensors. They have come a long way and currently process sensor data using powerful Machine Learning (ML) models instead of offloading the computation tasks to more powerful devices. The ML boom has successfully penetrated these minuscule edge devices, which have started controlling important aspects of our lives (health and fitness, accessibility, financial transactions, etc.) and environment (smart homes, offices, cars, etc.). However, with increased capability comes increased risk of being attacked with malicious intent, especially because of the numerous vulnerabilities of the ubiquitous Bluetooth Low Energy (BLE) links between these IoT devices and the smartphone apps that control them.
This paper presents a comprehensive study to analyze what kind of integrity checks, prevalent in embedded security literature using different hardware security features, are practical for these extremely low-compute devices. The goal is to minimally affect the performance of the ML applications while improving system security by ensuring the integrity of the ML inference code and model. Using recent advances in ARM hardware security features for IoT platforms like cryptographic accelerators, TrustZone, debug watchpoints, etc., we implement and compare across a host of integrity check techniques, using real IoT ML applications on actual hardware platforms.
ACLI-DPFL: Differentially Private Federated Learning with Adaptive Clipping and Local Iteration
ABSTRACT. Federated Learning (FL) is a distributed machine learning method, in which multiple devices or organizations perform training by sharing only models without sending their training data to a central server. To protect privacy, FL methods that apply differential privacy by adding noise have attracted significant attention. However, mitigating the accuracy degradation caused by the noise remains a critical challenge. This paper proposes a novel privacy-preserving FL method that integrates two approaches of adjusting the communication timing between the clients and the central server and regulating the noise. Experiments demonstrated that the proposed method achieves higher model accuracy.
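A minimal numpy sketch of the noise-regulation half, assuming standard DP-SGD-style clipping and a simple multiplicative adaptation rule; the paper's actual adaptive-clipping and communication-timing mechanisms may differ:

```python
import numpy as np

def clip_update(update, clip):
    """Scale the client update down so its L2 norm is at most `clip`."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip / norm)

def clip_and_noise(update, clip, noise_multiplier, rng):
    """DP step: clip, then add Gaussian noise scaled to the threshold."""
    return clip_update(update, clip) + rng.normal(
        0.0, noise_multiplier * clip, update.shape)

def adapt_clip(clip, update_norm, lr=0.2):
    """Illustrative adaptive rule: shrink the threshold when updates fit
    inside it, grow it when they are being clipped."""
    return clip * np.exp(-lr if update_norm <= clip else lr)

rng = np.random.default_rng(1)
u = rng.normal(0.0, 1.0, 100)
assert np.linalg.norm(clip_update(u, 1.0)) <= 1.0 + 1e-9
assert adapt_clip(1.0, np.linalg.norm(u)) > 1.0  # large norm -> grow clip
```

Because noise is scaled to the clipping threshold, adapting the threshold downward when updates allow directly reduces the injected noise and hence the accuracy loss.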
Safeguarding Media Integrity From The Growing Threat of Generative AI
ABSTRACT. This talk will highlight the cutting edge work being done to combat malicious uses of generative AI and deep fake technologies. The talk consists of three sections: (1) machine learning background, (2) diving into generative AI and deep fake technologies, and (3) solutions being developed to combat malicious deep fakes. The solutions that will be presented include deep fake detection, zero-trust approaches, and deep fake poisoning.
Invited talk: A Position on Network Management and Control for Dynamic Real-Time Systems
ABSTRACT. The current trend towards convergence of general-purpose and real-time networks has created several challenges and research opportunities. This long abstract summarizes the corresponding invited talk in which we will address such challenges and present a tour of a research line, namely the Flexible Time-Triggered paradigm, that focused on providing real-time communication to open, reconfigurable and adaptive systems. This research line ultimately led to several efforts based on Software-Defined Networking. Lastly, we raise the question of whether the current Time-Sensitive Networking family of standards provides a sufficient level of flexibility.
Invited talk: Deterministic AI Processing of Streaming Data on Edge and Embedded Devices
ABSTRACT. Edge and embedded systems are increasingly deploying AI for real-time streaming data processing in domains like healthcare, automotive, radio signal processing and space computing. However, ensuring predictable and reliable performance remains a major challenge, especially under tight timing constraints. This talk examines the timing-predictable, deterministic AI processing, its practical implications, and system design possibilities to address timing guarantees. We will highlight our industrial platform enabling such applications and discuss theoretical insights alongside practical engineering techniques. The talk concludes by outlining emerging research directions at the intersection of AI, real-time systems, and edge computing.
Efficient Inference of Parallel Partitioned Hybrid Vision Transformers
ABSTRACT. Recent advancements have explored parallel partitioning of Transformers and Convolutional Neural Network (CNN) based models across networks of edge devices to accelerate deep neural network (DNN) inference. However, partitioning strategies for hybrid Vision Transformers (models integrating convolutional and attention layers) remain underdeveloped, particularly in scenarios with low communication data rates. This work introduces a novel partitioning scheme tailored for hybrid Vision Transformers, addressing communication latency through efficient compressed communication and model size reduction. The proposed approach incorporates a trainable quantization and JPEG compression pipeline to minimize overhead. We evaluate our scheme on two state-of-the-art architectures, edgeViT and CoatNet. For a communication data rate of 10 MB/s and partitioning across 12 devices, we achieve up to a 1.74× speed-up and a 5.34× model size reduction for edgeViT-XXS. Similarly, on a customized CoatNet-0, our method achieves a 1.40× speed-up and a 2.66× reduction in model size, demonstrating the efficacy of the approach in real-world scenarios.
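As an illustration of why quantizing intermediate activations shrinks the communication payload between partitions, here is a fixed-scale int8 quantizer; the paper's quantizer is trainable and is paired with JPEG compression, so this is only a sketch with assumed tensor shapes:

```python
import numpy as np

def quantize_int8(x):
    """Uniform symmetric quantization of an activation tensor to int8.
    A fixed max-based scale stands in for the paper's trainable quantizer."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

x = np.random.default_rng(2).normal(0, 1, (8, 14, 14)).astype(np.float32)
q, s = quantize_int8(x)
assert q.dtype == np.int8 and q.nbytes * 4 == x.nbytes  # 4x smaller payload
assert float(np.max(np.abs(dequantize(q, s) - x))) <= float(s) / 2 + 1e-5
```

At low data rates this 4x payload reduction (before any JPEG step) directly cuts the per-partition communication latency.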
Selective Layer Acceleration with Data-parallel Architecture for DNN Inference Throughput Maximization
ABSTRACT. Deep neural networks (DNNs) are widely deployed in CPU-GPU heterogeneous computing systems, where GPU accelerators are often blindly preferred over CPUs. However, accelerating all DNN layers can result in imbalanced CPU and GPU utilizations with suboptimal performance. As a solution, this study presents a selective layer acceleration method with a data-parallel architecture to maximize the DNN inference throughput. Our data-parallel architecture utilizes all the CPU cores while accelerating carefully selected DNN layers. We find an optimal trade-off between the acceleration gains and blocking losses by various layer mappings. Also, analytical bounds for the best- and worst-case response times are derived and validated. Our stepwise optimization effectively reduces the search space, finding near-optimal solutions with minimized optimization times. The implementation of the found optimal mapping shows a 25% inference throughput improvement over the GPU-only mapping.
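The core trade-off can be illustrated with a greedy per-layer rule. This is a simplification with made-up latencies; the paper's stepwise optimization also models blocking losses and data-parallel CPU execution:

```python
def select_layers(cpu_ms, gpu_ms, offload_ms):
    """Greedy sketch: run a layer on the GPU only when the acceleration
    gain outweighs the data-transfer cost; otherwise keep it on the CPU."""
    return ["gpu" if g + t < c else "cpu"
            for c, g, t in zip(cpu_ms, gpu_ms, offload_ms)]

# Hypothetical per-layer latencies (ms): CPU compute, GPU compute, transfer.
mapping = select_layers(cpu_ms=[4.0, 1.0, 8.0],
                        gpu_ms=[1.0, 0.5, 2.0],
                        offload_ms=[1.0, 2.0, 1.5])
assert mapping == ["gpu", "cpu", "gpu"]
```

The middle layer stays on the CPU because its transfer overhead exceeds the GPU's compute advantage, which is exactly the situation a GPU-only mapping ignores.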
To Barrier or to Abstract: That Is the Question in Verifying Stochastic Systems — A Unifying Perspective for Comparative Analysis
ABSTRACT. Formal verification of stochastic systems is a crucial yet challenging requirement in safety-critical applications. For a long time, researchers have faced a fundamental choice between two main approaches: barrier functions and finite abstractions, each with its own advantages and limitations. However, the core differences between these methods have remained unclear.
In this talk, I present a unifying perspective that provides a deeper understanding of both approaches and enables the characterization of their convergence and optimality properties. This framework allows for a systematic examination of the strengths and trade-offs of each method. Building on this foundation, I then introduce piecewise constant stochastic barrier functions, which naturally lend themselves to simple and scalable computational frameworks. I share our recent progress in developing these techniques, along with extensions to control barrier functions, highlighting their promise for verification and synthesis in high-dimensional stochastic systems.
Stein-MAP: A Sequential Variational Inference Framework for Maximum A Posteriori Estimation
ABSTRACT. State estimation poses substantial challenges in robotics, often involving encounters with multimodality in real-world scenarios. To address these challenges, it is essential to calculate Maximum a posteriori (MAP) sequences from joint probability distributions of latent states and observations over time. However, it generally involves a trade-off between approximation errors and computational complexity. In this presentation, we discuss a new method for MAP sequence estimation called Stein-MAP, which effectively manages multimodality with fewer approximation errors while significantly reducing computational and memory burdens. Our key contribution lies in the introduction of a sequential variational inference framework designed to handle temporal dependencies among transition states within dynamical system models. The framework integrates Stein’s identity from probability theory and reproducing kernel Hilbert space (RKHS) theory, enabling computationally efficient MAP sequence estimation. As a MAP sequence estimator, Stein-MAP boasts a computational complexity of O(N), where N is the number of particles, in contrast to the O(N^2) complexity of the Viterbi algorithm. The proposed method is empirically validated through real-world experiments focused on range-only (wireless) localization. The results demonstrate a substantial enhancement in state estimation compared to existing methods. A remarkable feature of Stein-MAP is that it can attain improved state estimation with only 40 to 50 particles, as opposed to the 1000 particles that the particle filter or its variants require.
Computation-Aware Algorithmic Design for Cyber-Physical Systems under Uncertainty: Challenges and Opportunities
ABSTRACT. This talk aims to provide a new vision for cyber-physical systems that tightly integrate computation, communication, and control to address challenges posed by uncertainty and limited computing resources. We explore recent advances in modeling, analysis, and design of CPSs that operate efficiently under computational constraints, with applications to autonomous vehicles in ground, air, and maritime domains. Emphasizing interdisciplinary methods—spanning hardware-aware control, real-time systems, and hybrid models—we highlight how adaptive algorithms and self-aware platforms can co-evolve for enhanced performance and resilience. Challenges ahead and opportunities will also be discussed.
Tutorial Session: Safety Begins at System Level Design
ABSTRACT. System safety goals are becoming harder to achieve as system complexity increases. Software-defined systems introduce new categories of failures, such as software-inflicted hardware malfunctions, that can’t be analyzed using conventional safety analysis. These failures are often caused by design decisions made early in the design cycle. Therefore, safety design must become a part of the system specification and modeling phase. This tutorial introduces a software-defined-system-aware design methodology that enables tackling safety aspects at the system level.
ABSTRACT. The rapid advancements in artificial intelligence are transforming embedded sensing systems, bridging the physical and digital worlds in unprecedented ways. In the domains of health, wellness, and everyday living, embedded AI empowers environments and devices to proactively support both physical and mental well-being, reflecting the increasing demand for systems that are both adaptive and sensitive to human needs. This talk highlights projects from the Columbia Intelligent and Connected Systems Lab (ICSL) that exemplify this vision.
In healthcare, we present a low-cost, vision-based AIoT system for fever screening, designed for high accuracy and affordability. In the wearable space, we introduce a glasses-based platform for biosignal acquisition and emotion recognition, as well as an AR-assisted intelligent stethoscope for intuitive self-health monitoring. For personalized fitness, we showcase a smartphone-based system that estimates key running metrics such as cadence and ground contact time in real time, requiring no additional wearable devices. In intelligent environments, we present a reconfigurable drone platform capable of on-demand task execution through natural language interfaces powered by large language and vision models. In the mental health domain, we introduce CaiTI, a smart home AI therapist that continuously monitors daily routines and delivers personalized psychotherapeutic support, including motivational interviewing and cognitive behavioral therapy.
These projects illustrate how embedded intelligence research at ICSL is shaping the future of health and wellness, redefining the relationship between humans, machines, and their environments.
Invited Talk: LLMs: Your AI Co-Pilot for Aviation Data
ABSTRACT. In the rapidly evolving aviation landscape, the complexity and sheer volume of data pose unprecedented challenges and opportunities. Acubed by Airbus is pioneering an advanced AI Co-Pilot designed specifically to navigate this data-driven era of aviation. Their AI solution seamlessly integrates with existing flight and operational systems, leveraging cutting-edge machine learning and analytics to enhance decision-making, improve operational efficiency, and elevate safety standards. By providing real-time insights, predictive analytics, and intelligent recommendations, the AI Co-Pilot empowers aviation professionals—from pilots and dispatchers to maintenance crews—to harness aviation data proactively and confidently. Embracing AI-driven innovation, Airbus continues to shape smarter skies and redefine the future of flight. Saurabh will provide insights about these AI Co-Pilot technologies for Aviation at Acubed.
ABSTRACT. This work presents a model-integrated approach using WebGME for visual design of launch files within the Robotic Operating System (ROS). A ROS launch file defines nodes to run and the appropriate runtime configuration, allowing quick and easy startup of a complex ROS network. These launch files aid in repeatable management and configuration of multiple nodes in a robotic system. However, manually creating and modifying these XML-based launch files can be complex and error-prone. The contribution of this paper is describing a tool that allows users to raise the level of abstraction when interacting with these launch files. It supports direct construction of launch files by dragging and dropping elements, automatically generating the output XML representation. It also supports importing existing launch files, visualizing node connections, and validating configurations to prevent errors such as duplicate node names and incorrect argument dependencies. Additional plugins facilitate library updates, communication mapping, and automated launch file export. By integrating model validation, connection visualization, and automated code generation, this approach enhances usability and reduces errors in ROS system configuration.
Enabling Analysis and Visualization of Transportation Big Data
ABSTRACT. Transportation studies generate massive amounts of data that are difficult to store, process, query and visualize quickly and easily. Overcoming these challenges is an essential aspect of making the collected data useful both to the original study and to other research that could build on the results. We explore the impact of database implementation, specifically IoTDB, on these aspects of data management for existing transportation datasets.
Panel: Challenges and Opportunities of Automated Formal Assurance
ABSTRACT. This panel will discuss the challenges and opportunities of formal assurance for certification. The panelists will present views from the different perspectives of the commercial sector and academia.
Energy-Constrained Optimization for Wildfire Detection Using RGB Images
ABSTRACT. Wildfires are an escalating environmental concern, closely linked to power grid infrastructure in two significant ways. High-voltage power lines can inadvertently spark wildfires when they contact vegetation, while wildfires originating elsewhere can damage the power grid, causing severe disruptions. This paper proposes a self-powered cyber-physical system framework with sensing, processing, and communication capabilities to enable early wildfire detection. The proposed framework first analyzes the probability of the presence of a wildfire using lightweight smoke detection models that can be deployed on embedded processors at the edge. Then, it identifies the Pareto-optimal configurations that co-optimize the wildfire detection probability and expected time to detect a wildfire under energy constraints. Experimental evaluations on Jetson Orin Nano and STM Nucleo boards show that the Pareto-optimal solutions achieve wildfire detection within 5–15 minutes while consuming 1.2–3.5× lower energy than transmitting images to the cloud.
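The Pareto step can be sketched as plain dominance filtering over candidate configurations; the field names and numbers below are illustrative, not taken from the paper:

```python
def pareto_front(configs):
    """Keep configurations that no other configuration dominates in
    (detection probability up, time-to-detect down, energy down)."""
    def dominates(a, b):
        return (a["p"] >= b["p"] and a["t"] <= b["t"] and a["e"] <= b["e"]
                and a != b)
    return [c for c in configs
            if not any(dominates(o, c) for o in configs)]

# Hypothetical configurations: detection prob, minutes to detect, energy (J).
configs = [
    {"p": 0.90, "t": 5.0,  "e": 3.0},
    {"p": 0.95, "t": 15.0, "e": 1.2},
    {"p": 0.80, "t": 10.0, "e": 3.5},  # dominated by the first config
]
front = pareto_front(configs)
assert len(front) == 2 and configs[2] not in front
```

The surviving configurations expose the designer's real choice: faster detection at higher energy versus slower detection within a tighter energy budget.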
ABSTRACT. Energy-harvesting sensors utilize local, ambient energy resources to operate and thus eliminate the need for batteries. A key challenge for such systems is avoiding power failures during application execution. Energy-aware runtimes avoid such failures by reasoning about the task's energy consumption and the current energy available to the system. However, the energy consumption estimates profiled by prior work fail to account for incoming energy, producing incorrect energy consumption estimates which could lead to power failures and missed deadlines. This work analyzes the impact of incoming energy on the profiled energy consumption and argues that future energy-aware runtimes must be mindful of harvested energy when profiling a task's energy consumption.
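The paper's observation can be stated as a small correction term: energy harvested during a profiling run masks part of the true consumption, so a capacitor-drop measurement under-reports it. A sketch with assumed units and numbers:

```python
def true_task_energy_mj(measured_drop_mj, harvest_power_mw, duration_s):
    """The capacitor's energy drop under-reports consumption when the
    harvester charges it during the run; add the harvested energy back."""
    harvested_mj = harvest_power_mw * duration_s  # mW * s = mJ
    return measured_drop_mj + harvested_mj

# A 100 ms task that appears to cost 4 mJ while harvesting at 10 mW:
e = true_task_energy_mj(4.0, 10.0, 0.1)
assert e == 5.0
```

A runtime using the uncorrected 4 mJ figure would admit the task with too little stored energy and risk exactly the power failures the profiling was meant to prevent.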
A Sensing System is More than its Electronics: Towards addressing environmental challenges on outdoor data collection platforms
ABSTRACT. In-situ environmental sensing has driven significant advancements in energy-efficient, accurate, and modular sensing platforms. However, less attention has been given to improving their resilience to harsh outdoor conditions. Electrical components in these platforms are sensitive to heat, moisture, and physical stress, making enclosure design a critical but often overlooked factor in long-term deployment. In this paper, we present a scientific approach to developing a robust, cost-effective enclosure for an open-source outdoor sensing platform. We explore iterative design processes using widely available materials—PLA and PVC—and evaluate their durability, waterproofing, and ease of assembly in both lab and field conditions. By open-sourcing our designs, we aim to highlight the need for greater focus on enclosure robustness as a key challenge in environmental sensing research.
Demo: Greenhouse Sensing using Chirp-Based VLC with a Solar Panel for Energy Harvesting and Data Downlink
ABSTRACT. This demo presents an energy-harvesting IoT system for greenhouses, with a particular focus on showcasing the communication downlink. The downlink is a chirp-based Visible Light Communication (VLC) system that is deployed in, and specifically designed for, the noisy environment of a greenhouse. Our chirp-based VLC enhances noise immunity and reduces error rates, making it well suited for robust VLC communication. We demonstrate that our VLC system operates reliably in high-noise environments where abundant sunlight and various lighting sources are present. Additionally, we show that the communication module can operate using energy-harvesting modules, enabling a sustainable and self-powered system.
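The noise immunity of chirp signaling comes from correlating the received waveform against known chirp templates. A minimal correlation receiver is sketched below; the symbol mapping (up-chirp for "0", down-chirp for "1"), frequencies, and noise level are assumptions for illustration and do not describe the demo's actual modem:

```python
# Illustrative chirp demodulation: a noisy down-chirp is decoded by checking
# which chirp template it correlates with most strongly.
import math, random

def linear_chirp(f0, f1, duration, fs):
    """Samples of a linear chirp sweeping f0 -> f1 Hz over `duration` seconds."""
    n = int(duration * fs)
    k = (f1 - f0) / duration  # sweep rate in Hz/s
    return [math.sin(2 * math.pi * (f0 * t + 0.5 * k * t * t))
            for t in (i / fs for i in range(n))]

def correlate(a, b):
    return sum(x * y for x, y in zip(a, b))

fs = 8000
sym0 = linear_chirp(500, 1500, 0.01, fs)   # up-chirp encodes bit 0
sym1 = linear_chirp(1500, 500, 0.01, fs)   # down-chirp encodes bit 1

random.seed(0)
rx = [s + random.gauss(0, 0.8) for s in sym1]  # received down-chirp + noise
decoded = 1 if correlate(rx, sym1) > correlate(rx, sym0) else 0
```

Even with noise comparable to the signal amplitude, the matched template accumulates coherently while the mismatched one does not, so the bit decodes correctly.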
Poster: Towards Microbial Fuel Cell Powered Backscatter Tags for Low-Cost In-Ground Soil Moisture Sensing
ABSTRACT. This poster presents Water Radar (WaDAR), a soil moisture sensing system that aims to harness energy from soil microbial fuel cells (SMFCs). Using SMFCs, WaDAR will offer a low-cost, energy-efficient alternative to traditional soil moisture sensors. Lab testing results demonstrate an average soil moisture measurement error of just 2.7%, comparable to more expensive capacitive commercial sensors. This combination of affordability, accuracy and energy efficiency suggests that SMFC energy harvesting could help facilitate the widespread adoption of soil moisture sensors in agriculture, leading to significant water conservation and promoting sustainable farming practices.
Get ready for feature-oriented vehicle SW development and testing
ABSTRACT. The automotive industry is rapidly transitioning towards Software-Defined Vehicles (SDVs), introducing new challenges in software validation, integration, and continuous deployment. Traditional validation methods struggle to keep pace with the agility required for modern vehicle architectures. Integration—not code writing—is the real bottleneck. Many software issues stem from late-stage system-level integration, where testing happens in silos, leading to costly fixes when problems surface too late. Continuous Integration (CI) alone is not enough—automotive teams need Continuous Validation (CV). As vehicles become increasingly software-centric, the complexity of validation grows exponentially:
• Delayed Issue Detection: Engineers often wait weeks or months before discovering integration issues, making fixes expensive and time-consuming.
• Fragmented Testing Environments: Teams rely on physical hardware, limiting accessibility and slowing down validation workflows.
• Lack of Early-Stage Validation: Without a shift-left approach, component validation only happens after full integration, leading to late-stage failures.
• Scalability Issues: The need for massive validation pipelines demands scalable solutions that integrate seamlessly into modern CI/CD workflows.
Deterministic Scheduling for Autonomous Systems: Challenges, and Our Solution - NVIDIA System Task Manager
ABSTRACT. Large-scale deployment of autonomous vehicles (AVs) requires the reconciliation of two competing goals: performance and safety. Perception algorithms are notoriously compute intensive, requiring programmers to exploit parallelism between different hardware engines like CPUs, GPUs, and DLAs. AV systems also need to be deterministic so that their worst-case behavior can be validated for satisfying timing guarantees. On a complex SoC like the Tegra, the behavior exhibited by any software component depends on the state of the SoC as exhibited by two factors: the state of other software components (locks, memory contention, etc.), and the state of the hardware (caches, physical memory, etc.). Dynamic scheduling and preemption of tasks by engine-specific schedulers lead to an explosion in the size of the state space that must be validated and certified for timing determinism. Furthermore, many existing commercial real-time schedulers are CPU-centric and do not provide ordering and timing guarantees across heterogeneous hardware platforms. To solve these problems, we present a static, centrally-monitored, OS-agnostic, non-preemptive scheduler that manages work across hardware engines on Tegra SoCs.
Keynote 3: Addressing the SDV challenges through Cloud native architectures
ABSTRACT. The increased software content and complexity, and the push to shift the entire SOP (start-of-production) cycle left, require adopting modern practices that have yielded great returns in other industries, be it telecom, aerospace, or the enterprise world. The need to modernize and the ability to create monetization opportunities post-deployment are critical to addressing the profitability challenges the industry is facing. China has shown that by re-using platforms, OEMs can bring the SOP cycle down from 36 months to 12 months. Let’s discuss how a cloud-native approach and architecture can help auto OEMs achieve this shift left and turn the current market challenges into opportunities.
This tutorial will provide you with practical tools and frameworks to integrate sociological insights into AI design. You will gain knowledge and skills to develop AI systems that actively support human values, such as fairness, transparency, and inclusivity, while promoting equitable outcomes across various sectors, including urban infrastructure, healthcare, and energy management.
Applied Tutorial – Systems That Make Sense: Aligning CPS/IoT Design with Human Priorities
ABSTRACT. This interactive tutorial uses role-play and collaborative scenario-building to explore how social values and power dynamics shape CPS/IoT design. Rather than focusing on technical tools or algorithms, participants will engage with real-world dilemmas—like biased access to smart infrastructure or conflicting stakeholder priorities in automated systems. Drawing on insights from sociology and ethics, the session invites engineers, designers, and social scientists to step into the shoes of users, policymakers, and marginalized communities. The goal is to surface hidden assumptions, anticipate unintended consequences, and build a shared language for embedding fairness into CPS/IoT systems from the start.
The Ph.D. Forum provides a unique platform for Ph.D. students in their final years to present their research, either through an oral presentation or a poster presentation. Participants will benefit from feedback provided by senior researchers in the field, gaining insights on their presentation skills, Ph.D. research, and even career advice.
Autonomy requires careful reasoning about the surrounding environment in order to safely interact with the world. This tutorial introduces Scenic, an open-source, probabilistic programming language for simulator-agnostic world modeling. Scenic enables the formal specification and generation of reactive, multi-agent scenarios for use in training, testing, and debugging autonomous systems. Widely adopted in both academia and industry, Scenic supports applications ranging from synthetic data generation to sim-to-real transfer. This session will demonstrate how Scenic can assist safe AI-based autonomy development across domains such as robotics, computer vision, and reinforcement learning.
Opening Remarks and Motivation for the Tutorial: On the challenges of achieving safe AI-based autonomy, and of generating and curating data to support the design life cycles of (semi-)autonomous systems.
A Human-Centered Perspective on Optimizing Ambient Assisted Living Sensing Systems for Aging in Place. Andrea Green, Andrea Cuadra, Sarah Billington and Yiwen Dong.
Towards Secure User Interaction in WebXR. Chandrika Mukherjee, Arjun Arunasalam, Habiba Farrukh, Reham Mohamed Aburas and Z. Berkay Celik.
Towards a Lightweight Platform for Human-Robot Interaction in Federated Edge and IoT Environments. Simon Zhang, Zhengxiong Li, Xin Qin and Hailu Xu.
Vision: Preventing Tech-related Physiological Health Issues using Commodity Wearables. Bhawana Chhaglani, Sarmistha Sarna Gomasta and Prashant Shenoy.
mmHvital: A Study on Head-Mounted mmWave Radar for Vital Sign Monitoring. Yang Liu, Fahim Kawsar and Alessandro Montanari.
ExplainGen: a Human-Centered LLM Assistant for Combating Misinformation. Zhicheng Yang, Xinle Jia and Xiaopeng Jiang.
Towards Human-Centric Smart Homes: Modeling Sensor-Actuator Interactions with Deep Learning. Md Abdur Rahman Fahad and Razib Iqbal.
Are We There Yet? A Measurement Study of Efficiency for LLM Applications on Mobile Devices
ABSTRACT. Recent advancements in large language models (LLMs) have prompted interest in deploying these models on mobile devices to enable new applications without relying on cloud connectivity. However, the efficiency constraints of deploying LLMs on resource-limited devices present significant challenges. In this paper, we conduct a comprehensive measurement study to evaluate the efficiency tradeoffs between mobile-based, edge-based, and cloud-based deployments for LLM applications. We implement AutoLife-Lite, a simplified LLM-based application that analyzes smartphone sensor data to infer user location and activity contexts. Our experiments reveal that: (1) only small LLMs (<4B parameters) can run successfully on powerful mobile devices, though they exhibit quality limitations compared to larger models; (2) model compression is effective in lowering hardware requirements, but may lead to significant performance degradation; (3) the latency to run LLMs on mobile devices with meaningful output is significant (>30 seconds), while cloud services demonstrate better time efficiency (<10 seconds) with a stable internet connection; and (4) edge deployments offer intermediate tradeoffs between latency and model capabilities, with different results in CPU-based and GPU-based settings. These findings provide valuable insights for system designers on the current limitations and future directions for on-device LLM applications.
MixForecast: Mixer-Enhanced Foundation Model for Load Forecasting
ABSTRACT. Short-term Load Forecasting (STLF) for buildings is essential for optimizing energy management and supporting renewable energy integration, but traditional models often struggle with generalization across diverse building profiles. While recent Time Series Foundation Models (TSFMs) show promise, they remain underexplored for STLF. In this paper, we introduce MixForecast, a novel TSFM for universal energy forecasting. The MixForecast architecture is based on a TSMixer block ensembling technique that provides accurate and robust load forecasting for smart meter time series. In particular, MixForecast is designed with fewer parameters (approx. 0.19M) than existing TSFM models, making it efficient and suitable for deployment in individual buildings. We trained MixForecast using hourly energy data from 63K buildings and evaluated its performance on a test set of 1,000 commercial and residential buildings around the world. We compared MixForecast against pre-trained TSFMs such as Tiny Time Mixers, Lag-Llama, Moirai, and Chronos, in both zero-shot and fine-tuned settings, as well as against traditional models. The model demonstrates superior accuracy and adaptability, excelling across various building profiles. Its lightweight design, combined with strong forecasting performance, establishes MixForecast as a versatile and efficient model for STLF, advancing energy management and promoting sustainable energy practices.
Invited talk: Optimizing IoT Node Design for Edge Computing Applications
ABSTRACT. The demand for deploying image-based AI models on IoT nodes continues to rise, despite the significant computational, energy, and communication constraints inherent to these devices. This article presents a comprehensive overview of state-of-the-art approaches aimed at addressing these limitations, with a particular focus on intelligence partitioning as a heuristic-based partitioning strategy. AI model optimization techniques, including quantization and pruning, are examined in conjunction with node-server partitioning strategies, considering both computational and communication workload perspectives. By exploring these techniques, this study aims to identify critical research gaps and highlight key challenges that must be addressed to enhance the efficiency and scalability of AI-based IoT deployments.
SIM-LDM: Local Dynamic Map Generation Framework using Autonomous Driving Simulator
ABSTRACT. The widespread adoption of autonomous driving systems underscores their potential to solve societal challenges. However, ensuring safety calls for data sharing through infrastructure sensors, including traffic signals, by employing the Local Dynamic Map (LDM). Traditional dependence on physical sensors imposes access restrictions, limiting research flexibility. This paper proposes SIM-LDM, an automated LDM generation approach using AWSIM, an autonomous driving simulator. By replacing real sensor data with simulator-derived data, this approach facilitates the incorporation of dynamic information into a Data Stream Management System (DSMS) for real-time processing. The DSMS, developed on the Confluent Platform, allows data retrieval through HTTP or HTTPS and SQL-based queries, standardizing JSON outputs for broad system compatibility. Performance evaluations reveal that data from up to 120 vehicles can be processed within 50 ms intervals on a single PC. Integration of the proposed framework with Autoware, an autonomous driving software stack, confirms its effectiveness, demonstrating a simulation-driven option for LDM deployment in autonomous driving research.
Towards a Digital Twin Framework for Secure and Efficient Cyber-Physical Transportation Systems
ABSTRACT. Cyber-Physical Systems (CPS) integrating autonomous vehicles (AVs) for mixed transport of people and goods necessitate robust methodologies to ensure secure, high-performance operations. This paper proposes a Digital Twin (DT) framework for a centralized transportation network, designed to optimize performance and enhance security using feedback-driven mechanisms. Real-time data from AVs is fed into the DT to enable Model Predictive Control (MPC), optimizing efficiency while ensuring compliance with system constraints. Behavioral profiling and pattern recognition are employed to detect misuse and anomalous vehicle behavior, bolstering system security through intrusion detection. The feedback loops compare DT predictions against real-world data to preemptively mitigate threats and operational inefficiencies, offering a flexible approach to securing and improving CPS performance.
Digital Twin and Digital Thread for System Security and Performance applied to a Smart Grid Use Case
ABSTRACT. System security requires a suitable basis in both development and operation. During development, performance tradeoffs ultimately lead to security infrastructures of varying quality that are often imperfect. Hence, during operation, runtime monitoring and anomaly detection continuously check for security issues. In this paper we show how to link development and operation. We show how information and data, from development through operation, can be aggregated in a digital twin and/or digital thread, which then serves as a basis for runtime monitoring and anomaly detection. We thereby address in particular the trade-off between system security and performance in a concrete use case of a smart grid system.
A Semi-automated Mesh Editing Pipeline for HRTF Simulation
ABSTRACT. Head-Related Transfer Functions (HRTFs) are essential for realistic binaural audio rendering and depend on individual ear and head morphology. Although direct measurement provides accurate HRTFs, it is resource-intensive, making simulation an attractive alternative. However, 3D scanner outputs often require extensive editing to meet the high-quality mesh requirements for simulation. This work presents a semi-automated pipeline that streamlines mesh preparation by non-rigidly registering a template to 3D scans and producing watertight, manifold meshes suitable for acoustic simulation. The most accurate pipeline achieved a maximum reconstruction error of 0.8 mm globally and landmark errors of 3 mm and 2.6 mm for the left and right ears, respectively.
MoCoMR: A Collaborative MR Simulator with Individual Behavior Modeling
ABSTRACT. Studying collaborative behavior in Mixed Reality (MR) often requires extensive, challenging data collection. This paper introduces MoCoMR, a novel simulator designed to address this by generating synthetic yet realistic collaborative MR data. MoCoMR captures individual behavioral modalities such as speaking, gaze, and locomotion during a collaborative image-sorting task with 48 participants to identify distinct behavioral patterns. MoCoMR simulates individual actions and interactions within a virtual space, enabling researchers to investigate the impact of individual behaviors on group dynamics and task performance. Through comprehensive evaluation, we demonstrate that MoCoMR generates simulated behaviors with 70–90% similarity to real-world data across key modalities. This simulator facilitates the development of more effective and human-centered MR applications by providing insights into user behavior and interaction patterns. The simulator's API allows for flexible configuration and data analysis, enabling researchers to explore various scenarios and generate valuable insights for optimizing collaborative MR experiences.
DUal-NET: A Transformer-Based U-Net Model for Denoising Bone Conduction Speech
ABSTRACT. We propose “DUal-NET”, a novel transformer-based model for enhancing speech capture through bone-conduction headsets in human-centered sensing systems. As wearable bone-conduction devices become increasingly important for continuous health monitoring and ambient computing, they face a unique challenge: bone-conduction microphones can receive significant interference from speakers playing audio to the user. This occurs because the headset is in direct contact with the skull and induces vibrations similar to human speech, much like a user speaking, degrading speech recognition accuracy and communication quality. Existing state-of-the-art speech enhancement and sound source separation methods are ‘blind’ and assume that the interference noise is not available, due to the inherent difficulty of observing clean correlated noise. By contrast, headsets have full knowledge of the sounds they play through their speakers, and DUal-NET takes advantage of this raw signal in its denoising process. We demonstrate that DUal-NET significantly improves standard speech quality metrics over existing state-of-the-art methods in realistic scenarios (PESQ: 135%, STOI: 50%, LSD: 66%), enabling more accurate speech sensing for human-centered applications including health monitoring, personalized assistants, and augmented communication.
Quantitative Assessment of mmWave Point Cloud for Target Detection
ABSTRACT. This work tackles the challenge of employing quantifiable metrics to assess the quality of point clouds generated by various distinct pipelines using the TI IWR6843AOP mmWave FMCW radar. This study focuses on developing quantifiable metrics to evaluate point cloud quality for human targets. The metrics are composed of two parts: coverage and consistency. Coverage tests employ the signed distance function (SDF) to quantify errors between the ground-truth human mesh and the point cloud. Additionally, the coverage test evaluates the percentage of points reflected from each body segment. The second focus is on consistency. Point cloud consistency across consecutive frames is assessed by analyzing the standard deviation of mean and maximum intensity values and calculating Hausdorff distances to evaluate the stability of the point cloud distribution.
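The frame-to-frame Hausdorff comparison mentioned above can be sketched directly; the two toy "frames" below are made-up 3D points, and a real pipeline would use a library routine (e.g. scipy.spatial.distance.directed_hausdorff) over much larger clouds:

```python
# Illustrative sketch: symmetric Hausdorff distance between two small
# point clouds, as a consistency measure across consecutive radar frames.
import math

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point clouds (lists of tuples)."""
    def directed(P, Q):
        return max(min(math.dist(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))

frame_t  = [(0.0, 0.0, 1.0), (0.1, 0.0, 1.2), (0.0, 0.2, 1.1)]
frame_t1 = [(0.05, 0.0, 1.0), (0.1, 0.05, 1.2), (0.0, 0.2, 1.15)]
d = hausdorff(frame_t, frame_t1)  # small value => stable cloud distribution
```

A small distance between consecutive frames indicates a stable point cloud distribution; spikes flag inconsistent detections.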
PluralLLM: Pluralistic Alignment in LLMs via Federated Learning
ABSTRACT. Ensuring Large Language Models (LLMs) align with diverse human preferences while preserving privacy and fairness remains a challenge. Existing methods, such as Reinforcement Learning from Human Feedback (RLHF), rely on centralized data collection, making them computationally expensive and privacy-invasive. We introduce PluralLLM, a federated learning-based approach that enables multiple user groups to collaboratively train a transformer-based preference predictor without sharing sensitive data, which can also serve as a reward model for aligning LLMs. Our method leverages Federated Averaging (FedAvg) to aggregate preference updates efficiently, achieving 46% faster convergence, a 4% improvement in alignment scores, and nearly the same group fairness measure as in centralized training. Evaluated on a Q/A preference alignment task, PluralLLM demonstrates that federated preference learning offers a scalable and privacy-preserving alternative for aligning LLMs with diverse human values.
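The Federated Averaging step that PluralLLM builds on is simple to state: the server computes a sample-weighted average of client parameter updates. The sketch below uses plain lists as stand-ins for the preference predictor's weights; the client data and helper name are illustrative:

```python
# Minimal FedAvg sketch: aggregate client parameter vectors, weighting each
# client by its number of local samples. Toy values for illustration only.

def fedavg(client_updates):
    """client_updates: list of (params, num_samples) pairs; returns the
    sample-weighted average parameter vector."""
    total = sum(n for _, n in client_updates)
    dim = len(client_updates[0][0])
    return [sum(p[i] * n for p, n in client_updates) / total
            for i in range(dim)]

# Two hypothetical user groups with different local preference-model weights.
global_params = fedavg([([1.0, 2.0], 30), ([3.0, 4.0], 10)])
```

Because only parameter updates leave each group, raw preference data never needs to be centralized, which is the privacy argument the abstract makes.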
Good, but Not That Good: An Honestly-Noisy Visualization of Low-Fidelity Data Streams
ABSTRACT. Long-term, ubiquitous sensing on wireless, power-limited devices requires aggressive data-reduction at the source to meet stringent networking and power constraints. However, the naive approach of error-triggered data updates obfuscates data and system information that is useful for downstream tasks. For example, understanding error and stability of outdoor temperature data is useful for those who are deciding what to wear in the morning. We constructed a hypothetical student-led deployment of low-fidelity temperature sensors across a university campus; designed a "noisy sensor" conceptual model to visually communicate the error and stability of the data; and compared our design against the naive baseline of displaying raw data values and a classic data visualization alternative of including a historical average. We then conducted an online survey with 150 participants and found that both the baseline and the classic alternative caused users to over-estimate accuracy of the data and stability of the underlying real-world temperature. Our noisy sensor design corrected these errors, but caused users to report false trends in the data. This study identifies the need for continued work in developing task-based visualizations for low-fidelity data streams and in designing sensing systems that support them.
ABSTRACT. We are far from the full adoption of smart home technologies, let alone realizing truly intelligent homes. There are many barriers preventing homeowners from fully adopting smart home technologies, chief among them being their low perceived usefulness (PU) and perceived ease of use (PEoU). Current smart home ecosystems and frameworks only support simple triggers and require homeowners to specify step-by-step logic or purchase new devices to automate new tasks, which severely limits the PU and PEoU of these technologies. We propose DomAIn, a smart home platform that automatically generates, programs, and deploys logic to satisfy a wide range of home-based tasks based on available devices in the home environment, without requiring users to manually “program” any logic. We demonstrate through real deployments and user studies that by incorporating DomAIn, a platform that reduces the need for users to program logic, we can significantly improve a variety of factors that affect PU and PEoU, such as customizability and complexity, by up to 38%, as well as satisfy a diverse range of tasks in home scenarios.
Urban Sensing for Human-Centered Systems: A Modular Edge Framework for Real-Time Interaction
ABSTRACT. Urban environments pose significant challenges to pedestrian safety and mobility. This paper introduces a novel modular sensing framework for developing real-time, multimodal streetscape applications in smart cities. Prior urban sensing systems predominantly rely either on fixed data modalities or centralized data processing, resulting in limited flexibility, high latency, and superficial privacy protections. In contrast, our framework integrates diverse sensing modalities, including cameras, mobile IMU sensors, and wearables, into a unified ecosystem leveraging edge-driven distributed analytics. The proposed modular architecture, supported by standardized APIs and message-driven communication, enables hyper-local sensing and scalable development of responsive pedestrian applications. A concrete application demonstrating multimodal pedestrian tracking is developed and evaluated. It is based on the cross-modal inference module, which fuses visual and mobile IMU sensor data to associate detected entities in the camera domain with their corresponding mobile device. We evaluate our framework’s performance in various urban sensing scenarios, demonstrating an online association accuracy of 75% with a latency of approximately 39 milliseconds. Our results demonstrate significant potential for broader pedestrian safety and mobility scenarios in smart cities.
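One simple way to perform the kind of camera-to-device association described above is to match each camera track to the IMU stream whose motion signature correlates best. The sketch below is a hypothetical simplification: the track names, speed sequences, and greedy per-track matching are illustrative assumptions, not the paper's actual inference module:

```python
# Illustrative cross-modal association: pair camera tracks with phone IMU
# streams by Pearson correlation of their speed signatures. Toy data only.

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

camera_tracks = {"track1": [1.0, 1.2, 1.1, 0.4, 0.2],   # slows down
                 "track2": [0.5, 0.6, 1.4, 1.5, 1.6]}   # speeds up
imu_streams   = {"phoneA": [0.9, 1.1, 1.0, 0.5, 0.3],
                 "phoneB": [0.4, 0.7, 1.3, 1.6, 1.5]}

assoc = {t: max(imu_streams, key=lambda d: corr(v, imu_streams[d]))
         for t, v in camera_tracks.items()}
```

Here the decelerating camera track matches the decelerating phone and vice versa; a production system would add temporal alignment and one-to-one assignment constraints.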
Human-Centric Wearable Platform for Work Safety Monitoring: Navigating Between Protection and Privacy
ABSTRACT. The adoption of wearable sensor platforms is expanding beyond personal health tracking towards workplace environments. While such systems are commonly used in hazardous work scenarios, continuous monitoring of environmental factors in everyday workplaces remains limited. This paper introduces a human-centered wearable sensor node platform to monitor workplace conditions such as air quality and particulate matter concentration. We present the architecture and real-world evaluation of a platform tailored for non-life-threatening working conditions, with a strong focus on the human worker. We investigate the platform's impact, user perceptions, and potential concerns through a week-long field study and qualitative user interviews. Our findings discuss the platform's technical aspects and highlight challenges related to data management, privacy, and user well-being. This ensures that technological advancements align with worker needs and expectations.
Invited Talk: Predictive Runtime Verification of Learning-Enabled Systems with Conformal Prediction
ABSTRACT. Accelerated by rapid advances in machine learning and AI, there has been tremendous success in the design of learning-enabled autonomous systems in areas such as autonomous driving, intelligent transportation, and robotics. However, these exciting developments are accompanied by new fundamental challenges that arise regarding the safety and reliability of these increasingly complex control systems in which sophisticated algorithms interact with unknown environments. In this talk, I will provide new insights and discuss exciting opportunities to address these challenges.
Imperfect learning algorithms, system unknowns, and uncertain environments require design techniques to rigorously account for uncertainties. I advocate for the use of conformal prediction (CP) — a statistical tool for uncertainty quantification — due to its simplicity, generality, and efficiency as opposed to existing optimization-based neural network verification techniques that are either conservative or not scalable, especially during runtime. I first provide an introduction to CP for the non-expert who is interested in applying CP to address real-world engineering problems. My goal is then to show how we can use CP to solve the problem of predicting failures of learning-enabled systems during their operation. Particularly, we leverage CP and design two predictive runtime verification algorithms (an accurate and an interpretable version) that compute the probability that a high-level system specification is violated. Finally, we will discuss how we can use robust versions of CP to deal with distribution shifts that arise when the deployed learning-enabled system is different from the system during design time.
Uncertainty Quantification and Data Provenance for Data Pipeline Security Analysis
ABSTRACT. Ensuring data integrity and reliability is essential for real-world applications, especially in automated decision-making and anomaly detection systems. In this study, we introduce a data pipeline augmentation tool that combines Uncertainty Quantification (UQ) techniques with Data Provenance Tracking to detect anomalies and shifts. By leveraging a task runner for pipeline orchestration, our approach ensures scalable, fault-tolerant execution while maintaining full traceability and monitoring at each processing stage. To validate our framework, we conduct two experiments using the Lawrence Berkeley National Laboratory (LBNL) Fault Detection and Diagnostics (FDD) datasets, focusing on Fan Coil Unit (FCU) operations in HVAC systems. Our experiments assess the pipeline's ability to detect anomalies under different corruption scenarios: (1) detecting corruption in a single pipeline stage, and (2) capturing inline data corruption. We integrate statistical tests, such as the Kolmogorov-Smirnov (KS) test, to identify distributional shifts between sequential data batches. Additionally, we apply UQ techniques to quantify uncertainty, enhancing confidence in detected anomalies. The results demonstrate that our work effectively identifies computational corruption, providing a robust and scalable solution for anomaly detection in real-world data pipelines.
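The KS-based shift check the abstract mentions compares the empirical CDFs of two data batches. The pure-Python implementation below is an illustrative stand-in for a library routine such as scipy.stats.ks_2samp, and the batch values are synthetic:

```python
# Illustrative two-sample KS statistic: the maximum gap between the
# empirical CDFs of two batches. A large value flags a distributional shift.

def ks_statistic(a, b):
    a, b = sorted(a), sorted(b)
    xs = sorted(set(a) | set(b))
    def ecdf(s, x):
        return sum(v <= x for v in s) / len(s)
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in xs)

baseline = [10.1, 10.3, 10.2, 10.4, 10.2, 10.3]   # clean sensor batch
shifted  = [12.1, 12.3, 12.2, 12.4, 12.2, 12.3]   # corrupted batch
d = ks_statistic(baseline, shifted)
```

Here the two batches do not overlap at all, so the statistic reaches its maximum of 1.0; in practice the statistic is compared against a critical value to decide whether to raise an anomaly.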
GAIA-X4AGEDA: Enabling Data-Driven and Adaptive Vehicle Architectures for the Mobility of the Future
ABSTRACT. The rapid digital transformation in the automotive industry necessitates novel approaches to vehicle software architecture, enabling seamless integration of data-driven applications and dynamic adaptation of vehicle functionalities over their lifecycle. The GAIA-X4AGEDA project, funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK), aims to develop a middleware for Software-Defined Vehicles (SDVs), leveraging GAIA-X principles to establish a secure, interoperable, and cloud-connected ecosystem for intelligent mobility solutions.
Traditional vehicle architectures are rigid and lack the capability to efficiently incorporate new applications and services post-production. This limitation hinders the realization of fully connected and adaptive mobility solutions. The GAIA-X4AGEDA project addresses this challenge by designing a modular and scalable software architecture for vehicles as edge devices, enabling continuous updates, real-time data processing, and seamless cloud integration.
ABSTRACT. Riding in an autonomous vehicle is a new experience for the general public, and at Zoox we prioritize safety and comfort for our passengers. While the vehicle can and does handle most situations on the road without human intervention, real-world situations such as construction zones arise that require the intervention of remote human operators. Zoox Mission Control, TeleGuidance, and Rider Support stand by ready to assist when necessary. This talk focuses on the middle group, TeleGuidance, which supports our skilled TeleOperators. While TeleGuidance interventions are rare, such interventions are necessary to ensure the safety of our passengers and other road users.
Round Table Discussion – Aligning CPS/IoT with Human Priorities
ABSTRACT. As smart infrastructure becomes embedded in everyday life, questions of equity and power become increasingly urgent. This interdisciplinary panel brings together scholars from sociology, political science, and computer science to explore how CPS/IoT systems concentrate or redistribute power. Panelists will examine how algorithms and sensors shape access, surveillance, and autonomy—especially for historically marginalized communities. Drawing from research on AI governance, housing inequality, and financial systems, the panel will ask: What would an equitable CPS/IoT infrastructure look like? How can democratic values be embedded in technical design? And where do disciplines like sociology and political science fit in shaping the future of intelligent systems?
Autonomy requires careful reasoning about the surrounding environment in order to safely interact with the world. This tutorial introduces Scenic, an open-source, probabilistic programming language for simulator-agnostic world modeling. Scenic enables the formal specification and generation of reactive, multi-agent scenarios for use in training, testing, and debugging autonomous systems. Widely adopted in both academia and industry, Scenic supports applications ranging from synthetic data generation to sim-to-real transfer. This session will demonstrate how Scenic can assist safe AI-based autonomy development across domains such as robotics, computer vision, and reinforcement learning.
AttackLLM: LLM-based Attack Pattern Generation for an Industrial Control System
ABSTRACT. Malicious examples are crucial for evaluating the robustness of machine learning algorithms under attack, particularly in Industrial Control Systems (ICS). However, collecting normal and attack data in ICS environments is challenging due to the scarcity of testbeds and the high cost of human expertise. Existing datasets are often limited by the domain expertise of practitioners, making the process costly and inefficient. The lack of comprehensive attack pattern data poses a significant problem for developing robust anomaly detection methods. In this paper, we propose a novel approach that combines data-centric and design-centric methodologies to generate attack patterns using large language models (LLMs). Our results demonstrate that the attack patterns generated by LLMs not only surpass the quality and quantity of those created by human experts but also offer a scalable solution that does not rely on expensive testbeds or pre-existing attack examples. This multi-agent approach presents a promising avenue for enhancing the security and resilience of ICS environments.
FL-DABE-BC: A Privacy-Enhanced, Decentralized Authentication, and Secure Communication for Federated Learning Framework with Decentralized Attribute-Based Encryption and Blockchain for IoT Scenarios
ABSTRACT. In the IoT world, where data privacy and security are paramount, Federated Learning (FL) is a distributed approach to training machine learning models that preserves privacy by processing information locally, without transferring sensitive sensor data. This study proposes a novel FL framework for IoT use cases that combines state-of-the-art security tools to address privacy and security issues. The proposed framework consists of Decentralized Attribute-Based Encryption (DABE) for decentralized authentication and data encryption, Homomorphic Encryption (HE) for safe computation on encrypted data, Secure Multi-Party Computation (SMPC) for privacy-aware collaborative computations, and blockchain for transparent communication, data integrity, and distributed ledger management. Data are encrypted locally on IoT devices with DABE, and initial models are trained on cloud servers over an immutable blockchain network that supports peer-to-peer authentication. The new model weights, encrypted using HE, are transferred to fog layers and aggregated using SMPC. The FL server then sanitizes the global model with differential privacy to avoid data leakage and distributes it to IoT devices for deployment. The framework addresses major challenges in secure decentralized learning, enabling privacy-preserving, efficient, and secure FL for IoT applications as well as real-time analytics and security in dynamic IoT environments.
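The aggregation step described above can be illustrated with a minimal sketch. This is not the paper's implementation: it replaces HE and full SMPC with simple pairwise additive masking (an SMPC-style secure-aggregation simplification) and models the server's differential-privacy step as Gaussian noise added to the averaged weights. All function names and parameters are hypothetical.

```python
import random

def mask_updates(updates):
    """Pairwise additive masking: each pair of clients shares a random mask
    that one adds and the other subtracts, so all masks cancel when the
    server sums the masked updates (a stand-in for the framework's HE+SMPC)."""
    n, dim = len(updates), len(updates[0])
    masked = [list(u) for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(dim):
                r = random.uniform(-1.0, 1.0)
                masked[i][k] += r   # client i adds the shared mask
                masked[j][k] -= r   # client j subtracts it
    return masked

def aggregate_with_dp(masked_updates, noise_scale=0.01):
    """Server sums the masked updates (masks cancel), averages them, and
    adds Gaussian noise for differential privacy before redistribution."""
    n, dim = len(masked_updates), len(masked_updates[0])
    avg = [sum(u[k] for u in masked_updates) / n for k in range(dim)]
    return [w + random.gauss(0.0, noise_scale) for w in avg]

# Three IoT clients each hold a local model update (plaintext here; in the
# framework these would be DABE/HE-protected before leaving the device).
updates = [[0.2, 0.4], [0.1, 0.3], [0.3, 0.5]]
global_model = aggregate_with_dp(mask_updates(updates), noise_scale=0.0)
print(global_model)  # with zero noise, approximately the exact mean [0.2, 0.4]
```

Note that no individual client's update is visible to the server: only the masked values are transmitted, and only their sum is meaningful, which is the core property the HE/SMPC layers of the proposed framework provide at cryptographic strength.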
The Welcome Reception will feature a special musical performance from E-SONIC! E-SONIC is the faculty band in the UCI Samueli School of Engineering, composed of professors from the Departments of Chemical and Biomolecular Engineering, Electrical Engineering and Computer Science, and Biomedical Engineering. The band’s ratio of enthusiasm to musicality is somewhat high, and they cover anything from engineering-themed favorites to crowd-pleasing sing-alongs.