Sparse Intersection Checking for Sparse Polynomial Zonotopes
ABSTRACT. Polynomial zonotopes are a non-convex set representation with many applications, such as reachability analysis of hybrid systems, control in robotics, and verification of nonlinear systems. These analysis methods often require intersection checking. The usual intersection-checking algorithm first overapproximates the polynomial zonotope by a zonotope and then, if the overapproximation is too large, splits the set and repeats recursively. Although the overapproximation in this algorithm converges to the original polynomial zonotope, two problems remain. First, the overapproximation error is not monotonically decreasing after each split. Second, and more critically, the split polynomial zonotopes do not preserve the sparsity structure of the original polynomial zonotope, eliminating the memory and other efficiency benefits made possible by the sparse structure. In this paper, we propose a variation of polynomial zonotopes called \emph{denormalized polynomial zonotopes}, where each factor variable does not need to be in the range [-1, 1]. We prove this slight modification eliminates the two issues above, while still guaranteeing that the overapproximation converges to the exact polynomial zonotope. We demonstrate the efficiency improvements through numerical experiments, where in certain cases denormalized polynomial zonotope intersection checking is over 400x faster and uses 550x less memory.
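The overapproximate-then-split loop described above can be pictured with a toy 1-D analogue: bound the range of a polynomial over its factor domain with interval arithmetic, and bisect the domain whenever the bound is inconclusive. This is only a sketch of the generic branch-and-bound pattern under our own simplifications (scalar polynomial, interval bounds, made-up function names), not the paper's sparse polynomial zonotope algorithm:

```python
def poly_range(coeffs, lo, hi):
    """Interval overapproximation of sum(c_i * a**i) for a in [lo, hi]."""
    rlo = rhi = 0.0
    for i, c in enumerate(coeffs):
        cands = [lo ** i, hi ** i]
        if lo < 0 < hi and i % 2 == 0 and i > 0:
            cands.append(0.0)          # even powers dip to 0 when the domain straddles it
        plo, phi = min(cands), max(cands)
        if c >= 0:
            rlo += c * plo
            rhi += c * phi
        else:
            rlo += c * phi
            rhi += c * plo
    return rlo, rhi

def intersects(coeffs, target, lo=-1.0, hi=1.0, tol=1e-6):
    """Branch-and-bound: overapproximate the range, decide if possible,
    otherwise split the factor domain and recurse on both halves."""
    rlo, rhi = poly_range(coeffs, lo, hi)
    tlo, thi = target
    if rhi < tlo or thi < rlo:
        return False                   # overapproximation misses the target entirely
    if tlo <= rlo and rhi <= thi:
        return True                    # overapproximation lies inside the target
    if hi - lo < tol:
        return True                    # cannot refine further: conservatively report contact
    mid = 0.5 * (lo + hi)
    return intersects(coeffs, target, lo, mid, tol) or \
           intersects(coeffs, target, mid, hi, tol)
```

For example, the range of p(a) = a^2 over [-1, 1] is [0, 1], so the set meets the target interval (0.5, 0.6) but not (2.0, 3.0); the splitting is what makes the conservative interval bound eventually decisive.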
Indistinguishability in Localization and Control with Coarse Information
ABSTRACT. We study localization and control problems in which agent dynamics are described by difference or differential equations, while output measurements are collected at discrete times and given by finite-valued maps depending on possibly unknown landmark locations. Guided by the goal of understanding fundamental limitations imposed by such coarse measurements, we focus on characterizing indistinguishable states of the system, i.e., agent-landmark pairs that produce identical observations under all control inputs. We show that indistinguishability relations can be checked automatically under mild assumptions and, since indistinguishability is a special type of bisimulation, we develop an iterative algorithm for approximately computing it. We then introduce an analytical approach, rooted in observability theory of linear control systems, which iteratively computes a sequence of subspaces converging in finitely many steps to the indistinguishable subspace; a differential-geometric extension to nonlinear systems is also outlined.
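For context, in the classical LTI special case the indistinguishable subspace reduces to the textbook unobservable subspace, the kernel of the observability matrix [C; CA; ...; CA^(n-1)]. A minimal numpy sketch of that standard computation (function name and tolerance are ours; this is background, not the paper's algorithm for coarse, finite-valued measurements):

```python
import numpy as np

def unobservable_subspace(A, C, tol=1e-9):
    """Basis for ker [C; CA; ...; CA^(n-1)], the unobservable subspace.
    For an LTI system, initial states in this subspace produce identical
    outputs under every input, i.e., are indistinguishable from the origin."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    blocks = [np.asarray(C, dtype=float)]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)   # stack C, CA, ..., CA^(n-1)
    O = np.vstack(blocks)
    _, s, vt = np.linalg.svd(O)         # null space via SVD
    rank = int(np.sum(s > tol))
    return vt[rank:].T                  # columns span ker(O)
```

With A = diag(1, 2) and C = [1, 0], the second state never reaches the output, so the returned basis spans the e2-axis.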
Stationary regimes of piecewise linear dynamical systems with priorities
ABSTRACT. Dynamical systems governed by priority rules appear in the modeling of emergency organizations and road traffic. These systems can be modeled by piecewise linear time-delay dynamics, specifically using Petri nets with priority rules. A central question is to show the existence of stationary regimes (i.e., steady state solutions)---taking the form of invariant half-lines---from which essential performance indicators like the throughput and congestion phases can be derived. Our primary result proves the existence of stationary solutions under structural conditions involving the spectrum of the linear parts within the piecewise linear dynamics. This extends a fundamental theorem due to Kohlberg (1980) from nonexpansive dynamics to a broader class of systems. The proof of our result relies on topological degree theory and the notion of "Blackwell optimality" from the theory of Markov decision processes. Finally, we validate our findings by demonstrating that these structural conditions hold for a wide range of dynamics, especially those stemming from Petri nets with priority rules, and we illustrate our results on real-world examples from road traffic management and emergency call center operations.
Sharc: Simulator for Hardware Architecture and Real-time Control
ABSTRACT. Tight coupling between computation, communication, and control pervades the design and application of cyber-physical systems (CPSs). Due to the complexity of these systems, advanced design procedures that account for these tight interconnections are paramount to ensure the safe and reliable operation of control algorithms under computational constraints. In this paper, we present a tool, called Simulator for Hardware Architecture and Real-time Control (Sharc), to assist in the co-design of control algorithms and the computational hardware on which they are run. Sharc simulates a user-specified control algorithm on a user-specified microarchitecture, evaluating how computational constraints affect the performance of the control algorithm and the safety of the physical system. We illustrate the power of Sharc by examining an adaptive cruise control example and a cart-pole example.
IEEE TCRTS Outstanding Technical Achievement and Leadership Award Lecture -- From Systems to Hardware and Back: Reaping the Benefits of Cross-Layer Design
Argus: Multi-View Egocentric Human Mesh Reconstruction Based on Stripped-Down Wearable mmWave Add-on
ABSTRACT. In this paper, we propose Argus, a wearable add-on system based on stripped-down (i.e., compact, lightweight, low-power, limited-capability) mmWave radars. It is the first to achieve egocentric human mesh reconstruction in a multi-view manner. Compared with conventional frontal-view mmWave sensing solutions, it addresses several pain points, such as restricted sensing range, occlusion, and the multipath effect caused by surroundings. To overcome the limited capabilities of the stripped-down mmWave radars (with only one transmit antenna and three receive antennas), we tackle three main challenges and propose a holistic solution, including tailored hardware design, sophisticated signal processing, and a deep neural network optimized for high-dimensional complex point clouds. Extensive evaluation shows that Argus achieves performance comparable to traditional solutions based on high-capability mmWave radars, with an average vertex error of 6.5 cm, solely using stripped-down radars deployed in a multi-view configuration. It remains robust and practical across conditions such as unseen users and different host devices.
Ultra-High-Frequency Harmony: mmWave Radar and Event Camera Orchestrate Accurate Drone Landing
ABSTRACT. For precise, efficient, and safe drone landings, ground platforms must locate descending drones accurately and in real time and guide them to designated spots. While mmWave sensing combined with cameras improves localization accuracy, the lower sampling frequency of traditional frame cameras compared to mmWave radar creates bottlenecks in system throughput. In this work, we replace the traditional frame camera with an event camera, a novel sensor whose sampling frequency harmonizes with mmWave radar within the ground platform setup, and introduce mmE-Loc, a high-precision, low-latency ground localization system designed for drone landings. To fully leverage the temporal consistency and spatial complementarity between these modalities, we propose two innovative modules, consistency-instructed collaborative tracking and graph-informed adaptive joint optimization, for accurate drone measurement extraction and efficient sensor fusion. Extensive real-world experiments in landing scenarios from a leading drone delivery company demonstrate that mmE-Loc outperforms state-of-the-art methods in both localization accuracy and latency.
mmET: mmWave Radar-Based Eye Tracking on Smart Glasses
ABSTRACT. With the growing popularity of VR and AR devices, eye tracking has become a critical user interface and input modality for on-device AI agents. However, a compact, power-efficient, and robust eye tracking solution for AR/smart glasses remains an unsolved challenge. In this paper, we present mmET, the first mmWave radar-based eye tracking system on glasses. Our system, implemented as a pair of prototype glasses, utilizes sub-1 cm mmWave radars placed near the eyes. The radars transmit FMCW signals and capture the reflections from the eyes and surrounding skin as the system input. To refine gaze estimation accuracy and data efficiency, we propose several novel methods: (1) concatenating multiple chirps and beamforming with learnable weights to improve resolution, (2) a novel neural network architecture to enhance robustness against remounting, (3) pretraining with contrastive loss to enable fast adaptation for new users. Experiments with 16 participants show that mmET achieves an average angular gaze direction error of 1.49° within sessions and 4.47° across remounting sessions, and reduces the training data needed for new users by 80% using the pretrained model.
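The beamforming step in (1) can be pictured with a conventional delay-and-sum beamformer on a small uniform linear array; in mmET the per-antenna weights would be learnable parameters, whereas the sketch below uses fixed steering weights and a toy array geometry of our own choosing:

```python
import numpy as np

def steering_weights(n_rx, spacing_wl, angle_rad):
    """Steering vector for a uniform linear array (antenna spacing in wavelengths)."""
    phases = 2 * np.pi * spacing_wl * np.arange(n_rx) * np.sin(angle_rad)
    return np.exp(1j * phases)

def beamform(rx, weights):
    """Weighted coherent sum across antennas; rx has shape (n_rx, n_samples)."""
    return weights.conj() @ rx / len(weights)
```

Steering the combiner toward the true arrival angle sums the per-antenna signals coherently (high output power); a mismatched steering angle sums them incoherently, which is what gives the array its angular selectivity.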
Proteus: Enhanced mmWave Leaf Wetness Detection with Cross-Modality Knowledge Transfer
ABSTRACT. Accurate leaf wetness detection is essential to understanding plant health and growth conditions. The mmWave radar, with its sensitivity to subtle changes, is well-suited for leaf wetness detection. Existing mmWave-based approaches utilize the Synthetic Aperture Radar (SAR) algorithm to generate image-like inputs and rely on multi-modality fusion with an RGB camera to classify leaf wetness. However, the lack of understanding of SAR-based mmWave imaging limits its accuracy in various environments. This paper presents Proteus, a novel way of understanding mmWave SAR imaging. It includes an informative mmWave imaging processing system for accurate leaf wetness detection and duration measurement. We design a noise reduction algorithm to reduce speckle noise and improve image clarity for SAR-based mmWave imaging. Then, we incorporate phase angle data to enrich SAR texture information to capture high-resolution surface details, increasing informative features for precise wetness assessment in complex plant structures. Additionally, we introduce a cross-modality Teacher-Student network, using an RGB-based teacher model to guide the mmWave SAR-based student model for feature extraction. This network transfers the explicit knowledge in the RGB image domain to the mmWave image domain. We use commercial-off-the-shelf mmWave radar to prototype Proteus. The evaluation results show that Proteus achieves up to 96% accuracy across varied environmental scenarios, outperforming state-of-the-art methods.
Improving mmWave based Hand Hygiene Monitoring through Beam Steering and Combining Techniques
ABSTRACT. We introduce BeaMsteerX (BMX), a novel mmWave hand hygiene gesture recognition technique that improves accuracy at longer ranges (1.5 m). BMX steers a mmWave beam towards multiple directions around the subject, generating multiple views of the gesture that are then intelligently combined using deep learning to enhance gesture classification. We evaluated BMX using off-the-shelf mmWave radars, collecting a total of 7,200 hand hygiene gesture samples from 10 subjects performing a 6-step hand-rubbing procedure with sanitizer, as recommended by the World Health Organization, at 1.5 m (over 5 times longer than in prior works). BMX outperforms state-of-the-art approaches by 31-43% and achieves 91% accuracy at boresight by combining only two beams, demonstrating superior gesture classification in low-SNR scenarios. BMX maintained its effectiveness even when the subject was positioned 30 degrees away from boresight, exhibiting a modest 5% drop in accuracy.
Composition of Constraints in Long Duration Autonomy
ABSTRACT. When robots are to be deployed over long time scales, optimality should take a backseat to “survivability”. In other words, it is more important that the robots do not experience mission-ending failures, such as collisions or complete depletion of their energy sources, rather than that they perform certain tasks as effectively as possible. For example, in the context of multi-agent robotics, we have a fairly good understanding of how to design coordinated control strategies for making teams of mobile robots achieve geometric objectives, such as assembling shapes or covering areas. But, what happens when these geometric objectives no longer matter all that much? In this talk, we consider this question of long duration autonomy for teams of robots that are deployed in an environment over a sustained period of time and that can be recruited to perform a number of different tasks in a distributed, safe, and provably correct manner. Survival will be encoded through Boolean composition of barrier functions. But, as these operations are non-differentiable, a non-smooth barrier function machinery is needed to manage the resulting complex, hybrid system dynamics.
ABSTRACT. Batteryless IoT systems face energy constraints exacerbated by checkpointing overhead. Approximate computing offers solutions but demands manual expertise, limiting scalability. This paper presents CheckMate, an automated framework leveraging LLMs for context-aware code approximations. CheckMate integrates validation of LLM-generated approximations to ensure correct execution and employs Bayesian optimization to fine-tune approximation parameters autonomously, eliminating the need for developer input. Tested across six IoT applications, it reduces power cycles by up to 60% with an accuracy loss of just 8%, outperforming semi-automated tools like ACCEPT in speedup and accuracy. CheckMate’s results establish it as a robust, user-friendly tool and a foundational step toward automated approximation frameworks for intermittent computing.
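The parameter-tuning loop can be pictured with a toy approximation knob: loop perforation of a mean computation, with a simple grid search standing in for CheckMate's Bayesian optimization. The function names, the stride knob, and the error budget below are illustrative stand-ins, not CheckMate's actual API:

```python
import numpy as np

def perforated_mean(xs, stride):
    """Loop perforation: average only every `stride`-th element,
    trading accuracy for proportionally less work per power cycle."""
    return float(np.mean(xs[::stride]))

def tune_stride(xs, max_err, max_stride=16):
    """Pick the largest stride (most work skipped) whose result stays
    within the error budget -- a grid-search stand-in for the Bayesian
    optimization that tunes approximation parameters autonomously."""
    exact = float(np.mean(xs))
    best = 1
    for s in range(1, max_stride + 1):
        if abs(perforated_mean(xs, s) - exact) <= max_err:
            best = max(best, s)
    return best
```

The validation step in the real framework plays the role of the `abs(... - exact) <= max_err` check here: an approximation is only accepted if its output remains acceptably close to the exact computation.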
ABSTRACT. Internet-of-Things (IoT) devices are increasingly common in both consumer and industrial settings, often performing safety-critical functions. Although securing these devices is vital, manufacturers typically neglect security issues or address them as an afterthought. This is of particular importance in IoT networks, e.g., in industrial automation settings.
To this end, network attestation – verifying the software state of all devices in a network – is a promising mitigation approach. However, current network attestation schemes have certain shortcomings: (1) lengthy TOCTOU (Time-Of-Check-Time-Of-Use) vulnerability windows, (2) high latency and resource overhead, and (3) susceptibility to interference from compromised devices. To address these limitations, we construct TRAIN (TOCTOU-Resilient Attestation for IoT Networks), an efficient technique that minimizes TOCTOU windows, ensures constant-time per-device attestation, and maintains resilience even with multiple compromised devices. We demonstrate TRAIN’s viability and evaluate its performance via a fully functional and publicly available prototype.
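As background, per-device attestation commonly reduces to a challenge-response MAC over the device's software state. The sketch below shows only that generic primitive; shared-key management, TOCTOU hardening, and network-wide aggregation (the parts TRAIN actually contributes) are deliberately out of scope, and all names are ours:

```python
import hmac
import hashlib

def attest(firmware: bytes, key: bytes, nonce: bytes) -> bytes:
    """Device side: MAC over a fresh verifier nonce and the firmware image.
    The nonce prevents replay of an old report."""
    return hmac.new(key, nonce + firmware, hashlib.sha256).digest()

def verify(report: bytes, expected_fw: bytes, key: bytes, nonce: bytes) -> bool:
    """Verifier side: recompute the MAC over the expected firmware and
    compare in constant time."""
    return hmac.compare_digest(report, attest(expected_fw, key, nonce))
```

A report computed over tampered firmware fails verification against the expected image; the TOCTOU problem the paper targets is precisely that this check only speaks about the instant the MAC was computed.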
SecureGaze: Defending Gaze Estimation Against Backdoor Attacks
ABSTRACT. Gaze estimation models are widely used in applications such as driver attention monitoring and human-computer interaction. While many methods for gaze estimation exist, they rely heavily on data-hungry deep learning to achieve high performance. This reliance often forces practitioners to harvest training data from unverified public datasets, outsource model training, or rely on pre-trained models. However, such practices expose gaze estimation models to backdoor attacks. In such attacks, adversaries inject backdoor triggers by poisoning the training data, creating a backdoor vulnerability: the model performs normally with benign inputs, but produces manipulated gaze directions when a specific trigger is present. This compromises the security of many gaze-based applications, such as causing the model to fail in tracking the driver's attention. To date, there is no defense that addresses backdoor attacks on gaze estimation models. In response, we introduce SecureGaze, the first solution designed to protect gaze estimation models from such attacks. Unlike classification models, defending gaze estimation poses unique challenges due to its continuous output space and globally activated backdoor behavior. By identifying distinctive characteristics of backdoored gaze estimation models, we develop a novel and effective approach to reverse-engineer the trigger function for reliable backdoor detection. Extensive evaluations in both digital and physical worlds demonstrate that SecureGaze effectively counters a range of backdoor attacks and outperforms seven state-of-the-art defenses adapted from classification models.
Membership Inference Against Self-supervised IMU Sensing Applications
ABSTRACT. Deep learning has revolutionized the use of inertial measurement unit (IMU) sensors in mobile applications, such as human activity recognition. Building on the success of pre-trained models across various domains, recent studies have increasingly adopted self-supervised learning (SSL) for a range of sensing tasks. While these SSL approaches improve generalization and reduce labeling requirements, their privacy implications have received limited attention. This paper addresses this gap by examining IMU data privacy during pre-training through membership inference. Our work serves two important purposes: First, it enables data owners to verify if their data was used without permission in encoder pre-training. Second, it demonstrates how adversaries might compromise sensitive human sensing data used in pre-training. To enhance the practicality of membership inference on unlabeled IMU sensing data across different SSL algorithms, we introduce an activity labeling module and a novel perturbation strategy to exploit encoder overfitting characteristics on training data. When an encoder overfits, it memorizes training data rather than learning generalizable patterns. Therefore, when comparing the original data to the perturbed version, the encoder generates more distinct feature vectors for samples from its training set than for samples it has never seen before. We evaluate our membership inference methods on two mainstream SSL methods across multiple datasets, demonstrating that our method can achieve relatively high precision and recall at low false positive rates.
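The perturbation signal can be illustrated with a synthetic 1-D "encoder" that memorizes its training points, so a small input perturbation moves the feature vector further for members than for non-members. The bump-shaped encoder, its parameters, and the score function below are our stand-ins for exposition, not the paper's models, datasets, or attack:

```python
import numpy as np

def encoder(x, train, alpha=3.0, sigma=0.05):
    """Toy 'overfit' encoder: identity plus sharp bumps around each memorized
    training point, so features change fastest near the training data."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = x.copy()
    for t in train:
        d = x - t
        out = out + alpha * d * np.exp(-(d ** 2) / sigma ** 2)
    return out

def membership_score(x, train, eps=0.01):
    """Feature shift between the original and perturbed input; under
    overfitting this shift is larger for training members."""
    return float(np.abs(encoder(x + eps, train) - encoder(x, train))[0])
```

Here a point that was in the training set sits at the bottom of a sharp bump, so perturbing it by `eps` produces a larger feature displacement than the same perturbation applied to an unseen point, which is the distinguishing statistic the inference thresholds.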