DSD2021: EUROMICRO CONFERENCE ON DIGITAL SYSTEMS DESIGN 2021
PROGRAM FOR THURSDAY, SEPTEMBER 2ND

09:00-10:00 Session 8: KEYNOTE 3 SEAA

Keynote 3 - Prof. Genoveffa Tortora - University of Salerno (Italy) - Are developers and software engineers simple gear of the wheel?

10:00-11:30 Session 9A: FPGA APPLICATIONS

FPGA 2

10:00
Massively parallel binary neural network inference for detecting ships in FPGA systems on the edge

ABSTRACT. This paper presents the development of a ship-detecting edge-processing system for deployment on an aerial FPGA platform. The ship detection chain combines imager-specific pre-processing algorithms, massively parallel FPGA neural network inference, and host post-processing procedures. The ship-detection binary neural network, implemented in combinational logic, enables high frame and detection rates and achieves 98.40% patch classification accuracy. A new algorithm for optimizing a combinational binary neural network circuit is presented: it merges multiple neurons in a network layer by exploiting similarities between neuron weights, which reduces logic size and power consumption. As a result, state-of-the-art performance is achieved compared to similar previous work on combinational binary neural networks: 38 ns inference latency, 0.425 W of power dissipation, and only 19k FPGA slices.

10:30
Cache-accel: FPGA Accelerated Cache Simulator with Partially Reconfigurable Prefetcher

ABSTRACT. Computer architects need to choose design configurations that work effectively across the most commonly used workloads. Design space exploration of caches enables the architect to choose the right configuration based on metrics such as hit rates, power, area, and timing. Although the idea of a cache simulator is not new, hardware/FPGA implementations of such simulators have not been well explored. We implement an FPGA-accelerated, parameterized two-level cache simulator called Cache-accel, which can be partially reconfigured to include prefetching. The key motivation is the speed with which design space exploration can be carried out by exploiting the parallelism available in an FPGA, together with accuracy comparable to a software simulator. The hit/miss ratios from the simulator are compared with software and model-based simulators such as ChampSim and Snipersim in terms of speed and the ability to run parallel configurations. Cache-accel reports cache metrics such as hit/miss rates with and without prefetching for multiple cache configurations in parallel, along with timing, area, and power analysis as implemented on the FPGA. We run a set of SPEC 2017 benchmarks on Cache-accel and find that it generates hit/miss rates for several parallel configurations nearly 40x and 6x faster on average than ChampSim and Snipersim, respectively.
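
For readers who want a concrete picture of what such a simulator computes, the following is a minimal, purely illustrative Python sketch of a single-level set-associative cache driven by an address trace. It is not the Cache-accel design (which is a parallel, two-level FPGA implementation with a reconfigurable prefetcher); the sizes, trace, and class name are assumptions chosen only to show the hit/miss accounting.

    # Minimal software sketch of the kind of hit/miss accounting a cache
    # simulator performs. Purely illustrative; not the Cache-accel hardware design.
    from collections import OrderedDict

    class SetAssociativeCache:
        def __init__(self, size_bytes=32768, line_bytes=64, ways=4):
            self.line_bytes = line_bytes
            self.ways = ways
            self.num_sets = size_bytes // (line_bytes * ways)
            # One LRU-ordered dict of tags per set.
            self.sets = [OrderedDict() for _ in range(self.num_sets)]
            self.hits = self.misses = 0

        def access(self, address):
            line = address // self.line_bytes
            index = line % self.num_sets
            tag = line // self.num_sets
            s = self.sets[index]
            if tag in s:                      # hit: refresh LRU position
                s.move_to_end(tag)
                self.hits += 1
            else:                             # miss: fill, evicting LRU line if the set is full
                if len(s) >= self.ways:
                    s.popitem(last=False)
                s[tag] = True
                self.misses += 1

    cache = SetAssociativeCache()
    for addr in [0x1000, 0x1040, 0x1000, 0x9000, 0x1040]:  # toy address trace
        cache.access(addr)
    print(cache.hits, cache.misses)  # 2 hits, 3 misses

A hardware realization such as Cache-accel evaluates many such configurations in parallel, which is where the reported speedups over ChampSim and Snipersim come from.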

10:50
FPGA-based real-time monitoring support for CANOpen applications

ABSTRACT. This work deals with CANOpen, a popular high-level protocol in automotive and robotics applications with various levels of criticality, which therefore requires strict reliability and performance guarantees. While software implementations of CAN-based monitoring applications are very flexible, they may face prohibitive overheads in terms of latency and responsiveness. We present a customizable hardware-based CANOpen filter, designed to enable real-time monitoring and anomaly detection, which can be employed in critical systems with stringent response-time requirements. As shown in the paper, a customizable FPGA-based filter can overcome the limitations of software solutions by drastically reducing latency (around 10X compared to software), showing that the adoption of FPGA technologies in a critical industrial environment can bring key benefits in terms of real-time features and flexibility.

10:00-11:30 Session 9B: NETWORK ON CHIP

NETWORK ON CHIP

10:00
Fast Simulation of a Many-NPU Network-on-Chip for Microarchitectural Design Space Exploration

ABSTRACT. A viable solution to cope with the ever-increasing computational complexity of deep learning applications is to integrate many neural processing units (NPUs) in a chip, using a network-on-chip (NoC) as the communication fabric. Since the design space of an NoC is huge, the network topology is first selected based on the communication patterns of the applications with a high-level performance estimation method. After the network topology is selected, microarchitectural design space exploration is performed with a cycle-level NoC simulator. However, existing NoC simulators are so slow that design space exploration of the microarchitecture is usually conducted manually in a narrow space, and because synthetic traces are used, simulation accuracy is also limited. To overcome these weaknesses, we present a simulation technique that is fast and accurate enough for microarchitectural design space exploration of an NoC. In the proposed technique, we use real communication traces obtained from a many-NPU simulation performed without NoC consideration. To this end, we define a trace format that specifies the interface between the many-NPU simulator and the NoC simulator. To accelerate simulation, we propose a cluster-level parallelization technique for simulating the hierarchical NoC. The key idea is to manage the timestamps of events at cluster boundaries so that no time-synchronization error occurs. In addition, we adjust the abstraction level of the simulation models to reduce the number of modules in the SystemC NoC simulation. With the proposed technique, we achieve up to 40 times speed-up for a 32-NPU system compared with the FlexNoC simulator.

10:30
Architectural Implementation of a Reconfigurable NoC Design for Multi-Applications

ABSTRACT. With the increasing number of applications running on a Network-on-Chip (NoC) based System-on-Chip (SoC), there is a need to design a reconfigurable NoC platform that achieves acceptable performance for all applications. Different methods have been presented in the literature for developing a reconfigurable NoC platform by reconfiguring topology, routers, links, and switching techniques. This paper proposes a novel architecture for adding reconfiguration logic to an NoC platform executing multiple applications. The proposed architecture reconfigures the assignment of SoC modules to the routers in the NoC with the help of tri-state buffers, based on the applications running. The area overhead of the reconfiguration circuitry is small, approximately 0.9% of the area consumed by the communication network, and the power consumed by the additional logic is only 1% of the total power consumed by the router network. The proposed reconfiguration logic and a modified multiplexer-based reconfiguration logic from the literature are developed in Verilog HDL, applied to the NoC router platform, and simulated using Vivado Design Suite 2016.2 for functional verification. A hardware-software co-design environment is developed to generate the selection logic for reconfiguration and to implement the design on an FPGA evaluation board. The performance parameters of the architectures are evaluated with the Cadence synthesis tool, and the results show that the proposed tri-state-buffer-based reconfiguration logic consumes on average 28% less area and 25% less power than the modified multiplexer-based reconfiguration logic, while also operating at a higher speed.

10:50
Network-on-ReRAM for Scalable Processing-in-Memory Architecture Design

ABSTRACT. Non-volatile metal-oxide resistive random access memory (ReRAM) is an emerging alternative to current memory technologies. The unique capability of ReRAM to perform analog and digital arithmetic and logic operations enables this technology to incorporate both computation and memory capabilities in the same unit. Due to this interesting property, there has been a growing trend in recent years to implement emerging data-intensive applications on ReRAM structures. A typical ReRAM-based processing-in-memory architecture may consist of tens to hundreds of ReRAM units (mats) that can either store or process data. To support such a large-scale ReRAM structure, this paper proposes a scalable network-on-ReRAM architecture. The proposed network employs a novel associative router architecture designed around ReRAM-based content-addressable memories. With its in-memory packet processing capability, this router yields higher throughput and resource utilization than a conventional router. The router is technology-compatible with ReRAM and, as our evaluations show, employing it to build a network-on-ReRAM makes emerging ReRAM-based processing-in-memory architectures more scalable and performance-efficient.

11:30-13:00 Session 10A: ANOMALIES, SECURITY AND PROTECTION
11:30
Employing the Concept of Multilevel Security to Generate Access Protection Configurations for Automotive On-Board Networks

ABSTRACT. Future automotive on-board networks are expected to integrate various functions on centralized processing platforms. In combination with attack surfaces that originate from the external connectivity of modern vehicles, this tight integration turns the design of secure on-board networks into a challenging endeavor. We present a model to describe both confidentiality and integrity requirements of applications in such a network using security levels. The model utilizes an existing design methodology to synthesize configurations for access protection units of commercially available MPSoCs and ensures that the information flow policies enforced by such configurations are consistent with the specified confidentiality and integrity requirements.

12:00
Protecting IoT Devices through a Hardware-driven Memory Verification

ABSTRACT. Internet of Things (IoT) devices are appearing in all aspects of our digital life. As such, they have become prime targets for attackers and hackers. Adequate protection against attacks is only possible when the confidentiality and integrity of the data and applications of these devices are secured. State-of-the-art solutions mostly address software and network attacks but overlook physical/hardware attacks, which can still exploit software vulnerabilities or even introduce them. In this paper, we present embedded memory security (EMS), which protects against physical tampering with the memory of IoT devices. As a case study, we have equipped a RISC-V based system-on-chip (SoC) with an EMS module. Our experimental results show that EMS can successfully protect the SoC against hardware tampering attacks while incurring a low performance overhead.

12:30
Comparative Evaluation of Semi-Supervised Anomaly Detection Algorithms on High-Integrity Digital Systems

ABSTRACT. Anomaly detection algorithms solve the problem of identifying unexpected values in data sets. Such algorithms have classically been used for cleaning unlabelled data sets of potentially unwanted values. However, the ability to detect outlying values in data sets can also be used to detect anomalies in systems. Semi-supervised anomaly detection algorithms learn from data representing known correct behavior and have been used in various fields, e.g., system security, fault detection, and medical applications. In this paper, we use the AUROC score to evaluate algorithms for semi-supervised anomaly detection when applied to high-integrity distributed digital systems.

11:30-13:00 Session 10B: MODELING AND SIMULATION

MODELING AND SIMULATION

11:30
Experimental Evaluation of Statistical Model Checking Methods for Probabilistic Timing Analysis of Multiprocessor Systems

ABSTRACT. Timing prediction of complex parallel data-flow applications on multiprocessor systems is a difficult task due to the complex interference experienced by software running on the platform's shared resources. In this domain, classical analytical or simulation-based approaches exhibit scalability issues when trying to deliver fast yet accurate predictions. In this work, we present an experimental evaluation of new simulation-based statistical methods for the timing analysis of multiprocessor systems. We adopt a measurement-based approach for the creation of probabilistic system-level models of the studied systems. The efficiency of the statistical methods is evaluated for platforms with different levels of complexity in terms of shared resources. We compare our approach against measurement and traditional simulation methods on two case studies from the computer vision domain: a Sobel filter and a JPEG decoder. We show that, in terms of accuracy and execution time, our simulation approach has good potential for fast yet accurate design space exploration.

12:00
Near-Data-Processing Architectures Performance Estimation and Ranking using Machine Learning Predictors

ABSTRACT. The near-data processing (NDP) paradigm has emerged as a promising solution for the memory wall challenges of future computing architectures. Modern 3D-stacked DRAM systems can be exploited to prevent unnecessary data movement between the main memory and the CPU. To date, no standardized simulation frameworks or benchmarks are available for the systematic evaluation of NDP systems. Identifying which type of high-performance 3D memory is suitable to use in an NDP system remains a challenge. This is mainly due to the fact that understanding the interactions between modern workloads and the memory subsystem is not a trivial task. Each memory type has its advantages and drawbacks. Additionally, memory access patterns vary greatly across applications. As a result, the performance of a given application on a given memory type is difficult to intuitively predict. There is no specific memory type that can effectively provide high performance for all applications. In this work, we propose a machine learning framework that can efficiently decide which NDP system is suitable for an application. The framework relies on performance prediction based on an input set of application characteristics. For each NDP system we are examining, we build a machine learning model that can accurately predict performance of previously unseen applications on this system. Our models are on average 200x faster than architectural simulation. They can accurately predict performance with coefficients of determination ranging between 0.88 and 0.92, and root mean square errors ranging between 0.07 and 0.18.

12:30
Towards Machine Learning Support for Embedded System Tests

ABSTRACT. The correctness of embedded systems needs to be ensured by a large number of tests, and large amounts of data reflecting system behavior are collected during these test runs. Automated test evaluations are often limited to checking very specific requirements, which can hardly cover all possible kinds of erroneous behavior. Manual examination can compensate for this through the implicit knowledge of experienced test engineers, but it is very time-consuming and therefore costly. This paper shows how machine learning can support the evaluation of embedded system tests. The assessment of new test runs is based on available data from previous tests and aims at identifying those runs that deviate from usual behavior. Moreover, the paper presents a generic approach that helps to find the detection algorithm best suited to the given context. A case study demonstrates the effectiveness of this approach, and quantitative comparisons show that our exploration is able to find solutions that outperform state-of-the-art methods.

14:30-15:30 Session 11: KEYNOTE 4 DSD

KEYNOTE 4 - Prof. Dr. Marko Bertogna - University of Modena-Reggio Emilia (Italy) - Next-Generation Embedded Platforms for Robotics

15:30-16:30 Session 12A: Intelligent Transportation Systems (ITS) 1
15:30
Checkpointing Period Optimization of Distributed Fail-Operational Automotive Applications

ABSTRACT. Achieving cost-efficient fail-operational behavior of safety-critical software is crucial for autonomous systems. However, most applications hold state, so a checkpoint is required to enable safe recovery. Here, the challenge is to find the maximum possible checkpointing period while minimizing network and computational overhead. For this purpose, we present an approach to analytically derive the maximum checkpointing period by giving an upper bound on the number of computational steps missed due to failure effects. Worst-case results of our case study using a SLAM application are consistent with our analytically derived exact bound. Overall, our approach determines the maximum achievable checkpointing period, reducing network overhead in order to achieve cost-efficient and safe behavior of autonomous systems.

16:00
MPC-Based Speed Tracking for Automated Urban Buses Performing V2I Communications with Traffic Lights

ABSTRACT. Research on Intelligent Transportation Systems (ITS) is steadily bringing Cooperative and Connected Automated Mobility (CCAM) technology in vehicles to maturity. This technology includes embedded functionalities and interaction with the environment, and this interaction should include traffic-light information, a key element for the future deployment of CCAM in urban and inter-urban scenarios. In this paper, an MPC controller for optimal speed behavior of an automated urban bus is presented. Traffic-light information obtained through V2I communication is used to achieve safe and comfortable behavior in a real-condition demonstrator. Results show good performance of our approach, validated in the city of Malaga during one month of testing with a twelve-meter bus connected to the infrastructure, in the framework of the CCAM projects.

15:30-16:30 Session 12B: Applications, Architectures, Methods and Tools for Machine - and Deep Learning (AAMTM) 1
15:30
TRe-Map: Towards Reducing the Overheads of Fault-Aware Retraining of Deep Neural Networks by Merging Fault Map

ABSTRACT. Recently, fault-aware retraining has emerged as a promising approach to improve the error resilience of Deep Neural Networks (DNNs) against manufacturing-induced defects in DNN accelerators. However, state-of-the-art fault-aware retraining techniques incur an enormous retraining overhead because each chip must be retrained for its unique fault map, which may render the approach practically infeasible if retraining is done on large datasets. To address this major limitation and improve the practicality of fault-aware retraining, this work proposes the novel concept of merging fault maps to retrain a DNN for a group of faulty chips in a single fault-aware retraining round. Merging fault maps avoids per-chip retraining and thereby reduces the retraining overhead significantly. However, merging fault maps introduces new challenges, such as training divergence (accuracy collapse) if a high number of accumulated faults is injected into the network in the first epoch. To address these challenges, we propose a methodology for effectively merging fault maps and then retraining DNNs. Experimental results show that our methodology offers at least a 1.4x retraining speedup on average while improving the error resilience of the network (depending on the DNN model and the number of merged fault maps). For example, for the ResNet-32 model, using a fault map merged from 5 chip fault maps at a fault rate of 6e-3, our methodology offers a 2x retraining speedup with only a 0.6% classification accuracy drop compared to per-chip retraining.
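
As a rough illustration of the merging idea (not the authors' implementation; the boolean-mask fault representation and the stuck-at-zero injection below are assumptions), a merged fault map can be viewed as the union of the per-chip fault masks, which is then applied to the weights during retraining:

    # Illustrative sketch of merging per-chip fault maps into one map and
    # injecting the merged faults into a weight tensor before a retraining step.
    # The boolean-mask representation and stuck-at-zero model are assumptions,
    # not the paper's exact fault model.
    import numpy as np

    def merge_fault_maps(fault_maps):
        """Union of per-chip fault masks: a weight cell is treated as faulty
        if it is faulty on any chip in the group."""
        merged = np.zeros_like(fault_maps[0], dtype=bool)
        for fm in fault_maps:
            merged |= fm
        return merged

    def inject_faults(weights, fault_map):
        """Apply the (merged) fault map, here modeled as stuck-at-zero cells."""
        faulty = weights.copy()
        faulty[fault_map] = 0.0
        return faulty

    rng = np.random.default_rng(0)
    weights = rng.standard_normal((256, 256)).astype(np.float32)
    # Five hypothetical chips, each with a fault rate of about 6e-3.
    chip_maps = [rng.random(weights.shape) < 6e-3 for _ in range(5)]
    merged = merge_fault_maps(chip_maps)
    retrain_weights = inject_faults(weights, merged)
    print(merged.mean())  # effective fault rate of the merged map (roughly 3%)

The higher effective fault rate of the merged map is exactly what can trigger the training-divergence issue the abstract mentions, which is why the paper proposes a dedicated merging and retraining methodology.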

16:00
POMMEL: Exploring Off-Chip Memory Energy & Power Consumption in Convolutional Neural Network Accelerators

ABSTRACT. Reducing the power and energy consumption of Convolutional Neural Network (CNN) accelerators is becoming an increasingly popular design objective for both cloud and edge settings. To design more efficient accelerator systems, the accelerator architect must understand how different design choices impact power and energy consumption. The purpose of this work is to enable CNN accelerator designers to explore how design choices affect the memory subsystem in particular, which is a significant contributing component. By considering the high-level design parameters of CNN accelerators that affect the memory subsystem, the proposed tool returns power and energy consumption estimates for a range of networks and memory types. This allows the power and energy of the off-chip memory subsystem to be considered earlier in the design process, enabling greater optimisations in the early phases. To this end, the paper introduces POMMEL, an off-chip memory subsystem modelling tool for CNN accelerators, and evaluates it across a range of accelerators, networks, and memory types. Furthermore, using POMMEL, the impact of various state-of-the-art compression and activity-reduction schemes on the power and energy consumption of current accelerators is also investigated.

16:30-17:30 Session 13A: Intelligent Transportation Systems (ITS) 2
16:30
Controlled Intra-Platoon Collisions for Emergency Braking in Close-Distance Driving Arrangements

ABSTRACT. The increasing degree of automation and communication makes it possible for vehicles to travel at short separations of a few meters, i.e., in a close-distance driving arrangement or platoon. This leads to higher energy/fuel savings and increased vehicle throughput on roads, among other benefits. Whereas a considerable amount of effort has been dedicated to cruise control in such settings, techniques for emergency braking have received less attention. However, emergency braking is of paramount importance for safe operation in such settings and requires special attention. The goal is to reduce the overall stopping distance when braking in an emergency while keeping a compact platoon, i.e., inter-vehicle separations as short as possible, so as to maximize benefits. This turns out to be challenging, in particular if vehicles have different braking capabilities, e.g., due to their type and/or loading conditions. In some cases, intra-platoon collisions may even be the only way to avoid major accidents. In this paper, we are concerned with this problem and propose an approach based on engineering controlled intra-platoon collisions. The idea is to reduce potential damage incurred by platoon vehicles while minimizing the overall stopping distance. We illustrate and evaluate our proposed approach for the case of a two-vehicle arrangement based on detailed simulations.

17:00
Measuring trust in automated driving using a multi-level approach to human factors

ABSTRACT. As driving shifts towards automation, maximizing the related benefits would profit from improved user acceptance of the new technology. Studies suggest a strong connection between acceptance and trust in technical solutions. We investigate the improvement of user trust in driving automation through demonstrations carried out on a sophisticated driving simulator. The study correlates subjective data with objective psycho-physiological measurements. A multi-factorial and multivariate analysis of variance investigates the influence of learning effects and prior experience with ADAS on trust. Results show an improvement in trust through user interaction with the human-machine interface of the demonstrated AD system, illustrating the relevance of human-centered development processes. This conclusion is supported by the observation of driver cardiac signals.

16:30-17:30 Session 13B: Applications, Architectures, Methods and Tools for Machine - and Deep Learning (AAMTM) 2
16:30
Improving the Efficiency of Transformers for Resource-Constrained Devices

ABSTRACT. Transformers provide promising accuracy and have become popular in various domains such as natural language processing and computer vision. However, due to their massive number of model parameters and their memory and computation requirements, they are not suitable for resource-constrained low-power devices. Even with high-performance and specialized devices, memory bandwidth can become a performance-limiting bottleneck. In this paper, we present a performance analysis of state-of-the-art vision transformers on several devices. We propose to reduce the overall memory footprint and memory transfers by clustering the model parameters. We show that by using only 64 clusters to represent the model parameters, it is possible to reduce data transfers from main memory by more than 4x and achieve up to a 22% speedup and 39% energy savings on mobile devices, with less than 0.1% accuracy loss.
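
The arithmetic behind such savings can be illustrated with a generic weight-clustering sketch: k-means over a layer's weights, after which only a 64-entry codebook plus 6-bit indices need to be stored instead of 32-bit floats. This is a minimal sketch of the general technique under assumed settings, not the paper's exact procedure.

    # Minimal sketch of weight clustering: replace a layer's float32 weights
    # with 6-bit indices (stored as uint8 here; a real implementation would
    # pack them) into a 64-entry codebook. Generic technique, illustrative only.
    import numpy as np

    def cluster_weights(w, k=64, iters=10):
        flat = w.reshape(-1)
        # Initialize centroids uniformly over the weight range, then run
        # a plain 1-D k-means.
        centroids = np.linspace(flat.min(), flat.max(), k)
        for _ in range(iters):
            idx = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
            for c in range(k):
                members = flat[idx == c]
                if members.size:
                    centroids[c] = members.mean()
        return centroids.astype(np.float32), idx.astype(np.uint8)

    rng = np.random.default_rng(0)
    w = rng.standard_normal((256, 256)).astype(np.float32)
    codebook, idx = cluster_weights(w)
    w_hat = codebook[idx].reshape(w.shape)          # reconstructed weights
    orig_bits = w.size * 32
    packed_bits = w.size * 6 + codebook.size * 32   # 6-bit indices + codebook
    print(orig_bits / packed_bits)                  # roughly 5.3x smaller

With 64 clusters, each weight only needs a 6-bit index, which is where a more-than-4x reduction in off-chip data transfer becomes plausible once the small codebook is accounted for.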

17:00
Co-designing Intelligent Control of Building HVACs and Microgrids

ABSTRACT. Building loads consume roughly 40% of the energy produced in developed countries, a significant part of which is invested in building temperature-control infrastructure. Here, microgrids based on renewable resources offer a greener and cheaper alternative. This communication explores the possible co-design of microgrid power dispatch and building HVAC (heating, ventilation and air conditioning) actuations with the objective of effective temperature control under minimized operating cost. To this end, we attempt control designs at various levels of abstraction, based on the information available about the microgrid and HVAC system models, using Deep Reinforcement Learning (DRL) techniques. We provide control architectures that consider model information ranging from completely determined system models to systems with fully unknown parameter settings, and illustrate the advantages of DRL for the design prescriptions.

17:30-18:30 Session 14A: Architecture and Hardware for Security Applications (AHSA) 3
17:30
Optical Fault Injection Attacks against Radiation-Hard Registers

ABSTRACT. If devices are physically accessible, optical fault injection attacks pose a great threat, since both the processed data and the operation flow can be manipulated. Successful physical attacks may not only lead to leakage of secret information such as cryptographic private keys, but can also cause economic damage, especially if such a manipulation results in a successful attack on critical infrastructure. Laser-based attacks exploit the sensitivity of CMOS technologies to electromagnetic radiation in the visible or infrared spectrum. It can be expected that radiation-hard designs, specially crafted for space applications, are more robust not only against high-energy particles and short electromagnetic waves but also against optical fault injection attacks. In this work, we investigated the sensitivity of radiation-hard JICG shift registers to optical fault injection attacks. In our experiments, we were able to repeatably trigger bit-set and bit-reset operations, changing the data stored in single JICG flip-flops despite their high radiation fault tolerance.

17:50
Towards a More Flexible IoT SAFE Implementation

ABSTRACT. The Internet of Things (IoT) is spreading into everyone's daily life and is becoming ubiquitous, not only in industry. With this growth, device and communication security is increasingly important. Hardware Security Modules (HSMs) are integrated into IoT devices to provide a “Root of Trust” and to protect the confidential key material used for device authentication. Due to the lack of standardized interfaces, HSM manufacturers implement their own proprietary interfaces. To ease the integration of hardware security and enable vendor interoperability, the GSMA proposes IoT SAFE, a standardized interface.

In this work, IoT SAFE is evaluated and compared against the interfaces of proprietary HSMs. Improvements are proposed in order to reduce complexity, increase flexibility, and ease the integration into Transport Layer Security (TLS) libraries. The evaluation shows that the TLS handshake performance can be improved significantly for ECC and RSA certificate-based client authentication. The message count between HSM and hosting device is reduced by approximately 40% and 25%, respectively.

18:10
5G Security: FPGA Implementation of SNOW-V Stream Cipher

ABSTRACT. In this paper, a very compact architecture of the newest member of the SNOW family of stream ciphers, SNOW-V, is presented. The proposed architecture has a 128-bit datapath and is pipelined in key areas in order to achieve the maximum possible frequency while using only a small number of hardware resources. The design was coded in the Verilog hardware description language, and the Basys3 board (Artix-7 XC7A35T) was the target of the hardware implementation. The proposed implementation utilizes only 2109 FPGA LUTs and 1352 FFs and reaches a data throughput of 2606 Mbps at a 224 MHz clock frequency. To our knowledge, this is the first FPGA implementation of the SNOW-V stream cipher.

18:30
Extending Circuit Design Flow for Early Assessment of Fault Attack Vulnerabilities

ABSTRACT. Modern application-specific integrated circuits (ASICs) are increasingly employed in domains where they must fulfill security requirements. Traditional ASIC design flows include numerous steps to ensure the correctness of a circuit and its freedom from manufacturing defects, but they do not cover security vulnerabilities. In this paper, we demonstrate how to leverage state-of-the-art electronic design automation (EDA) tools to validate the resistance of a circuit against physical attacks in early design steps (before fabrication). While the approach is generic, we demonstrate it on a specific physical attack vector: Fault Sensitivity Analysis (FSA). We show how existing tools (especially for logic and timing simulation) can be extended with custom scripts to assess the vulnerability of an implementation to such attacks.

17:30-19:00 Session 14B: Advanced Systems in Healthcare, Wellness and Personal Assistance (ASHWPA)
17:30
Model-based System Architecture for Event-triggered Wireless Control of Bio-analytical Devices

ABSTRACT. Bio-analytical devices have gained importance over the past few years because of their application in rapid diagnostics and biochemical analysis. Integrating the Cyber-Physical System (CPS) concept with bio-analytical devices is essential to enable automation of such devices. Modeling of CPS-based bio-analytical devices can provide a deeper understanding of system behavior at early design stages and avoid costly iterations. In this paper, a model-based system architecture enabling wireless control of bio-analytical devices is proposed using an extended timed-automata-based formal technique. Using this technique, a case study, “A droplet flow cytometer for antibiotic susceptibility testing of bacteria”, is modeled and verified using the UPPAAL tool. The synchronized operation of the modeled system under the defined constraints was confirmed. In addition, the case study shows the implications of formal techniques for the design and verification of wireless automation of high-throughput laboratory setups in Model-Based System Engineering (MBSE).

18:00
Modeling Battery SoC Predictions for Smart Connected Glasses Simulations

ABSTRACT. In this paper, we propose an analytical Lithium Polymer (LiPo) rechargeable battery modeling approach to accurately predict the battery state of charge (SoC) during simulations. The approach is based on a data-driven method for modeling the aging phenomenon present in rechargeable batteries. The proposed battery model is integrated into a system-level modeling methodology intended to simulate the discharge process of smart connected glasses powered by a 95 mAh LiPo battery. The results show that the proposed analytical battery modeling approach helps predict the battery SoC with less than 3% error. Moreover, the battery model performs fast predictions at high resolution, allowing smart-glasses scenarios that last for days to be simulated in a few minutes.
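
As background on what an analytical SoC model computes, below is a minimal Coulomb-counting sketch with a toy linear aging term. The capacity-fade constant, the load profile, and the function names are illustrative assumptions and do not reproduce the paper's data-driven aging model.

    # Minimal Coulomb-counting sketch of SoC prediction with a toy linear aging
    # term. Illustrative only; the paper's data-driven aging model is not
    # reproduced here.

    NOMINAL_CAPACITY_MAH = 95.0          # 95 mAh LiPo cell, as in the abstract
    FADE_PER_CYCLE = 0.0002              # assumed capacity fade per charge cycle

    def usable_capacity_mah(cycles):
        """Toy aging model: capacity shrinks linearly with cycle count."""
        return NOMINAL_CAPACITY_MAH * max(0.0, 1.0 - FADE_PER_CYCLE * cycles)

    def predict_soc(soc0, current_profile_ma, dt_s, cycles=0):
        """Integrate the discharge current (positive = discharge) over time."""
        capacity_mas = usable_capacity_mah(cycles) * 3600.0  # mA·s
        soc = soc0
        trace = []
        for i_ma in current_profile_ma:
            soc -= (i_ma * dt_s) / capacity_mas
            trace.append(max(0.0, min(1.0, soc)))
        return trace

    # One hour of a hypothetical smart-glasses load: 20 mA idle, 80 mA bursts.
    profile = [20.0] * 3000 + [80.0] * 600
    trace = predict_soc(1.0, profile, dt_s=1.0, cycles=100)
    print(trace[-1])  # remaining SoC after the simulated hour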

18:30
Oxygen Saturation Measurement using Hyperspectral Imaging targeting Real-Time Monitoring

ABSTRACT. Oxygen saturation (StO2) measurement makes it possible to detect different clinical conditions related to low tissue oxygenation and is used to monitor the quality and safety of organ transplantation. This study focuses on the visualization and measurement of StO2 using hyperspectral imaging (HSI) through non-contact skin captures, targeting a potential real-time monitoring application. A customized acquisition system composed of a hyperspectral camera (covering the 470-900 nm spectral range) and a thermal camera was developed to capture images of hands in a non-contact fashion. An experimental procedure was established to measure the evolution of StO2 in healthy hands in which the index finger or the brachial artery was compressed. StO2 measurements were performed in the normal, compression, and reperfusion states. Two mathematical models with different sets of wavelengths were evaluated. The results show that the proposed models, which employ two wavelengths (660 and 880 nm), obtain reliable StO2 values, providing a potential non-contact imaging tool for StO2 measurement.