Title: High-Performance AI Inference for Agile Deployment on Space-Qualified Processors: A Performance Benchmarking Study
Authors: Pablo Ghiglino, Mandar Harshe, Guillermo Sarabia, Hans Dermot Doran and Carlos Rafael Tordoya Taquichiri
Conference: SMC-IT/SCC 2025
Tags: GR740, Intelligent Satellites, On-board AI, Real-time applications, Real-Time Operating Systems and RISC-V

Abstract: On-board AI is a cornerstone of modern satellite operations, driving real-time decision-making, enhancing data processing, and boosting autonomy. By minimizing reliance on ground stations, on-board AI accelerates Earth observation insights, enhances fault detection, optimizes resource use, and improves collision avoidance. As space systems become increasingly complex, AI-driven processing is crucial to improving mission efficiency and longevity. In 2022, the authors introduced an innovative on-board AI approach that uses advanced data pipeline techniques, which are recognized for their low power consumption and high throughput in high-performance computing. Building on this foundation, the current work, conducted as part of the PATTERN project in 2024 and supported by Frontgrade Gaisler under a European Space Agency initiative, extends the compatibility of the software to a wider range of space-qualified processors and operating systems. This effort focused on porting Klepsydra AI to Gaisler's LEON4, LEON5 and NOEL-V (RISC-V), as well as Microchip's PolarFire processors, using the RTEMS 6 SMP operating system. During the past year, the team has successfully addressed unique challenges through targeted optimizations. This article discusses the implementation process and presents performance results across these diverse processors. As a result of this effort, the developed solution for AI execution on space-qualified processors is nearly complete, offering two key benefits to the space sector. First, the performance results of the proposed solution are exceptional. Second, it simplifies deployment, significantly reducing the time required to get AI applications running on the target processor.