Perspectives on The Future of Space Computing for Military Space Missions
ABSTRACT. The human space enterprise has experienced significant changes over the past decade, ranging from the explosive growth of new commercial space launch and on-orbit services and capabilities, to the global realization of the advantages that space provides in military conflict and the impacts that the loss of space capabilities would have on countries that depend on them. Independently, there has been unprecedented growth in a number of technologies that are either driving our terrestrial computational tech base (e.g., machine learning) or have the potential to upend it (e.g., quantum computation/quantum information science). This talk will discuss some of the current forces and trends that are driving the requirements for space processing from a military perspective, and will touch on several current space computing projects under development at AFRL that address these issues.
Modeling Data of Planetary Instrument for X-ray Lithochemistry (PIXL) for Mars 2020
ABSTRACT. NASA's Mars 2020 mission studies Mars' habitability and seeks signs of past microbial life. The mission uses an X-ray fluorescence spectrometer to identify chemical elements at sub-millimeter scales on the Martian surface. The instrument captures high-spatial-resolution observations comprising several thousand individually measured points by raster-scanning an area of the rock surface. This paper shows how different methods, including linear regression, k-means clustering, image segmentation, similarity functions, and Euclidean distances, perform when analyzing datasets provided by the X-ray fluorescence spectrometer to help scientists understand the distribution and abundance variations of the chemical elements making up the scanned surface. We also created an interactive map to correlate the X-ray spectrum data with a visual image acquired by an RGB camera.
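For readers who want a concrete feel for the clustering step, the sketch below groups per-point element abundances from a raster scan with k-means and computes Euclidean distances to the cluster centroids. The array shapes, element list, and synthetic data are hypothetical stand-ins, not PIXL's actual data products.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic stand-in for a 64 x 64 raster scan: each scanned point is
# reduced to abundance estimates for a handful of elements.
n_points, elements = 64 * 64, ["Si", "Fe", "Ca", "Mg", "S"]
abundances = rng.random((n_points, len(elements)))

# k-means groups points with similar compositions; the Euclidean
# distance to the assigned centroid indicates how typical a point is.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(abundances)
labels = km.labels_                                   # cluster id per point
dists = np.linalg.norm(abundances - km.cluster_centers_[labels], axis=1)

# Reshape cluster labels back onto the scan grid for a map overlay.
label_map = labels.reshape(64, 64)
print(label_map.shape, round(dists.mean(), 3))
```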
MUSTANG: A Workhorse for NASA Spaceflight Avionics
ABSTRACT. The Modular Unified Space Technology Avionics for Next Generation (MUSTANG) is a small integrated avionics system including Command and Data Handling (C&DH), Power System Electronics (PSE), Attitude Control System (ACS) interfaces, and propulsion electronics. The MUSTANG avionics architecture is built upon many years of knowledge capture and lessons learned at the Goddard Space Flight Center. Motivated by modularity and keeping board redesign costs to a minimum, MUSTANG offers flexibility in features with a backplane-less design and allows users to choose the options (cards) needed for their system. It incorporates a distributed power system that provides secondary power to all of its subcomponents, reducing the number of primary power services the avionics requires. MUSTANG can be integrated into one system or divided into several smaller components. MUSTANG supports redundancy and cross-strapping for a more robust and reliable avionics system. A variant of MUSTANG for instrument electronics, iMUSTANG, allows users to select the functionality applicable to their instrument. MUSTANG is not meant to replace avionics for all spacecraft; there are limitations due to its relatively compact size, but the MUSTANG design has proven broadly applicable across many spacecraft and instrument bus avionics architectures.
A data pre-processing module for improved-accuracy Machine-Learning-based micro-Single-Event-Latchup detection
ABSTRACT. A single-event latchup (SEL) in a semiconductor device is an undesired, induced high-current state that typically renders the affected device non-functional and compromises its operating lifetime. The lower-current SEL phenomenon, the micro-SEL, is often difficult to detect, particularly when the normal operating current of the protected device is variable and the magnitude of micro-SEL currents differs under different operating conditions. In Machine-Learning (ML) based detection, this variable current inadvertently affects the features of the input current profile required for micro-SEL detection, thereby severely reducing detection accuracy.
In this paper, we propose a data pre-processing module to improve the accuracy of ML-based micro-SEL detection under the aforementioned current conditions. Prior to classification by ML, the input current profile is processed by the proposed module, which employs a background subtraction algorithm and an adaptive normalization algorithm. By filtering out the irrelevant base current and normalizing the micro-SEL current relative to the base current value, the module yields more accurate features of the input current profile and widens the separation between normal samples and micro-SEL samples in the feature space. Ultimately, the proposed module enables ML algorithms to generate a more accurate decision boundary. The outcome is a ~13% accuracy improvement (from ~79% to ~92%) in micro-SEL detection for a device operating with variable currents.
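The pre-processing idea generalizes beyond this paper's specific algorithms. The sketch below is a minimal illustration, assuming a moving-median background estimate and base-current-relative normalization as generic stand-ins for the authors' proposed background subtraction and adaptive normalization algorithms; the current profile is synthetic.

```python
import numpy as np

def preprocess(current, window=101):
    """Subtract the slowly varying base current and normalize the
    residual relative to that base, so micro-SEL steps of different
    absolute magnitudes map to a comparable feature scale."""
    current = np.asarray(current, dtype=float)
    pad = window // 2
    padded = np.pad(current, pad, mode="edge")
    # A moving median tracks the base (operating) current while being
    # insensitive to short high-current excursions.
    base = np.array([np.median(padded[i:i + window])
                     for i in range(current.size)])
    residual = current - base                   # background subtraction
    return residual / np.maximum(base, 1e-9)    # adaptive normalization

# Synthetic profile: variable base current plus a small latchup step.
t = np.linspace(0, 1, 2000)
profile = 0.5 + 0.2 * np.sin(2 * np.pi * t)     # amperes, hypothetical
profile[1200:] += 0.05                           # micro-SEL onset
features = preprocess(profile)
print(features[1300], features[600])             # elevated vs. near zero
```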
Finding Pragmatic Steps to Building Dependable Systems with Open Source
ABSTRACT. Press headlines show that open source is already being used today in space applications with safety considerations. Details of the safety analyses performed are behind NDAs and are not available to developers in the open source projects being used. To make the challenge even more interesting, the processes the safety standards expect are behind paywalls, and not readily accessible to the wider community of open source maintainers and developers. Figuring out pragmatic steps to adopt in open source projects, like the Linux kernel, requires the safety assessor communities, the product creators, and open source developers to communicate openly. Some tasks that help can be done today, like knowing exactly what source is included in a system and how it was configured and built. Automatic creation of accurate Software Bills of Materials (SBOMs) is one pragmatic step that has emerged as a best practice for security and safety analysis. Various open source projects are adopting other practices that can also help with safety analysis. This talk will overview some of the methods being applied in different open source projects, as we try to establish further pragmatic steps toward solving this challenge.
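As a toy illustration of the "know exactly what you shipped" step, the sketch below hashes source files and records the build configuration in a manifest. A real project would emit a standard format such as SPDX or CycloneDX; the file pattern and configuration keys here are hypothetical.

```python
import hashlib, json, pathlib

def manifest(src_root, config):
    """Record a SHA-256 digest for every tracked source file plus the
    exact build configuration used."""
    entries = {str(p): hashlib.sha256(p.read_bytes()).hexdigest()
               for p in sorted(pathlib.Path(src_root).rglob("*.c"))}
    return {"files": entries, "build_config": config}

# Hypothetical usage: capture sources and the exact build options.
m = manifest("src", {"CC": "gcc", "CFLAGS": "-O2", "CONFIG_PREEMPT_RT": "y"})
print(json.dumps(m, indent=2))
```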
ABSTRACT. As NASA exploration moves beyond low Earth orbit (LEO), the need for interoperable avionics systems grows due to cost, complexity, and the need to maintain distant systems for long periods.
The existing SpaceVPX industry standard addresses some of the needs of the space avionics community, but falls short of an interoperability standard that would enable reuse and common sparing on long-duration missions and reduce non-recurring engineering (NRE) costs for missions in general.
A NASA Engineering & Safety Center (NESC) study was conducted to address the deficiencies in the SpaceVPX standard for NASA missions and define the recommended use of the SpaceVPX standard within NASA. Subsequently, the broader spaceflight avionics community has been engaged to work towards a more interoperable variant of the SpaceVPX standard. This presentation will provide a background on SpaceVPX interoperability, proposed solutions, and an update on efforts to develop a variant of the standard.
Time Sensitive Networking (TSN) Brings Resilient and Low Latency Ethernet to Space
ABSTRACT. TSN is a set of IEEE 802.1 standards and technologies that bring bounded latency, low packet delay variation and guaranteed packet delivery to conventional Ethernet networks. While TSN has been deployed in residential, automotive and telecommunication Ethernet networks, its applicability to space applications is now being studied by the IEEE P802.1DP (TSN for Aerospace Onboard Ethernet Communications) Task Group.
This presentation provides an overview of how TSN can bring fault tolerance, resiliency, latency reduction and determinism to space, and how TSN Ethernet can change the economics and ecosystem for space. The author will also give a status update on the work at P802.1DP.
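To make "bounded latency" concrete, the sketch below models one TSN mechanism, the IEEE 802.1Qbv time-aware shaper, in a deliberately simplified form: a traffic class may transmit only while its gate is open, so a frame that just misses its window waits at most the remainder of the cycle. The gate schedule and class names are invented for the example, and frame transmission time and guard bands are ignored.

```python
# Hypothetical gate-control list for one output port.
CYCLE_US = 1000                        # cycle length, microseconds
windows = {"control": (0, 200),        # class -> (gate open, gate close)
           "telemetry": (200, 700),
           "best_effort": (700, 1000)}

def worst_case_wait_us(cls):
    """A frame that arrives just as its gate closes waits the rest of
    the cycle until the gate reopens; the window length is the only
    part of the cycle it never has to wait through."""
    open_us, close_us = windows[cls]
    return CYCLE_US - (close_us - open_us)

for cls in windows:
    print(f"{cls}: queuing wait bounded by {worst_case_wait_us(cls)} us")
```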
Sponsored Talk (AlphaData): FPGA-based Adaptive Computers for Space
ABSTRACT. Computing systems for the space market have historically been highly optimised for their intended application with little priority given to commonality or reuse. A modular approach based on open standards and Commercial Off-The-Shelf (COTS) products can increase flexibility and reuse, whilst reducing total costs and timescales. Alpha Data presents examples of how this has been achieved using AMD FPGAs, and highlights off-the-shelf solutions using Adaptive Systems on Chips for the next generation of computing systems in space.
Early Design Exploration of Space System Scenarios Using Assume-Guarantee Contracts
ABSTRACT. We present a compositional approach to modeling and analyzing space mission operation sequences whose steps span multiple viewpoints. We consider different tasks, such as communication, science observation, trajectory correction, and battery charging, and separate their interactions across discipline viewpoints. In each sequence step, these tasks are modeled as assume-guarantee contracts: they make assumptions about the initial state of a step and, if these assumptions are satisfied, guarantee desirable properties of the state at the end of the step. These models are then used in Pacti, a tool for reasoning about contracts. We demonstrate a design methodology leveraging Pacti's operations: contract composition for computing the contract of the end-to-end sequence of steps, and contract merging for combining contracts across viewpoints. We also demonstrate applying Pacti's optimization techniques to analyze the figures of merit of admissible sequences satisfying operational requirements for a CubeSat-sized spacecraft performing a small-body asteroid rendezvous mission. We show that analyzing tens of thousands of combinations of sequences and operational requirements takes just over one minute, confirming the scalability of the approach. The methodology presented in this paper supports the early design phases, including requirements engineering and task modeling.
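A schematic illustration of the assume-guarantee idea, in plain Python rather than Pacti's actual API: each step assumes a minimum battery state of charge (SOC) at entry and guarantees a bound on the SOC change, and composition chains one step's guarantee into the next step's assumption. The tasks and numbers are invented for the example.

```python
class StepContract:
    def __init__(self, name, assume_soc_min, guarantee_soc_delta):
        self.name = name
        self.assume_soc_min = assume_soc_min            # required SOC at entry (%)
        self.guarantee_soc_delta = guarantee_soc_delta  # guaranteed SOC change (%)

def compose(steps, initial_soc):
    """Check each step's assumption against the SOC guaranteed by the
    preceding steps; return the end-to-end guaranteed SOC."""
    soc = initial_soc
    for s in steps:
        if soc < s.assume_soc_min:
            raise ValueError(f"{s.name}: assumption {s.assume_soc_min}% "
                             f"not met (have {soc}%)")
        soc += s.guarantee_soc_delta
    return soc

sequence = [StepContract("science observation", 40, -15),
            StepContract("communication", 20, -10),
            StepContract("battery charging", 0, +30)]
print("end-of-sequence SOC >=", compose(sequence, initial_soc=60), "%")
```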
A Hybrid Space Architecture for Robust and Resilient Satellite Services
ABSTRACT. A 'hybrid space architecture' has been proposed to facilitate robust and resilient satellite data downlink, integration, and analysis; however, the technical details of what may comprise a hybrid space architecture are severely lacking. Thus far, 'hybrid' principally entails the diversity of commercial providers. While diverse suppliers can contribute to hybrid space architectures, we argue that robustness and resilience will only be achieved through heterogeneous network and asset architectures. A connected satellite services ecosystem composed of the union of different networks with different characteristics would limit single points of failure, thereby generating high levels of redundancy, resilience, and scalability. This research outlines parameters of a hybrid space architecture, documents satellite service reference architectures, and provides a comparative analysis of the features of each architecture. Further, through a case study of existing satellite service providers, we propose how a hybrid space architecture could be piloted in Northern Europe and the High North.
Availability vs. Lifetime Trade-Space in Spacecraft Computers
ABSTRACT. This paper presents a trade space comparing forty-eight different configurations for redundant computer systems in spacecraft in terms of their availability versus lifetime metrics. Each configuration uses a different redundancy scheme. Failure modes include transient failures due to radiation effects in space, such as Single Event Upsets (SEUs) and Single Event Functional Interrupts (SEFIs), and permanent failures due to degradation. Configurations include various combinations of up to four total processors, with at least one being a prime and the others hot or cold spares. Dual, Triple, and Quad Modular Redundancy are covered, along with some deployed spacecraft configurations, e.g., the Parker Solar Probe and the Curiosity rover during its Entry, Descent, and Landing (EDL) phase. Some hypothetical designs lie outside the convex hull of previously known configurations.
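As a hedged back-of-the-envelope companion to this trade space, the sketch below compares the mission reliability of a single processor, TMR, and a prime with one cold spare under a constant permanent-failure rate (exponential lifetimes, perfect switching). The failure rate and mission length are hypothetical, and transient SEU/SEFI effects are ignored.

```python
import math

LAMBDA = 1e-5              # permanent failures per hour, per processor
T = 5 * 8760               # 5-year mission, in hours
R = math.exp(-LAMBDA * T)  # single-processor mission reliability

# Triple Modular Redundancy: the system works while 2 of 3 agree.
r_tmr = 3 * R**2 - 2 * R**3

# Prime + 1 cold spare, perfect switching: the spare does not age while
# unpowered, so the system tolerates at most one permanent failure
# (standard cold-standby result for exponential lifetimes).
r_cold = math.exp(-LAMBDA * T) * (1 + LAMBDA * T)

print(f"single: {R:.3f}   TMR: {r_tmr:.3f}   prime + cold spare: {r_cold:.3f}")
```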
ABSTRACT. AdvoCATE (Assurance Case Automation Toolset) is a tool that supports the development and management of assurance cases. An assurance case is a comprehensive, defensible, and valid justification that a system will function as intended for a specific mission and operating environment. A Dynamic Assurance Case (DAC) is an assurance case that combines static and dynamic elements for assuring the validity of the captured justifications. AdvoCATE supports a range of notations and modeling formalisms, including Goal Structuring Notation (GSN) for documenting safety cases and Bow-Tie Diagrams (BTDs) for risk modeling. AdvoCATE implements an assurance metamodel that allows all artifacts relevant from the safety assurance perspective to be explicitly defined and their relations captured. Some artifacts can be created directly in AdvoCATE (e.g., hazard log, safety arguments, safety architecture), while other artifacts, such as formal verification results, can be imported into the tool so that the evidence can be viewed collectively. AdvoCATE also enables the creation of dynamic views of the assurance case, in which it receives and evaluates external data to highlight the current state of the captured justifications. In this talk, we give an overview of the current capabilities and future developments in supporting DACs with AdvoCATE.
Trustworthy Autonomy for Gateway Vehicle System Manager
ABSTRACT. The Vehicle System Manager (VSM) is the highest-level software control system in the Gateway hierarchical Autonomous System Management Architecture (ASMA). A key objective of the ASMA design is to focus on infrastructure and systems that allow autonomous operations aboard Gateway. As the head of a distributed, hierarchical system, VSM will integrate modules and visiting vehicles to assist ground controllers and onboard crew in operating Gateway.
The VSM provides four function categories: Mission Management and Timeline Execution; Resource Management; Fault Management; and Vehicle Control and Operation. VSM provides various levels of automation, ranging from fully autonomous operations with no flight crew and minimal ground monitoring, to advisory automation when Gateway is crewed and has full ground monitoring. Trustworthiness is achieved via verified specification, comprehensive development verification, and real-time verification using assume-guarantee contracts (AGCs).
VSM is heavily data-driven. Development verification includes semantic verification of the data model via peer review and testing. Development AGCs are implemented in the PlusCal/TLA+ environment to model key state machines, with the AGCs expressed as assertions and linear temporal logic formulas checked using the TLC model checker.
VSM uses runtime AGCs, implemented in R2U2 using assertions in propositional logic and guarantees in mission-time linear temporal logic (MLTL). R2U2 was selected because it permits formulas to be written in a mathematically concise, unambiguous notation. Additionally, R2U2 is optimized for speed and size, and its inferencing engine has been proven correct with respect to the operator space. R2U2 is integrated into VSM via a runtime monitor that feeds the necessary telemetry data to R2U2 and receives and responds to the R2U2 verdict stream.
The full-lifecycle verification approach and the use of AGCs provide increased trustworthiness for VSM. Preliminary results are encouraging that VSM can be both autonomous and trustworthy.
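For readers unfamiliar with bounded-future temporal monitoring, the sketch below shows the flavor of a runtime verdict stream for a formula like G[0,2] pressure_ok ("pressure_ok holds at every sample in the next three"). It is a generic illustration in Python, not R2U2's engine or notation; the telemetry signal and threshold are invented.

```python
from collections import deque

def monitor_globally(trace, lb, ub, pred):
    """Emit a verdict for each time i once samples i+lb..i+ub have been
    seen: True iff pred holds over the whole bounded window."""
    window = deque()
    verdicts = []
    for t, sample in enumerate(trace):
        window.append(pred(sample))
        if t >= ub:                    # window for time t-ub is complete
            verdicts.append(all(list(window)[lb:ub + 1]))
            window.popleft()
    return verdicts

telemetry = [101.2, 101.0, 100.8, 95.1, 100.9, 101.1]  # kPa, hypothetical
ok = monitor_globally(telemetry, 0, 2, lambda p: p > 97.0)
print(ok)   # one verdict per start time with a complete window
```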
Robots capable of human-level manipulation are key to industrial space viability and the commercial space economy
ABSTRACT. Robots capable of human-level manipulation are key to industrial space viability and the commercial space economy. Beyond building space-rated robot hardware, advanced software capabilities, including artificial intelligence, are critical to enabling practical, everyday robotics usage. In this talk, Dr. Coleman will propose an approach for mostly autonomous control software with minimal human supervision. A key to this approach is designing multi-purpose robots that can adapt to a variety of IVA, EVA, and lunar manipulation needs in dynamically changing environments.
Dr. Dave Coleman is CEO of PickNik Robotics and an industry thought leader in robotics. PickNik has successfully delivered robotics innovation on and off Earth over the past 7 years to over 60 customers. Before founding PickNik, Dave worked at Google Robotics, Open Robotics, and Willow Garage. Dave is an international advocate of open source software and robotic interoperability, and an expert in autonomous motion control. He has been collaborating with NASA on various robotic programs and SBIRs since 2014.
Sponsored Talk (Teledyne): Teledyne e2v Edge Processing Solutions: Manufacturing Flow, Radiation Testing Strategies, and Space Use Cases
ABSTRACT. This paper introduces the space radiation-tolerant edge processing solutions from Teledyne e2v: multicore processors, high-speed DDR4 memories, and integrated computing modules. Teledyne e2v's space manufacturing flow is introduced first, before diving into the radiation testing and mitigation strategy. Finally, specific examples of space use cases are presented.
Computer Human Interface Challenges in Space Exploration
ABSTRACT. NASA’s plans to return humans to the lunar surface require overcoming a variety of challenging technical and operational obstacles. In 2022, NASA formed the Extravehicular Activity (EVA) and Human Surface Mobility (HSM) Program (EHP) at the Johnson Space Center, with responsibilities including development of space suits and surface mobility systems for lunar missions. The program includes a Technology Development and Partnerships office chartered to identify high-priority gaps in capabilities for lunar surface mobility and to coordinate resources to close those gaps. This presentation details the EHP technology roadmap for “Informatics and Decision Support,” a subset of spacecraft avionics focused on effective and autonomous crew interaction with spaceflight systems. The gaps, grouped into displays, audio systems, and information technology infrastructure, are largely driven by the unique interaction requirements of human spacecraft and the severe radiation environments beyond low Earth orbit. The roadmap identifies ongoing activities and paths to technology infusion into lunar spacecraft. NASA is seeking input on the content and ideas for alternative paths to gap closure. Closing these gaps is important to successful human operations on the lunar surface and vital to NASA’s long-term goal of human missions to Mars.
SEFI Mitigation Middleware Radiation Test Results for NASA and Other GPU Applications
ABSTRACT. Emerging space mission requirements for complex autonomous operations, high-end sensor processing, swarm/constellation management, planetary/lunar landing, and autonomous exploration impose processing demands that rad-hard processors cannot achieve and fault tolerance requirements that COTS processors alone cannot meet. Troxel Aerospace's SEFI/SEU Mitigation Middleware (SMM) provides a path to enabling these types of missions by allowing high-performance COTS processors to operate with a high degree of fault tolerance without rad-hard electronics, thus simultaneously meeting advanced mission criticality and performance requirements. Troxel Aerospace has demonstrated operate-through capability via extensive radiation testing on multicore processors and GPUs, most recently for several critical missions, including astronaut displays for NASA Johnson and a few DoD missions. This presentation will highlight SMM's history, features, recent radiation test results, and upcoming missions.
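One classic operate-through pattern, sketched below purely for illustration, is to checkpoint application state, detect a hang via a heartbeat timeout, and roll back to the last good checkpoint. This is a textbook pattern in generic Python, not a description of Troxel Aerospace's proprietary SMM.

```python
import copy, time

class CheckpointedTask:
    def __init__(self, state):
        self.state = state
        self.checkpoint = copy.deepcopy(state)      # last known-good state
        self.last_heartbeat = time.monotonic()

    def step(self):
        self.state["frames"] += 1                   # application work
        self.last_heartbeat = time.monotonic()      # prove liveness

    def commit(self):
        self.checkpoint = copy.deepcopy(self.state)

    def hung(self, timeout_s=1.0):
        return time.monotonic() - self.last_heartbeat > timeout_s

    def recover(self):
        self.state = copy.deepcopy(self.checkpoint)  # roll back

task = CheckpointedTask({"frames": 0})
task.step(); task.commit()
task.step()                # work done after the checkpoint...
task.recover()             # ...is discarded if a SEFI hangs the core
print(task.state)          # {'frames': 1}
```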
Towards a common robotics simulator for lunar environments?
ABSTRACT. High-fidelity simulated lunar environments play a key role in making lunar missions successful. From landing on the Moon's surface to exploring it, simulators allow engineers to plan missions and refine software and hardware components. Yet, as of today, high-quality lunar simulators are either proprietary or expensive closed-source solutions. Worse, they incorporate only a subset of the features required to simulate complete lunar missions. This talk aims to engage the audience in answering the following question: how, as a community, could we build a high-fidelity open-source simulator for lunar missions?
Establishing Trust in NASA’s Artemis Campaign Computer-Human Interface (CHI) Implementation
ABSTRACT. The NASA Artemis campaign will return humans to the Moon. This time, with the help of commercial and international partners, the campaign's objective is a permanent Moon base. The Moon base infrastructure, including an orbiting station and surface assets, will be developed for astronauts to stay for the long haul and learn to live and work on another world in preparation for an eventual Humans-to-Mars mission. As round-trip communication delays increase in deep space exploration, the crew will need more onboard systems autonomy and functionality to maintain and control the vehicle or habitat. These mission constraints will change the current approach of Earth-based ground control support and will demand safer, more efficient, and more effective Computer-Human Interface (CHI) control. For Artemis, CHI is defined as the elements the crew interfaces with: the audio, imagery, lighting, displays, and crew controls subsystems. Understanding how CHI will need to evolve to support deep space missions will be critical for the Artemis campaign, especially crew controls, which are the focus of this paper. How does NASA ensure crew controls are reliable enough to control complex systems and prevent a catastrophic event due to human error, especially when the astronauts could be physiologically and/or psychologically impaired? NASA's approach to mitigating catastrophic hazards in human spaceflight system development, such as crew controls, is a holistic systems engineering and Human System Integration (HSI) methodology. This approach focuses on incorporating NASA's Human-Rating Requirements to ensure consideration of human performance characteristics to control and safely recover the crew from hazardous situations. This paper first discusses, at a high level, CHI for the Artemis campaign. Next, it discusses what it means to human-rate a space system's crew controls and how trust in CHI begins with the NASA human-rating requirements. Finally, it discusses how systems engineering and the HSI process ensure that the crew controls implementation incorporates the NASA human-rating requirements.
The Callisto Technology Demonstration: COTS Crew Interfaces aboard Artemis I
ABSTRACT. The Callisto Technology Demonstration Payload was installed inside the cabin of the Artemis I Orion spacecraft and achieved a perfect string of 21 successful operational sessions throughout the spectacular 25.5-day lunar mission. A joint partnership between Lockheed Martin, Amazon Alexa, and Webex by Cisco, this demonstration explored multiple unique crew interfaces that may potentially improve a crew’s operational efficiency and quality of life aboard future exploration missions. Augmented by payload-provided cabin lights, cameras, and an intercom, Callisto’s primary intent was realized by implementing crew displays and Cisco’s Webex collaboration technology on an iPad, as well as a ‘local voice control’ version of Amazon’s Alexa on a COTS single-board computer. Via the first-ever integration between a secondary payload and Orion’s flight software, Callisto made Orion’s extensive telemetry available to both the crew displays and Alexa. With no crew aboard Artemis I to test Callisto, the system was instead evaluated by “virtual crew members” (VCMs) invited to the payload operations suite in the Johnson Space Center Mission Control Center, thus meeting Callisto’s secondary objective of engaging the public throughout the mission.
Having operated flawlessly in every engineering respect, and having been met with universal enthusiasm by every VCM, Callisto not only met both of its main objectives but lent further weight to our ever-growing appreciation of just how far smartly implemented COTS hardware can be pushed into the challenging realm of space. This presentation provides an overview of Callisto's development as a platform at least partially intended to further explore such COTS possibilities in space, and of its subsequent operational success during its trip around the Moon.
An Examination of the Radiation Tolerance of Electronic Displays For Future Crewed Missions
ABSTRACT. Recent commitments to return humans to the lunar surface and to long-duration crewed missions beyond the protection of the Earth’s atmosphere and magnetosphere require examination of technologies and challenges that are unique to human inhabitants. Electronic displays serve as a critical, real-time informational interchange between crew and the plethora of support technologies that contribute to a successful crewed mission (e.g., scientific instrumentation, safety and health monitors, computer interfaces, etc.). Critical components utilized in space-based applications must reliably operate through a variety of hostile environments, such as the particle radiation environment comprising galactic cosmic rays, trapped particle belts, and solar particle emissions. These highly energetic particles interact with materials at the atomic level, temporarily distorting free charge carrier populations and modifying intrinsic material parameters that in turn impact the performance of devices built upon that material. Present-day utilization of electronic displays on the International Space Station and in space tourism is confined to well-shielded spacecraft at low-Earth-orbit altitudes with non-polar orbits, which results in significantly attenuated energetic particle populations and radiation dose seen by on-board components. In contrast, crewed missions to the lunar surface will subject electronic displays to a particle radiation environment without geomagnetic shielding and, in some cases, with little to no shielding at all (e.g., displays on an unpressurized lunar rover, surface-based instrumentation, etc.). Given the realities of the small quantities required for space-based applications compared to the broader market, it is critical to evaluate potential design sensitivities while also understanding the tunable parameters (if any) within the electronic display fabrication and assembly process. In anticipation of extensive usage in future crewed missions, preemptive examination of the radiation tolerance of commercially available electronic displays serves to reduce the risk posed by the inclusion of electronic displays in upcoming crewed missions (NASA HEOMD-405 Integrated Exploration Capability Gaps List, Tier 1 Gap 02-02).
Notionally, an electronic display can be decomposed into 1) the pixel screen responsible for manipulation of light and 2) the support electronics required to drive the pixels on the screen to portray an image. Based on existing physics-of-failure understanding of radiation effects, the constituent components of an electronic display can be interrogated for their most susceptible degradation mechanisms. In the case of the screen, the primary degradation mechanisms are cumulative over its lifetime, with the light emission layers (e.g., light emitting diodes) being sensitive to atomic displacements (displacement damage) and the thin-film transistor backplane and plastic/glass overlayers being susceptible to excess charge accumulation in oxides and passivation of charge centers (total ionizing dose). While the support electronics are also susceptible to cumulative dose effects, the primary concern there is the potential disruption caused by instantaneous charge injection from an individual high-energy particle (single event effect). At the system level, radiation-induced degradation in the screen presents as reduced output luminosity and potential color shift of displayed images, while single event effects in the support circuitry can result in temporary and persistent visual distortions as well as unrecoverable electrical failure.
To further develop the foundation of radiation effects in electronic displays, a multi-institution collaboration has conducted initial heavy ion (single event effect) and 64 MeV proton (cumulative dose) test campaigns to 1) develop characterization and analysis techniques for electronic displays and 2) collect test data for broadly assessing the susceptibility of display technologies. In accordance with the shift toward commercial-off-the-shelf components and systems, it is pragmatic to evaluate a range of commercially available pixel technologies that could be selected by designers or original equipment manufacturers based on the trade-space of performance, cost, and resource requirements. From these test results, radiation-induced degradation in organic light emitting diode (OLED), backlit thin-film-transistor liquid crystal display (TFT-LCD), and light emitting diode (LED) dot array pixel technologies was demonstrated and used to examine the significance of the red, green, and blue pixels degrading at distinct (non-uniform) rates. Additionally, heavy ion tests allowed cataloguing of non-destructive visual single-event error signatures, both to better categorize error signatures as acceptable or unacceptable and to preemptively develop and identify software mitigation approaches for the computer systems that ultimately drive the electronic displays. The intent of this presentation is to socialize the necessity of radiation-tolerant electronic displays for future crewed missions to the broader space computing community, outline the characterization and analysis techniques utilized for quantifying radiation-induced degradation in human-interface applications, and summarize radiation test results from a cross-section of commercially available display technologies to grow the body of knowledge in anticipation of the need for reliable electronic displays on crewed missions.
Battery Management System for On-Board Data-Driven State of Health Estimation for Aviation and Space Applications
ABSTRACT. To ensure safe and economically valuable operation of a battery system over its whole lifetime, a battery management system is used to measure and monitor battery parameters and control the battery system. Since battery performance decreases over the battery's lifetime, precise on-board aging estimation is needed to identify significant capacity degradation endangering the functionality and safety of the battery system; in aviation and space applications especially, such degradation can result in catastrophic scenarios. Therefore, in this work, a generic battery management system approach is presented that considers aerospace application requirements. The modular hardware and software architecture and its components are described. Moreover, it is shown that the developed battery management system supports the execution of data-driven state-of-health estimation algorithms. For this purpose, aging estimation models are developed that receive only eight high-level parameters of partial charging profiles as input, without further feature extraction steps, and these parameters can thus be easily provided by a battery management system. Three different neural network architectures are implemented and evaluated: a fully connected neural network, a 1D convolutional neural network, and a long short-term memory network. All three aging models provide a precise state-of-health estimate using only the obtained high-level parameters. The fully connected neural network provides the best tradeoff between required memory resources and accuracy, with an overall mean absolute percentage error of 0.41%.
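To make the fully connected variant concrete, the sketch below maps eight high-level charging parameters to a single state-of-health estimate. The layer sizes, training loop, and synthetic data are illustrative assumptions; the paper's exact architecture and training setup are not reproduced here.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(8, 32), nn.ReLU(),   # 8 partial-charging-profile features
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 1),              # estimated SOH (fraction of capacity)
)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()              # mean absolute error, akin to MAPE

x = torch.rand(256, 8)             # synthetic feature batch
y = 0.8 + 0.2 * torch.rand(256, 1) # synthetic SOH labels
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print(f"training MAE: {loss.item():.4f}")
```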
A High-Performance Software Approach to Enabling On-Board Artificial Intelligence for Image Processing Applications
ABSTRACT. New generations of spacecraft are required to perform tasks with an increased level of autonomy. Space exploration, Earth observation, space robotics, and other growing fields in space require more sensors and more computational power for their missions. Furthermore, new sensors on the market produce better-quality data at higher rates, while new processors can substantially increase the available computational power. Therefore, near-future spacecraft will be equipped with large numbers of sensors producing data at rates not seen before in space while, at the same time, on-board data processing power will be significantly increased.
In use cases like guidance, navigation, and control, vision-based navigation has become increasingly important in a variety of space applications for enhancing autonomy and dependability. Future missions such as Active Debris Removal will rely on novel high-performance avionics to support image processing and artificial intelligence algorithms with large workloads. Similar requirements come from Earth observation applications, where on-board data processing can be critical to providing real-time, reliable information to Earth.
This new scenario of advanced space applications with increased data volumes and processing power brings new challenges: low determinism, excessive power needs, data losses, and large response latency.
In this article, a novel approach to on-board artificial intelligence (AI) is presented, based on state-of-the-art academic research on the well-known technique of data pipelining. Algorithm pipelining has seen a resurgence in high-performance computing due to its low power use and high throughput. The approach presented here provides a sophisticated threading model that combines pipelining and parallelization techniques applied to deep neural networks (DNNs), making these types of AI applications much more efficient and reliable. The approach has been validated with several DNN models developed for space applications (including asteroid landing, cloud detection, and coronal mass ejection detection) and on two different computer architectures. The results show that the data processing rate and power savings of the applications increase substantially with respect to standard AI solutions, enabling real AI in space.
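The pipelining idea can be illustrated with ordinary threads and queues: stage k processes frame n while stage k+1 processes frame n-1, keeping all stages busy. In the sketch below, the decode and inference stages are stubs standing in for real image decoding and a DNN forward pass; the article's actual threading model is more sophisticated.

```python
import queue, threading

def stage(fn, q_in, q_out):
    """Run one pipeline stage: pull items, apply fn, push results.
    A None item is a poison pill that shuts the stage down and is
    forwarded so downstream stages shut down too."""
    while True:
        item = q_in.get()
        if item is None:
            q_out.put(None)
            break
        q_out.put(fn(item))

q1, q2, results = queue.Queue(4), queue.Queue(4), queue.Queue()
decode = lambda raw: raw * 2          # stand-in for image decoding
infer = lambda img: img + 1           # stand-in for a DNN forward pass

threads = [threading.Thread(target=stage, args=(decode, q1, q2)),
           threading.Thread(target=stage, args=(infer, q2, results))]
for t in threads:
    t.start()

for frame in range(5):                # frames flow through both stages
    q1.put(frame)
q1.put(None)                          # signal end of stream
for t in threads:
    t.join()

out = []
while not results.empty():
    item = results.get()
    if item is not None:
        out.append(item)
print(out)                            # [1, 3, 5, 7, 9]
```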