Using Rust and Nix for faster flight software iteration loops
ABSTRACT. Writing flight software is already difficult, so why do we spend so much time fighting dependency issues, setting up custom-built scripts, and tracking down obscure compatibility issues with dependencies? A vast amount of engineering time and energy is spent on these banal tasks. Two new tools promise a way out of this morass: Rust and Nix. Nix is something of an enigma, combining a package manager, build system, programming language, and Linux distribution into one powerful tool. Rust, famous for its memory safety, likely needs no introduction. Beyond memory safety, Rust has a robust package ecosystem that focuses on interoperability. Rust packages like Serde, postcard, and embedded-hal make it easy to develop production-quality software quickly. Using Nix, new team members can set up their development environments in seconds. Cross-compiling and building full OS images are trivial with Nix. Together, Nix and Rust allow for rapid iteration and a streamlined developer experience, free from the pains of other tooling. This talk will outline how these techniques have been used in two upcoming missions: one in LEO and one in deep space.
ABSTRACT. The High-Performance Spaceflight Computing (HPSC) processor is a game-changing space computing solution that addresses the computational performance, energy management, and fault tolerance needs of NASA missions through 2040 and beyond. This presentation aims to provide a succinct overview of the program to the general public, outlining its key deliverables and the significant impact it is poised to have on the future of space computing and autonomous space missions. Attendees are cordially invited to participate in the forthcoming HPSC workshop for an in-depth exploration of the program's details.
This presentation will also provide a brief overview of the architecture and capabilities of the HPSC processor. Additionally, it will highlight the expanding ecosystem associated with the device.
Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls
ABSTRACT. For many space applications, traditional control methods are often used during operation. However, as the number of space assets continues to grow, autonomous operation can enable rapid development of control methods for different space-related tasks. One method of developing autonomous control is Reinforcement Learning (RL), which has become increasingly popular after demonstrating promising performance and success across many complex tasks. While it is common for RL agents to learn bounded continuous control values, this may not be realistic or practical for many space tasks that traditionally prefer an on/off approach for control. This paper analyzes the use of discrete action spaces, where the agent must choose from a predefined list of actions. The experiments explore how the number of choices provided to the agents affects their measured performance during and after training. This analysis is conducted for an inspection task, where the agent must circumnavigate an object to inspect points on its surface, and a docking task, where the agent must move into proximity of another spacecraft and "dock" with a low relative speed. A common objective of both tasks, and most space tasks in general, is to minimize fuel usage, which motivates the agent to regularly choose an action that uses no fuel. Our results show that a limited number of discrete choices leads to optimal performance for the inspection task, while continuous control leads to optimal performance for the docking task.
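To make the discrete-action setting concrete, here is a minimal sketch (our own illustration, not the paper's environment) of enumerating an on/off-style thrust action list for a 3-axis agent, including the fuel-free choice the abstract mentions:

```python
import itertools

# Hypothetical discretization: each axis thrusts negative, off, or positive.
THRUST_LEVELS = (-1.0, 0.0, 1.0)

def make_discrete_actions(levels=THRUST_LEVELS, axes=3):
    """Enumerate the predefined action list a discrete-action agent chooses from."""
    return list(itertools.product(levels, repeat=axes))

actions = make_discrete_actions()
# Index of the fuel-free action the reward function encourages the agent to pick.
NO_THRUST = actions.index((0.0, 0.0, 0.0))
```

With three levels per axis the agent faces 27 choices; varying the number of levels changes exactly the kind of choice count whose effect the paper's experiments measure.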
Machine Learning in Space: Surveying the Robustness of on-board ML models to Radiation
ABSTRACT. Modern spacecraft are increasingly relying on machine learning (ML). However, physical equipment in space is subject to various natural hazards, such as radiation, which may inhibit the correct operation of computing devices. Despite plenty of evidence showing the damage that naturally-induced faults can cause to ML-related hardware, we observe that the effects of radiation on ML models for space applications are not well-studied. This is a problem: without understanding how ML models are affected by these natural phenomena, it is uncertain “where to start from” to develop radiation-tolerant ML software.
As ML researchers, we attempt to tackle this dilemma. By partnering with space-industry practitioners specialized in ML, we perform a reflective analysis of the state of the art. We provide factual evidence that prior work did not thoroughly examine the impact of natural hazards on ML models meant for spacecraft. Then, through a “negative result,” we show that some existing open-source technologies can hardly be used by researchers to study the effects of radiation for some applications of ML in satellites. As a constructive step forward, we perform simple experiments showcasing how to leverage current frameworks to assess the robustness of practical ML models for cloud detection against radiation-induced faults. Our evaluation reveals that not all faults are as devastating as claimed by some prior work. By publicly releasing our resources, we provide a foothold—usable by researchers without access to spacecraft—for spearheading the development of space-tolerant ML models.
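As a minimal illustration of the kind of fault injection such studies rely on (our own sketch, not one of the released tools), a radiation-induced single-event upset can be emulated by flipping one bit of a float32 model weight:

```python
import struct

def flip_bit(weight: float, bit: int) -> float:
    """Emulate a single-event upset: flip one bit of a float32 value."""
    (raw,) = struct.unpack("<I", struct.pack("<f", weight))
    (upset,) = struct.unpack("<f", struct.pack("<I", raw ^ (1 << bit)))
    return upset

# A low mantissa bit barely perturbs the weight...
small = flip_bit(1.0, 0)
# ...while an exponent bit can be catastrophic (here 1.0 becomes +inf).
large = flip_bit(1.0, 30)
```

Which bit is hit matters enormously: the mantissa flip above perturbs the weight by roughly one part in ten million, while the exponent flip turns 1.0 into infinity, consistent with the observation that not all faults are equally devastating.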
Leveraging the Rust Programming Language for Space Applications
ABSTRACT. This work explores how to leverage the Rust programming language for space applications and remote system applications in general. It introduces a novel framework named sat-rs with the goal of simplifying the work of engineers writing on-board software for remote systems using Rust. A holistic approach is taken, covering the exploration of the existing ecosystem, the integration with ground systems, and the utilization of Rust’s distinctive language features to minimize the effort needed to create on-board software for remote systems.
Attack Surface Analysis for Spacecraft Flight Software
ABSTRACT. We examine ways to enhance cybersecurity in spacecraft operations by analyzing and reducing the attack surface of flight software. We advocate for reducing complexity in the software architecture and adopting more secure architectural principles to mitigate vulnerabilities and make spacecraft more resilient against cyber attacks. Utilizing a systematic approach, we focus on the real-time operating system (RTOS) and operating system abstraction layer (OSAL) as key areas of scrutiny and development of mitigations. This study's findings suggest strategies for simplifying abstractions to make them more secure, addressing implementation issues, and providing supporting evidence for moving to a more resilient architectural approach.
ABSTRACT. Understanding the Interface Control Drawing (ICD) boundaries is essential for designing digital data links. Defining the framework of the digital bit stream and clearly implementing the digital signal constructs are two distinct functions that must be accomplished. This work compares LVDS with CML signal levels in the context of interfacing SerDes outputs with inputs to photonic engines, which are capable of delivering low-jitter performance to link elements of differing distances and data rates. Consideration is given to the physical realization of 4-port and 16-port Ethernet links supporting real-time-sensitive data processing requirements.
Using the IP Protocol Suite for Applications in Deep Space
ABSTRACT. Communications in space have been implemented as point-to-point links, where a network has not been used or deployed. Compared to the Internet, communications in deep space have very long delays, up to 40 minutes round trip to Mars for example, and intermittency of minutes to hours to days. Twenty years ago, the Internet Protocol (IP) suite was identified as not suitable for space networking [RFC4838], so a completely new protocol stack based on the Bundle Protocol (BP) [RFC9171] has been designed. BP requires completely new and tailored routing, naming, security, APIs and applications, and a completely new way to write them. Since then, the IP protocol suite has evolved in various dimensions, such as for IoT, mobile and intermittent communications, with new protocols such as QUIC and CoAP. An initiative to reassess the use of the IP protocol suite in deep space is underway, where the whole stack, from IP to routing, security, naming, transport, network management and applications, is profiled for deep space use. Reusing current IP-based protocols enables, for example, the use of HTTP REST APIs over deep space links, network management protocols such as NETCONF and YANG, and naming using DNS. Therefore all the code and frameworks available today can be reused. However, these protocols and code usually carry assumptions about the network characteristics of the current well-connected and fast Internet, which are invalid in deep space. The DeepSpaceIP initiative identifies these assumptions and defines profiles for the protocols and applications to be usable in deep space. A testbed with simulated deep space communication characteristics is used to verify the applicability of these profiles. This presentation describes the rationale, proposal, architecture, profiles, most recent results and guidance to space application developers on how to use IP in deep space.
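A back-of-envelope sketch (our own, using an approximate Earth–Mars distance) shows why terrestrial protocol assumptions break: propagation delay alone dwarfs any handshake budget.

```python
C_KM_PER_S = 299_792.458  # speed of light in vacuum

def one_way_delay_s(distance_km: float) -> float:
    """Propagation-delay lower bound: no protocol can beat light time."""
    return distance_km / C_KM_PER_S

# Earth-Mars at roughly 3.6e8 km: ~20 min one way, ~40 min round trip,
# so any request/response exchange costs at least one such round trip.
rtt_s = 2 * one_way_delay_s(3.6e8)
```

A connection setup that needs even one extra round trip before the first byte of data adds tens of minutes, which is precisely the kind of implicit assumption the deep-space profiles must identify and remove.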
Versatile VPX Space Computing & Networking at Scale
ABSTRACT. Innoflight's Mission Processing Electronics (MPE) and Mission Networking Electronics (MNE) modular 3U VPX architectures offer unprecedented high-performance on-board processing (GPP, GPU, and FPGA), storage (up to 2 TB), networking (Ethernet switch and IP/MPLS router) and Input/Output (I/O) capabilities. These 2-slot (MPE-400 series) and 4-slot (MPE-600 series) VPX chassis modular solutions are ideal for payload/edge/AI processing, including Battle Management Command, Control & Communications (BMC3), mission data processing for advanced space sensors (IR, SAR/RF and hyperspectral), and networking applications, to name a few. Innoflight is already producing these products in large volumes, driven by the needs of the Space Development Agency (SDA) Proliferated Warfighter Space Architecture (PWSA) pLEO tranches and other missions.
ABSTRACT. SpaceFibre is a data link and network technology developed specifically for spacecraft on-board data-handling. It runs over electrical or fibre-optic cables, operates at very high data rates, and provides in-built quality of service, and fault detection, isolation and recovery capabilities. Because of these important characteristics, SpaceFibre is already flying in several spacecraft and being designed into over 60 more.
The key features of SpaceFibre are listed below:
• Very high-performance, e.g. 25 Gbit/s with a quad-lane link with each lane at 6.25 Gbit/s.
• Operates over electrical and fibre-optic media.
• High reliability and high availability using error-handling technology which is able to recover automatically from transient errors in a few microseconds without loss of information.
• Multi-lane capability providing increased bandwidth, rapid (few μs) graceful degradation in the event of a lane failure, hot and cold lane redundancy, and support for asymmetric traffic.
• Quality of service using multiple virtual channels across a data link, each of which is provided with a priority level, a bandwidth allocation and a schedule.
• Virtual networks that provide multiple independent traffic flows on a single physical network, which, when mapped to a virtual channel, acquire the quality of service of that virtual channel.
• Deterministic data delivery of information using the scheduled quality of service, in conjunction with priority and bandwidth allocation.
• Low-latency broadcast messages which provide time-distribution, synchronisation, event signalling, error reporting and network control capabilities.
• Small footprint which enables a complete SpaceFibre interface to be implemented in a radiation tolerant FPGA; for example, around 3% of an RTG4 FPGA for a typical instrument interface with two virtual channels.
• Backwards compatibility with SpaceWire at the network level, which allows simple interconnection of existing SpaceWire equipment to a SpaceFibre link or network.
• SpaceFibre is a data and control plane technology in the revised VITA 78 standard (SpaceVPX-2022) and a data-plane technology in the ADHA standard.
For instruments which have a modest data rate, e.g. 200 Mbit/s, SpaceWire may seem to be the obvious choice for collecting their data, but the capabilities of SpaceFibre make it very attractive for interfacing to moderate (100 Mbit/s) data-rate instruments as well as those with high (1 Gbit/s), very high (10 Gbit/s) and extremely high (>>10 Gbit/s) data-rates.
This paper introduces SpaceFibre and then describes the WBS-VIII, a high-performance FFT-based spectrometer instrument processor designed for spaceflight applications which has modest output data-rates. It then explains why SpaceFibre was used as its data and control interface. Some of the facilities inherent in SpaceFibre, beyond the raw performance, are used to significant advantage. Particular attention is given to the SpaceFibre broadcast message capability and how it was able to simplify the software in the instrument control unit triggering and controlling the WBS-VIII, while also reducing the cable harness mass.
SpaceFibre: A High-Performance, High-Availability Interconnect for Space Applications
ABSTRACT. SpaceFibre is an open standard (ECSS-E-ST-50-11C, 2019) for high-performance, high-availability payload data-handling network technology for space applications. It is currently flying on at least six spacecraft and being designed into around sixty more. SpaceFibre operates over electrical or fibre-optic media and is backwards compatible with SpaceWire (ECSS-E-ST-50-12C) at the packet level. SpaceFibre provides high data-rates, building on the capabilities of the Multi-Gigabit Transceivers (MGTs) available in current FPGAs and ASICs. When the data-rate of a single lane is insufficient, several lanes can be used to form a multi-lane link. For example, a quad-lane link with a lane raw data-rate of 7.5 Gbit/s will provide a link raw data-rate of 30 Gbit/s. SpaceFibre provides high availability by recovering from transient errors rapidly (~3 µs), without loss of data and close to where the fault occurred, avoiding fault propagation. In a multi-lane link, should one lane fail, the link automatically reconfigures (taking ~2 µs once the fault has been detected) and continues to operate with the remaining lanes. Once again, this is done without loss of data. Hot or cold redundant lanes can be added to replace a faulty lane. SpaceFibre’s quality of service supports several virtual channels, each with a priority, reserved bandwidth and schedule. If a lane in a multi-lane link fails, the quality of service settings determine which virtual channels are able to send data and which are held up due to the reduced bandwidth. These dynamic and fast error and fault recovery capabilities provide the high availability of a SpaceFibre link. SpaceFibre has a small footprint and is straightforward to manage. SpaceFibre was developed by STAR-Dundee and the University of Dundee, with inputs from international engineers, and funded by STAR-Dundee, the European Union, ESA and UKSA.
INTRODUCTION:
Space avionics have leveraged commercial standards for decades. Following VME in the 1990s and CompactPCI in the 2000s, the next evolution of standardized hi-rel electronics is VPX. A standards committee is currently working on a variant of VPX specifically architected for space avionics applications. At the core of the avionics is the flight computer, typically a single board computer (SBC). An HPSC chip-based SBC, coupled with the emerging VPX standard for space, will enable new capabilities such as on-orbit autonomous decision making and AI/ML applications.
The intent is that this poster would touch on the following Topics of Interest:
HPSC SBC:
The HPSC processor will offer state-of-the-art capability in terms of processing power, I/O connectivity, secure boot and operation, and radiation tolerance. The SBC built around the HPSC chip will conform to the emerging space VPX standard and supporting architecture.
VPX SPACE STANDARD:
Main features and benefits of the standard will be highlighted, including built-in redundant capabilities and chassis management concepts.
AVIONICS CHASSIS:
A notional avionics chassis, consisting of several standards-based Plug-In Cards (PICs) and power supplies, will be considered. Additionally, single-string and redundant architectures will be explored.
APPLICATIONS:
Finally, high-level applications such as spacecraft bus avionics and payload processing units will be discussed. The concept of “software-defined avionics” will also be touched on. The combination of an HPSC SBC and the emerging space VPX standard will enable new and exciting capabilities for future projects.
ABSTRACT. Interoperability and scalability of robotic manipulators will be key to develop and sustain a lunar surface and cislunar ecosystem. From in-space servicing, assembly, and manufacturing (ISAM) to logistics, maintenance, and science operations, robotic manipulation is a critical NASA capability need and the demand for high-performance spaceflight computing will only rise as robotic tasks become more autonomous. With increased complexity, testbeds for research, feasibility studies, and technology demonstrations will be essential. The Dexterous Robotics Team at NASA Johnson Space Center has established multiple robotic manipulation testbeds taking a supervised autonomous remote operations approach and plans to infuse HPSC to emulate the flight environment and close the gap between space technology development and flight operations.
Evaluating a Cognitive Extension for LTP in a Spacecraft Emulation Testbed
ABSTRACT. In the domain of space communications, particularly in regions beyond cislunar space, the development of advanced networking solutions is essential to address the challenges posed by limited connectivity, substantial propagation delays, and radio signal variations. This study explores a data-driven intelligence approach to the Licklider Transmission Protocol (LTP), specifically focusing on dynamically adjusting the maximum payload size of segments. Prior research has emphasized the potential benefits of dynamically adjusting this parameter, introducing the concept of Cognitive LTP. This paper presents the software implementation of Cognitive LTP (CLTP) within an open-source Delay Tolerant Networking (DTN) framework, specifically the High-rate Delay Tolerant Networking (HDTN), and evaluates its performance under realistic space conditions. Leveraging the Cognitive Ground Testbed (CGT), developed by NASA GRC for spacecraft communication emulation, this study effectively bridges the gap between theoretical advancements and practical applications. By thoroughly analyzing CLTP's functionality within the CGT, this research offers insights into the practical implications of adaptive networking strategies, emphasizing the importance of conducting tests in relevant environments for the maturation of space communication technologies.
ABSTRACT. The Interplanetary Network (IPN) emerges as the backbone for communication between various spacecraft and satellites orbiting distant celestial bodies. This paper introduces the Interplanetary Network Visualizer (IPN-V), a software platform that integrates interplanetary communications planning support, education, and outreach. IPN-V bridges the gap between the complexities of astrodynamics and network engineering by enabling the generation and assessment of dynamic, realistic network topologies that encapsulate the inherent challenges of space communication, such as time-evolving latencies and planetary occlusions. Leveraging the power of Unity 3D and C#, IPN-V provides a user-friendly 3D interface for the interactive visualization of interplanetary networks, incorporating contact tracing models to represent line-of-sight communication constraints accurately. IPN-V supports importing and exporting contact plans compatible with established space communication standards, including NASA’s ION and HDTN formats. This paper delineates the conception, architecture, and operational framework of IPN-V while evaluating its performance metrics.
On the Role of Delay Tolerant Networks and Contact Graph Routing in Direct-to-Satellite IoT
ABSTRACT. This paper explores the integration of Delay-Tolerant Networking (DTN) and Contact Graph Routing (CGR) within Direct-to-Satellite Internet of Things (DtS-IoT) networks, utilizing the FLoRaSat discrete-event simulator based on OMNeT++. By incorporating a DTN model and the CGR algorithm, the study evaluates the efficacy of these technologies in optimizing data routing and handling across emerging Low-Earth Orbit (LEO) satellite networks. The research delves into various satellite fleet configurations, including Star and Delta constellations, across different numbers of orbital planes and with the integration of opportunistic Inter-Satellite Links (ISLs). Results demonstrate that the DTN store-carry-and-forward approach, enhanced by CGR, significantly reduces end-to-end delivery delays. Specifically, the implementation achieves an average end-to-end delivery delay as low as 10 minutes in 4-plane Star constellations with 24 satellites and immediate forwarding in 8-plane Delta constellations of equivalent size, underscoring the potential of DTN and CGR to improve the efficiency and reliability of emerging DtS-IoT networks.
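At its core, CGR performs an earliest-arrival search over a time-tagged contact plan. The sketch below is a simplified illustration of that idea (field names are ours; real implementations also account for contact volume, queueing and route caching):

```python
import heapq
from collections import namedtuple

# A contact is a transmission opportunity between two nodes over [start, end],
# with a one-way light time (owlt). Times are in arbitrary consistent units.
Contact = namedtuple("Contact", "frm to start end owlt")

def earliest_arrival(contacts, source, dest, t0=0.0):
    """Dijkstra-style earliest-arrival search over a contact plan --
    the store-carry-and-forward core of Contact Graph Routing, simplified."""
    best = {source: t0}
    heap = [(t0, source)]
    while heap:
        t, node = heapq.heappop(heap)
        if node == dest:
            return t
        if t > best.get(node, float("inf")):
            continue  # stale heap entry
        for c in contacts:
            # Usable only if we reach the sender before the contact closes;
            # data waits (is "carried") until the contact opens.
            if c.frm == node and t <= c.end:
                arrival = max(t, c.start) + c.owlt
                if arrival < best.get(c.to, float("inf")):
                    best[c.to] = arrival
                    heapq.heappush(heap, (arrival, c.to))
    return None  # destination unreachable under this plan
```

For example, with a satellite-to-gateway contact over [10, 20] and a gateway-to-ground contact over [30, 40], data generated at t=0 is stored on board and arrives at t=31: the waiting time between contacts, not the light time, dominates delivery delay.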
ABSTRACT. F Prime is a free, open-source and flight-proven flight software development ecosystem developed at the NASA Jet Propulsion Laboratory that is tailored for small-scale systems such as CubeSats, SmallSats, and instruments. F Prime comprises several elements: (1) an architectural approach that decomposes flight software into discrete components with well-defined interfaces that communicate over ports; (2) a C++ framework providing core capabilities such as message queues and an OS abstraction layer; (3) a growing collection of generic components for basic features such as command dispatch, event logging, and memory management that can be incorporated without modification into new flight software projects; and (4) a suite of tools that streamline key phases of flight software development from design through integrated testing.
Software Modeling using the F Prime Prime (FPP) Domain Specific Language
Component Implementation
Deploying to hardware and using the F Prime ground system
Advance enrollment is requested to confirm a seat at the tutorial. If you are interested in participating or have any questions, please email fprime@jpl.nasa.gov.
A novel method for rapid orbital deployment of ML for space applications
ABSTRACT. Demand for orbital image data is increasing at a pace much faster than the down-link capacity to move this data to ground stations for processing. Space edge computing for Deep Learning (DL) based analysis of orbital image data (categorization/change detection) offers a promising solution. Dynamic deployment of DL models to space edge computing devices is desirable but significantly constrained by uplink bottlenecks, hardware limitations and power budgets. This paper proposes a selection methodology for dynamic deployment of DL models in an on-orbit context. Making use of the Once-for-all (OFA) framework, our proposed solution considers the required Machine Learning (ML) accuracy, upload availability and hardware limitations for time-critical earth observation scenarios. Ground-station-aware orbital simulations are performed to determine the maximum transmission size for a given time window, which in turn determines the maximum network size. This, combined with space edge computing hardware limitations, is used as input for selecting a suitable OFA sub-network. In many scenarios tested, this methodology resulted in model transmission in one fewer orbital period for a small decrease in top-1 accuracy.
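The uplink bound that drives sub-network selection can be sketched as simple arithmetic (parameter names and the overhead margin are our assumptions, not values from the paper):

```python
def max_model_bytes(window_s: float, uplink_bps: float,
                    protocol_overhead: float = 0.2) -> int:
    """Largest model that fits through the uplink during one ground-station
    contact window; this cap, together with edge-hardware limits, bounds
    the OFA sub-network that can be deployed in that window."""
    usable_bits = window_s * uplink_bps * (1.0 - protocol_overhead)
    return int(usable_bits // 8)

# An 8-minute pass over an assumed 1 Mbit/s uplink, with 20% protocol
# overhead, leaves room for a model of at most ~48 MB.
budget = max_model_bytes(8 * 60, 1_000_000)
```

Any OFA sub-network whose serialized weights exceed this budget must wait for an additional pass, which is exactly the extra orbital period the proposed methodology tries to save.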
Teledyne e2v Space Radiation Tolerant Data Processing Solutions
ABSTRACT. Teledyne e2v has a strong portfolio of space data processing solutions, extensively qualified and characterized against radiation, to address edge computing systems. In this presentation, Teledyne e2v will present the key features of its space ARM®-based multi-core processors, processing modules and high-speed DDR4 memories that are qualified for space.
Fault-Tolerant Space Weather Prediction: Leveraging Raw DSCOVR Data with Long Short-Term Memory in Machine Learning
ABSTRACT. In the realm of space weather forecasting, the emergence of our proposed Helios-LSTM algorithm signifies a groundbreaking leap towards precision in predicting solar wind activity. With a paramount focus on the urgent requirement for accurate forecasts, this paper introduces a cutting-edge deep learning model that not only monitors solar wind patterns but achieves an unprecedented 94% accuracy rate. Our research stems from a meticulous integration of data from NASA’s Solar Wind, Solar Radiation (CME), and Geomagnetic Storm APIs, culminating in a robust dataset designed for training our proposed model. Our methodology encompasses sophisticated data preprocessing techniques, leveraging hourly features from solar wind data and employing imputation strategies for missing values. The core of the model architecture includes a Bidirectional LSTM layer to capture nuanced temporal dependencies, three dense layers for comprehensive feature transformation, and a GRU layer to further enhance the analysis of solar wind activity. Trained on 29 features, our Helios-LSTM algorithm not only outperforms existing methods but also demonstrates its prowess in predicting solar wind patterns over varying time intervals, from the last two hours to the last seven days. The significance of our research extends beyond solar wind forecasting, as solar wind interactions with Earth’s magnetic field can trigger geomagnetic storms, presenting imminent risks to critical infrastructure. By forecasting the Disturbance Storm-Time (Dst) index, our model utilizes data from NASA’s ACE and NOAA’s DSCOVR satellites to unravel the complex relationships between interplanetary magnetic fields, solar wind plasma, and sunspot activity. Evaluation metrics such as root mean square error and coefficient of determination substantiate the efficacy of our proposed Helios-LSTM model in predicting geomagnetic storms. The outcomes not only offer invaluable insights for satellite operators, power grid managers, and navigation systems but also lay the foundation for a predictive model that safeguards Earth against the disruptive impacts of geomagnetic storms. Our research heralds a new era in space weather forecasting, providing decision-makers with a robust and timely tool to fortify essential systems and brace for geomagnetic disturbances.
ABSTRACT. Chiplets have the potential to increase the reliability of electronic architectures and to enable scalability and rapid reuse, but they also bring significant design challenges in topology, placement, power consumption, thermal impact and latency. We will discuss the application of chiplets, their potential to accelerate space deployment, and how Mirabilis Design is empowering systems engineers and architects to deploy faster.
ABSTRACT. Project AV (MAVIS) is an interdisciplinary university project undertaken by the SEDS-UPRM chapter, focused on designing and manufacturing a semi-autonomous Mars rover to compete in the University Rover Challenge (URC). As the first team from the University of Puerto Rico to develop a semi-autonomous rover and robotic arm, MAVIS represents a milestone in collaborative innovation across diverse disciplines.
The MAVIS rover, inspired by the Sherpa Rover, consists of six major sub-assemblies: wheels, steering, suspension, chassis, robotic arm, and science suite. Constructed mostly using additive manufacturing techniques, MAVIS measures 0.80 m in height and 1.05 m in length and width, weighing 50.43 kg with payloads. The chassis, designed with an octahedral shape, integrates the suspension at its corners and features optimized floors for efficient electrical component arrangement. Additionally, it includes a front payload bay for interchangeable installation of the robotic arm and science suite. The chassis, primarily made of aluminum sheets, weighs 6.45 kg and offers strategic dimensions for rover stability and operational versatility.
MAVIS's suspension system, composed of two aluminum control arms and a stainless-steel spring, ensures stability and maneuverability on diverse terrains, supporting a ground clearance of 0.275 m at a 45-degree operational position. The steering system integrates seamlessly into the end of the suspension, enabling both active and passive modes for versatile navigation. For the wheels, the team developed an airless tire, crafted from thermoplastic polyurethane (TPU) and Nylon 6 components, featuring a unique "M" tread design for enhanced traction and obstacle traversal capabilities. An anti-deformation barrier prevents tire damage, crucial for mission success on challenging terrain.
One of the two payloads, the robotic arm, boasts five degrees of freedom for high-dexterity tasks in Extreme Delivery and Equipment Servicing Missions. The arm is made predominantly from Nylon 6 and Nylon 11 to ensure a design that is both strong and lightweight. The arm's end-effector, driven by linear actuation, ensures precise object manipulation and task execution. ROS and RViz provide the control station with real-time visualization and accurate arm position during operation, and a GUI was developed to precisely control the arm's movements.
The science suite, the second payload, houses spectrometry mechanisms and sample collection systems for in-situ analysis and life-detection tasks. MAVIS's onboard stereo camera aids in geological feature analysis, guiding soil sample collection and subsequent ATP bioluminescence and fluorescence spectrometry analyses for microbial activity and environmental assessments.
Powering MAVIS is a 22.2 V 22000 mAh LiPo battery, managed by a comprehensive power distribution board (PDB) for optimal energy distribution. The PDB interfaces with essential components, including the motor systems for steering and arm movements, with planned implementation of a robust battery monitoring system for enhanced operational insights.
Software integration via ROS enables autonomous navigation, sensor fusion, and seamless communication, supported by Ubiquity devices for local area network connectivity and remote operations. MAVIS's graphical user interface (GUI) streamlines monitoring, command execution, and automation scripts, enhancing operational efficiency and mission success probabilities.
ABSTRACT. Motiv Space Systems intends to utilize the High-Performance Space Computing (HPSC) processor's generational leap in space-qualified computational capability to drive the next generation of smart payloads, with a focus on robotic manipulation systems. Motiv's space-rated modular manipulation platform, the xLink, provides a powerful and flexible hardware platform for the HPSC to control across a wide range of operating conditions: from on-orbit servicing, assembly, and manufacturing to lunar infrastructure construction. The xLink’s 7-DOF configuration has high-accuracy joint torque sensing, class-leading millimeter repeatability, and a 6-DOF force-torque sensor, enabling precise dexterous operations as well as the potential for the force and compliance control needed for contact-dynamics operations in many on-orbit manipulation tasks. The HPSC will act as a high-level controller for the plethora of sensors and actuators in the xLink system, and will be used to execute cutting-edge algorithms encompassing sensor fusion, vision-based control, and other areas that enable in-space operations.
ABSTRACT. F Prime is a free, open-source and flight-proven flight software development ecosystem developed at the NASA Jet Propulsion Laboratory that is tailored for small-scale systems such as CubeSats, SmallSats, and instruments. F Prime comprises several elements: (1) an architectural approach that decomposes flight software into discrete components with well-defined interfaces that communicate over ports; (2) a C++ framework providing core capabilities such as message queues and an OS abstraction layer; (3) a growing collection of generic components for basic features such as command dispatch, event logging, and memory management that can be incorporated without modification into new flight software projects; and (4) a suite of tools that streamline key phases of flight software development from design through integrated testing.
Software Modeling using the F Prime Prime (FPP) Domain Specific Language
Component Implementation
Deploying to hardware and using the F Prime ground system
Advance enrollment is requested to confirm a seat at the tutorial. If you are interested in participating or have any questions, please email fprime@jpl.nasa.gov.
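The component-and-port decomposition described above can be sketched in a few lines. The following is a hypothetical Python illustration of the idea only (F Prime itself is a C++ framework, and none of these names come from its API): a component owns a message queue, commands arrive through an input port, and a dispatcher invokes registered handlers while logging events.

```python
import queue

class Component:
    """Minimal sketch of a queued component: commands arrive on an
    input port and are dispatched later from a message queue."""
    def __init__(self):
        self.msg_q = queue.Queue()
        self.handlers = {}   # opcode -> handler function
        self.events = []     # logged events, newest last

    def register(self, opcode, handler):
        self.handlers[opcode] = handler

    def cmd_in(self, opcode, args=()):
        # Invoked through the component's input port; just enqueue,
        # so the caller never blocks on the handler's work.
        self.msg_q.put((opcode, args))

    def dispatch_one(self):
        # Executed on the component's own thread of control.
        opcode, args = self.msg_q.get()
        if opcode in self.handlers:
            self.handlers[opcode](args)
            self.events.append(("CMD_OK", opcode))
        else:
            self.events.append(("CMD_UNKNOWN", opcode))

led = Component()
led.register("LED_ON", lambda args: None)
led.cmd_in("LED_ON")
led.dispatch_one()       # logs ("CMD_OK", "LED_ON")
```

The tutorial's FPP modeling step generates the equivalent queueing and dispatch scaffolding in C++ from a declarative component description.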
Emerging Threats of AI-Integration in Space User Segment: A Reference Architecture and Attack Tree Analysis
ABSTRACT. As the space sector expands with new types of satellites, orbital systems and services, the user segment faces escalating security threats. This segment delivers crucial services enabling interactions between users and space systems, highlighting the need for strong security mechanisms as attack surfaces widen and become more sophisticated. In particular, the adoption of artificial intelligence (AI) in the space domain brings new attack vectors that traditional methods cannot address. To systematically analyze this emerging threat landscape, this paper develops a reference architecture to model the user segment’s components, communications and processes. We specifically assess the impact of AI on the attack surface by constructing attack trees for Earth Observation scenarios with and without AI integration, using dedicated space and AI threat modeling frameworks (i.e., SPARTA and ATLAS). By comparing threats and impacts between these attack trees, we determine the unique security challenges introduced by exploiting uses of AI. These insights inform priorities for security strategies to defend against evolving AI-driven threats, and highlight the caveats of AI integration in the space user segment.
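The attack-tree comparison described above rests on a simple structure: leaves represent attacker capabilities, and internal AND/OR gates combine them toward a goal. The following is a hypothetical Python sketch of that structure; the scenario and node names are illustrative and not drawn from SPARTA or ATLAS:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    gate: str = "LEAF"            # "AND", "OR", or "LEAF"
    children: list = field(default_factory=list)
    feasible: bool = False        # for leaves: can the attacker do this?

def achievable(node):
    """An AND node needs all children; an OR node needs any child."""
    if node.gate == "LEAF":
        return node.feasible
    results = [achievable(c) for c in node.children]
    return all(results) if node.gate == "AND" else any(results)

# Hypothetical Earth Observation scenario: poisoning an on-board AI
# model requires BOTH update-channel access AND a crafted model.
poison = Node("poison model", "AND", [
    Node("access update channel", feasible=True),
    Node("craft malicious model", feasible=True),
])
root = Node("compromise user segment", "OR", [
    Node("spoof downlink", feasible=False),
    poison,
])
print(achievable(root))  # True: the AI path makes the goal reachable
```

Comparing trees with and without the AI branch then amounts to evaluating the root goal under each tree and diffing the feasible paths.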
ABSTRACT. The deployment of machine learning (ML) algorithms is an increasing requirement in many spacecraft, despite the heavy computational demands. In this talk, the challenges around this deployment will be discussed, with comparison to ground-based options. The technology required to implement these solutions in the space environment will be discussed, and the off-the-shelf reference designs and development platforms that Alpha Data provides to enable customers to achieve such solutions in FPGA and Adaptive SoC devices will be presented, including an update on the latest AMD Versal based cards.
Transfer Learning with Synthetic Satellite Imagery
ABSTRACT. The field of satellite imagery suffers from the scarce availability of open datasets that can be used to develop novel algorithms. One of the most recent open datasets, the RarePlanes dataset, provides real satellite images of aircraft parked along runways, with excellent resolution and hand-made annotations. In the context of training deep convolutional neural networks (CNNs), the RarePlanes dataset has a class imbalance: some aircraft classes are sufficiently represented, while others suffer from a short supply of annotated instances. With this pitfall in mind, the RarePlanes dataset includes synthetic data that can be used to compensate for problems raised during CNN training. This report assesses the use of synthetic satellite imagery to improve CNN training on real satellite images using the transfer learning (TL) technique. TL with synthetic satellite imagery is compared against TL with the Common Objects in COntext (COCO) dataset and against no TL with randomly initialized weights. Results indicate that TL with synthetic satellite imagery provides better results when applied to real satellite imagery, supporting the use of synthetic data as a substitute for real data in CNN applications.
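The three initialization schemes being compared (TL from synthetic imagery, TL from COCO, and random initialization) differ only in where the starting weights come from. A toy Python sketch of the recipe, with stand-in dictionaries in place of real CNN tensors and all names hypothetical:

```python
import random

def new_model():
    # Toy stand-in for a CNN: backbone feature layers plus a classifier head.
    return {"backbone": {"conv1": None, "conv2": None}, "head": {"fc": None}}

def init_random(model, seed=0):
    """Baseline: every layer starts from random weights."""
    rng = random.Random(seed)
    for layer in model.values():
        for k in layer:
            layer[k] = rng.uniform(-0.1, 0.1)
    return model

def init_transfer(model, pretrained, seed=0):
    """Transfer learning: copy backbone weights learned on the source
    dataset (synthetic imagery or COCO), re-initialize only the head."""
    model["backbone"] = dict(pretrained["backbone"])
    rng = random.Random(seed)
    for k in model["head"]:
        model["head"][k] = rng.uniform(-0.1, 0.1)
    return model

# Pretend this model was trained on the synthetic RarePlanes split.
synthetic_pretrained = init_random(new_model(), seed=42)
model = init_transfer(new_model(), synthetic_pretrained)
# The backbone matches the source model; the head starts fresh.
```

Fine-tuning on the real imagery then proceeds from these starting weights; in a real framework the same idea is a state-dict copy plus a fresh final layer.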
Advances in High-rate Delay Tolerant Networking On-board the International Space Station
ABSTRACT. The High-rate Delay Tolerant Networking (HDTN) project at the NASA John H. Glenn Research Center (GRC) is developing a performance-optimized Delay Tolerant Networking (DTN) implementation able to provide reliable multi-gigabit-per-second automated network communications for near-Earth and deep space missions. To that end, this paper provides an overview of the testing and integration efforts leading toward future infusion of HDTN with the International Space Station (ISS). Over the past year, the HDTN team has performed a series of end-to-end tests between the Software Development and Integration Laboratory (SDIL) at the Lyndon B. Johnson Space Center (JSC) and Marshall Space Flight Center’s Huntsville Operations Support Center (HOSC). The testing has focused on a realistic emulation of the ISS Ku-band RF link, which operates at a maximum of 500 Mbps downlink with a 600 ms round-trip time. In this environment, the HDTN onboard gateway has been tested for interoperability with ISS payload nodes and the DTN ground gateway, store-and-forward capability, reliable transport using the Licklider Transmission Protocol (LTP), and successful recovery from unexpected loss of signal. In addition to integration testing, the HDTN team has developed a series of software engineering practices to ensure the stability and maturity of the implementation. As a result, HDTN is preparing to service a variety of flight missions, the first of which is in support of ISS high-rate communications.
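The store-and-forward behavior exercised in these tests can be illustrated with a toy gateway that buffers bundles during loss of signal and forwards them in order once the link returns. This is a conceptual Python sketch only; the names are invented and do not reflect HDTN's actual implementation:

```python
from collections import deque

class StoreAndForwardGateway:
    """Toy DTN gateway: bundles are held while the link is down
    (loss of signal) and forwarded in arrival order once it returns."""
    def __init__(self):
        self.storage = deque()   # persisted bundles awaiting a contact
        self.link_up = False
        self.delivered = []      # bundles forwarded to the next hop

    def receive(self, bundle):
        self.storage.append(bundle)
        self._flush()

    def set_link(self, up):
        # Models acquisition / loss of the Ku-band link.
        self.link_up = up
        self._flush()

    def _flush(self):
        while self.link_up and self.storage:
            self.delivered.append(self.storage.popleft())

gw = StoreAndForwardGateway()
gw.receive("payload-1")   # arrives during loss of signal: stored
gw.receive("payload-2")
gw.set_link(True)         # signal reacquired: both forwarded in order
```

Reliable transport over the lossy link itself (retransmission of missing segments) is what LTP adds on top of this buffering behavior.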
Rapid Development and Automated Testing of Dependable Heterogeneous Multi-node Space Systems with Open Source Tools
ABSTRACT. As the complexity of space-faring hardware and software increases, so does the significance of extensive automated testing, traceability and a dependable, collaborative development environment. This presentation will discuss the application of the Renode simulation framework, System Designer, Remote Device Fleet Manager and other open source tooling developed by Antmicro in the context of our customers’ space use cases, and the ways in which an open source software-driven development approach leads to better, reliable devices – with faster turnaround. We will present various aspects of the design and verification process of a modular, heterogeneous multi-node OBC system involving multiple architectures (such as Arm, RISC-V and LEON), Linux and RTOS nodes, and soft FPGA IP. Special focus will be given to complex system testing in simulation using Renode and how it enables SW/HW co-verification and integration testing. This scalable methodology, based on proven open source solutions, translates to faster time-to-market, and is successfully being applied in several current missions.
ABSTRACT. Introduction
The GR716B is a radiation-hardened mixed-signal microcontroller specifically designed for spacecraft avionics. The GR716B sets itself apart from other microcontroller solutions through its performance and the number of interfaces it supports. The GR716B is suitable for implementation of distributed control, bridging between communication buses, DC/DC control applications, FPGA and COTS supervision, and as a replacement for FPGAs in terminal units. The device entered manufacturing in December 2023. The presentation will describe the overall functionality and application examples.
Architecture
Based on a LEON3FT processor and two real-time accelerators (RTAs), the GR716B integrates 192 KiB on-chip RAM (with EDAC) and fault-tolerant memory controllers. The LEON3FT features single-cycle instruction execution and data fetch from on-chip RAM. Execution determinism is guaranteed by the deterministic instruction execution time and fixed interrupt latency. The system operating frequency can be set up to 100 MHz.
The microcontroller includes an embedded ROM with a boot loader, a dedicated SPI memory interface with 4-byte addressing support, and an 8-bit fault-tolerant SRAM/PROM memory controller capable of accessing up to 16 MiB ROM and 32 MiB SRAM.
I/O interfaces include a 2-port SpaceWire router, Ethernet, MIL-STD-1553B, CAN FD, PacketWire, PWM, SPI, UART, I2C, and GPIO. The analog functions include radiation-hardened cores such as DACs and ADCs, analog comparators, a precision voltage reference, power-on reset, brownout detector, low drop-out regulator (LDO), LVDS transceivers, PLL, and all active parts for a crystal oscillator (XO). All functionality is designed for a total irradiation dose of 300 krad(Si), and analog performance including the precision voltage reference is designed for 100 krad(Si).
Software Ecosystem
The GR716B's Software Development Environment (SDE) includes bare metal driver support, an instruction-level simulator, and a debugger. The Zephyr open-source RTOS is being ported to the GR716B, expanding its software compatibility.
Applications
The GR716B is equipped with dedicated hardware designed to support at least four independent digitally-controlled DC/DC converters. It can also accommodate complex switching power converters, including various full-bridge topologies. Real-time performance for DC/DC applications is ensured through the close integration of the RTAs with hardware functions such as the integrated ADCs, DACs, and analog comparators.
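A digitally-controlled DC/DC loop of the kind the RTAs support can be sketched as a PI regulator driven by ADC samples that outputs a PWM duty cycle. The following Python sketch is purely illustrative: the gains, the first-order plant model, and all function names are assumptions, not GR716B firmware:

```python
def pi_step(setpoint, measured, integral, kp=0.02, ki=0.002):
    # One control iteration: ADC voltage reading in, PWM duty cycle out.
    error = setpoint - measured
    integral += error
    duty = kp * error + ki * integral
    return max(0.0, min(1.0, duty)), integral   # clamp to a valid duty range

def simulate(v_target=5.0, v_in=12.0, steps=2000):
    # Toy first-order plant standing in for the converter's output filter:
    # the output voltage drifts toward duty * v_in each iteration.
    integral, v_out = 0.0, 0.0
    for _ in range(steps):
        duty, integral = pi_step(v_target, v_out, integral)
        v_out += 0.05 * (duty * v_in - v_out)
    return v_out
```

On the actual device, the comparators and fixed interrupt latency matter because a loop like this must run at the converter's switching cadence with deterministic timing.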
Additionally, the GR716B incorporates GRSCRUB, an FPGA configuration supervisor responsible for both programming and scrubbing the FPGA configuration memory. This feature aims to prevent the accumulation of errors over time. Compatible with the Kintex UltraScale and Virtex-5 AMD/Xilinx FPGA families, the core can be configured to scrub either the entire FPGA configuration memory or a specific subsection. GRSCRUB interfaces with the FPGA through the SelectMAP interface.