SMC-IT/SCC 2023
PROGRAM FOR TUESDAY, JULY 18TH

08:30-09:15 Session 2: Keynote: Laurie Leshin
08:30
Dare Mighty Things Together 

ABSTRACT. In its 86-year history, the Jet Propulsion Laboratory (JPL), NASA’s only federally funded research and development center, has been at the forefront of scientific discovery for the benefit of humanity. From launching the very first American satellite in 1958 to landing rovers on Mars, from fighting climate change to discovering thousands of other worlds – JPL dares mighty things by imagining and then achieving what others might think impossible. With a full manifest of missions, the future is bright at JPL as it seeks to have an expanded positive impact on the space ecosystem for decades to come.

09:15-10:00 Session 3: Keynote: Katherine Bouman
09:15
Exploring Space to Further Extract Science from Black Hole Images

ABSTRACT. The first images of light bending around a black hole, captured by the ground-based Event Horizon Telescope (EHT), have unlocked a new extreme laboratory of gravity that has already begun ushering in a new era of precision black hole physics on horizon scales. This talk will not only present the methods and procedures used to produce the first images of the M87 and Sagittarius A* black holes, but also highlight how remaining scientific questions motivate us to improve this computational telescope to see black hole phenomena still invisible to us. Space-based approaches may address these pressing questions in black hole science over the coming decades. In particular, we will discuss three broad classes of mission architectures: those involving one or more orbiters in a) low- or b) medium-Earth orbit, and those involving a single spacecraft at c) a considerable distance from Earth. The first two architectures would rapidly fill an otherwise sparse aperture in order to increase image fidelity, while the latter would provide extreme angular resolution. We discuss how these mission architectures, along with advances in analysis techniques, could help address the following science questions: testing theories of gravity, understanding jet formation and launching, and understanding black hole growth. This talk will also briefly discuss future directions currently being pursued and how we are developing techniques that will allow us to extract the evolving structure of a black hole over the course of a night, perhaps even in three dimensions.

10:15-11:45 Session 4A: Workshop: Addressing Assurance Challenges in Space Autonomy
10:15
Autonomy for Space Robots: Past, Present, and Future

ABSTRACT. Autonomy for Space Robots: Past, Present, and Future

10:45
TBA

ABSTRACT. TBA

11:15
Increasingly Autonomous Perception and Decision Systems for Advanced Air Mobility

ABSTRACT. Advanced Air Mobility (AAM), including passenger transport and Uncrewed Aircraft Systems (UAS), requires autonomy capable of safely managing contingency responses as well as routine flight. This talk will describe pathways from aviation today to a fully autonomous AAM of the future. Research toward comprehensive low-altitude flight environment mapping will be summarized. Assured Contingency Landing Management (ACLM) requires a pipeline in which hazards or failures that risk loss of vehicle controllability or loss of landing site reachability trigger a contingency response. Pre-flight preparation of contingency landing plans to prepared landing sites is supplemented by online planning when necessary. Dynamic airspace geofencing in support of UAS Traffic Management (UTM) will be defined and compared with traditional fixed airspace corridor solutions. The talk will conclude with a high-level mapping of the presented aviation solutions to space applications.

10:15-11:45 Session 4B: Workshop: Applications for In-Space Assembly and Servicing
Location: Noyes(Small)
10:15
Brief Introduction on Workshop

ABSTRACT. Brief Introduction on Workshop

10:20
In Space Servicing, Assembly, and Manufacturing Current Status and Envisioned Future

ABSTRACT. For several decades, NASA has employed in-space systems to enhance the performance and extend the useful life of operational orbital assets. In at least one case, an operational mission was not only enhanced but enabled: the International Space Station was made possible by crewed and robotic in-space assembly and continues to support installation and operation of new science and technology payloads. In several cases (Hubble Space Telescope, Intelsat 401, Westar and Palapa), major operational assets were rescued or repaired soon after launch when otherwise mission-ending anomalies occurred or were detected. In addition to the original rescue, Hubble was upgraded four times, enabling high-demand, world-class science across four decades. More recently, two Northrop Grumman Mission Extension Vehicles have captured two Intelsat spacecraft near the end of their life and fuel capacity to take over maneuvering duties.

Despite these recent operational achievements, and except for large human exploration vehicles and large space telescopes, space architects rarely consider in-orbit servicing and assembly capabilities in their future planning. Technologies such as multi-launch mission architectures (and rendezvous and proximity operations systems), docking systems, external robotics, advanced tools, modular systems and structures, and fluid transfer systems are available today to support these missions. In-space manufacturing will soon be operational, enabling resilient missions that recover from on-orbit failures and expanding the utilization of space. We envision a future that includes these capabilities and discuss the cultural, engineering, and technological challenges to achieving this vision. We also discuss the status of the space industry’s slow but steady march toward widespread operational use of in-space servicing, assembly, and manufacturing.

10:45
Demonstrating In-space Assembly and Servicing Technologies on the Ground and in Space

ABSTRACT. Next generation space science missions can utilize in-space servicing, assembly, and manufacturing (ISAM) to enable and enhance the architectures needed to answer the key scientific questions of the future. This talk will summarize the work NASA’s Goddard Space Flight Center, in partnership with our government, industry, academic, and international partners, is doing to advance some of the necessary ISAM technologies needed for these missions. Software and hardware-in-the-loop simulations on the ground evaluate designs and prepare for and support on-orbit operations. Space-based demonstrations are conducted as part of the Robotic Refueling Mission and Raven on the International Space Station, as well as the On-orbit Servicing, Assembly, and Manufacturing-1 mission in Low Earth Orbit. An overview of these simulations and demonstrations will be discussed as part of the presentation along with some of the lessons learned.

11:10
In-Space Servicing as a Scientific Capability

ABSTRACT. As every Hubble-hugger knows, the ability to service that space observatory made it into a multi-generational telescope, with the longevity of major mountaintop observatories on the ground. But it was the ability to replace science instruments with new ones incorporating more capable designs and technologies that really kept Hubble on the cutting edge of astrophysics and planetary science for over three decades. The observatory was transformed with every servicing mission and Hubble today studies objects that weren’t even known to exist when the telescope was launched.

Arguably, in-space servicing and upgrading is a scientific capability in its own right, one which allows us to re-invent a mission after design and launch. NASA has embraced this philosophy for the Habitable Worlds Observatory, the future large space telescope recommended by the National Academies’ Astro2020 Decadal Survey. Furthermore, we may think about going beyond Hubble-style servicing into new mission development approaches that take advantage of in-space assembly and manufacturing. In this talk, I’ll give a recap of Hubble servicing and highlight the new science enabled after each servicing mission, then move on to opportunities and challenges for servicing Habitable Worlds Observatory. Finally, I’ll briefly discuss ideas for using in-space assembly and/or manufacturing to enable other transformative science missions.

10:15-11:45 Session 4C: Workshop: 4th Augmented, Virtual, and Mixed Realities
Location: Gates Annex
10:15
Workshop Introduction and Welcome

ABSTRACT. Workshop Introduction and Welcome

10:30
AR / VR and DE State of Industry: Where We Are and Where We Are Heading

ABSTRACT. This briefing will provide a broad overview of the state of the industry surrounding augmented reality, virtual reality, and the digital engineering ecosystem, specifically focusing on elements related to space system development. It will also offer a glimpse of where industry is heading within the next 5-10 years, what that means for the IT landscape, and how to prepare to leverage its benefits.

11:00
eXtended Reality for Enhanced Lunar Exploration Missions

ABSTRACT. The harsh nature and conditions of the lunar environment make robots indispensable for lunar activities. Future lunar missions will explore potentially hazardous areas, such as shaded regions and ice-rocky polar areas, which could jeopardize missions if rovers become immobilized or damaged. Due to this, present and future activities rely on close monitoring and human supervision throughout their operational cycle. Even if autonomous solutions are used, humans are still needed to supervise these operations, as mission safety is of utmost importance. Furthermore, to increase efficiency, reliability and security, lunar rovers carrying out these missions require humans in the loop to teleoperate them. However, teleoperating in space presents a significant challenge due to various limiting factors, including communication barriers, the unstructured nature of the environment, and limited resources. Additionally, current approaches fail to provide enough information such that the operators understand their environment and feel immersed inside it while teleoperating and exploring an area. Reliance on cameras aboard rovers for navigation in unfamiliar environments with poor or variable illumination requires much time and caution. Instead, virtual reconstruction of the environment using sensors, such as RGB-D cameras and LiDAR, could improve operator understanding and spatial awareness, leading to more efficient and robust teleoperation. To this end, we develop a system to enhance lunar teleoperation by implementing advanced monitoring and control methods, using eXtended Reality (XR) and Artificial Intelligence (AI) based on Robot Operating System (ROS) and the Unity3D game engine, see Figure 1. The proposed system is tested in an analogue lunar facility, the LunaLab at the University of Luxembourg, where the rover uses the RGB-D camera input for a visual SLAM algorithm, acting as our mapping tool to recreate a point cloud of the lunar environment. As the point cloud is procedurally generated, its data is sent to Unity3D for 3D reconstruction using a triangulation technique. The environment is mapped by an operator wearing a VR rig, which allows them to teleoperate the rover and access various telemetry data via a Graphical User Interface (GUI). Furthermore, the operator can inspect and move around the virtual 3D reconstructed environment while it is being procedurally generated. This approach effectively provides a higher degree of immersion than traditional solutions. Finally, AI models that detect rocks in the rover’s vicinity and terrain characteristics are used, permitting us to label the 3D reconstructed environment with additional data for enhanced operational awareness.

An explainable video using the developed technology with a real robot in a lunar analogue facility (LunaLab) is available at the following link: https://www.youtube.com/watch?v=XokCArhHUdQ
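As a minimal illustration of the reconstruction pipeline described in this abstract (a sketch under an assumed pinhole camera model, not code from the authors' system), the snippet below back-projects an RGB-D depth frame into a camera-frame point cloud; such points could then be handed to the triangulation step for 3D reconstruction in Unity3D.

import numpy as np

# Hypothetical pinhole intrinsics for an RGB-D camera (fx, fy, cx, cy in pixels).
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def depth_to_point_cloud(depth_m: np.ndarray) -> np.ndarray:
    """Back-project a depth image (meters) into an N x 3 point cloud
    in the camera frame. Invalid (zero) depths are dropped."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

# Example: a synthetic 480 x 640 depth frame of a flat wall 2 m away.
depth = np.full((480, 640), 2.0)
cloud = depth_to_point_cloud(depth)
print(cloud.shape)   # (307200, 3)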

10:15-11:45 Session 4D: SCC: Computing Architectures

Computing Architectures: reconfigurable computing systems, high performance space computing, fault-tolerant design, system on a chip and embedded memories, GPU-based computing, effective use of many-core processor platforms; heterogenous computing, in-memory computing, performance analysis, benchmarking.

10:15
Radically Advancing the Capabilities of Space-based Computing

ABSTRACT. The space compute solution NASA/JPL flies today has architectural origins dating back to the mid-1990s. With NASA/JPL missions on a trajectory to employ higher and higher levels of autonomy in the coming years to meet our science and exploration objectives, the current space compute solution is not capable of serving that trajectory well. This presentation covers the major limitations of current space compute solutions when viewed through the lens of powering the level of autonomy that next-generation space exploration is likely to require. Flipping that around, it also covers what we see as the key needs for a modern space compute solution, again with autonomy as the foundation.

10:45
High-Performance Embedded Space Computing System-on-Chip for space imaging spectrometer

ABSTRACT. System-on-a-chip (SoC) devices promise lighter, smaller, cheaper, more capable, and more reliable space electronic systems. This paper describes the focal plane interface electronics – digital (FPIE-D), a Xilinx Zynq-based data acquisition, cloud-screening, compression, storage, and downlink computing system developed by the Jet Propulsion Laboratory (JPL), Alpha Data, Correct Designs, and Mercury Systems for imaging spectrometers such as the NASA Earth Surface Mineral Dust Source Investigation (EMIT). EMIT is an imaging spectrometer that acquires 1280 cross-track by 328 band images at 216 images/sec. Following launch (14 July 2022), EMIT has been installed outside the International Space Station (ISS) and is collecting data from science targets in arid dust source regions of the Earth. EMIT will be used to study the mineral dust cycle, which has multiple impacts on the Earth system. The science objective of EMIT is to close the gap in our understanding of mineral dust heating and cooling impacts on the Earth, now and in the future, by determining the surface mineralogy of mineral dust sources.

The FPIE-D board design is based on a standard Alpha Data COTS Zynq7100 board in an XMC form factor. The FPIE-D Alpha Data hardware and components, including a Mercury Systems 440 GByte RH3440 Solid-State Data Recorder (SSDR), fit into a 280mm×170mm×40mm assembly. The FPIE-D peak power usage is 40 W. The computing element is a Xilinx Zynq Z7100, which includes a Kintex-7 FPGA and a dual-core ARM Cortex-A9 processor. The COTS board was re-spun to make it suitable for space (replacing components with space-grade equivalents) and to add features needed for the mission. The FPIE-D board is designed to be very flexible and is not specific to the EMIT mission. The FPIE-D assembly with its Zynq SoC controls the other assemblies on the EMIT instrument. The FPIE-D Zynq Processing System is responsible for running the flight software, which includes command & data handling, command & telemetry with the ISS over 1553, and science data downlink over a 7.4 Mbps Ethernet interface to the ISS. The Zynq Programmable Logic (PL) of the FPIE-D interfaces with the SSDR through a 3.125 Gbps Serial RapidIO interface. The SSDR alleviates the effect of two data rate bottlenecks in the FPIE-D system: data compression implemented on the Zynq PL with a data compression (input) rate of 370 Mbps, and the data transfer to the ISS at 7.4 Mbps.

The FPIE-D includes three processing elements implemented in the Zynq PL: (1) the Fast Lossless extended (FLEX) data compression block (a modified implementation of the CCSDS-123.0-B-2 recommended standard), which provides 3.4:1 lossless compression (compared to 16-bit samples obtained after co-adding) and 21 MSamples/sec throughput; (2) co-adding of two successive images so that shorter exposures can be used, helping to avoid saturation of the Focal Plane Array during acquisition; and (3) cloud detection and screening so that cloudy images can be dropped prior to compression, saving SSDR space and downlink time.
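As a toy illustration of the co-adding step described above (a sketch under assumed parameter values, not the flight FPGA implementation), the snippet below sums two successive short-exposure frames so that each individual exposure stays below an assumed saturation level while the 16-bit co-added sample is what gets compressed.

import numpy as np

SATURATION = 2**14   # assumed per-exposure full-well limit in DN (illustrative only)

def co_add(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Co-add two successive short exposures into one 16-bit sample.
    Each input frame uses a shorter integration time, so individual
    exposures stay below saturation while the sum preserves signal."""
    assert frame_a.dtype == frame_b.dtype == np.uint16
    summed = frame_a.astype(np.uint32) + frame_b.astype(np.uint32)
    return np.clip(summed, 0, 2**16 - 1).astype(np.uint16)

# Example: two 328-band x 1280-pixel frames of random counts below saturation.
rng = np.random.default_rng(0)
a = rng.integers(0, SATURATION, size=(328, 1280), dtype=np.uint16)
b = rng.integers(0, SATURATION, size=(328, 1280), dtype=np.uint16)
print(co_add(a, b).max())   # stays within the 16-bit range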

11:15
Sponsored Talk (Avalanche): True Mission Adaptability Enabled by Combination of new Adaptive SoCs and Idealized Memory Buffers

ABSTRACT. Small changes can have seismic impact. Recent advancements in microelectronics are enabling a transformation of satellite capabilities on par with the transition from the flip phone to the iPhone. The resulting adaptability and autonomy could be key to satellite network resilience against both natural and manmade threats.

10:45-11:45 Session 5: SCC: Components, Radiation, Packaging

Components, Radiation, and Packaging: emerging component, module, and packaging technologies that will advance space computing capabilities; radiation test methods for and results on complex components; use of COTS parts in high-reliability applications.

10:45
Microcircuit Standards for Space Missions

ABSTRACT. The NASA Electronic Parts Assurance Group (NEPAG) operates under the Mission Assurance Standards and Capabilities (MASC) division of the NASA Office of Safety and Mission Assurance (OSMA). All NASA missions, large or small, are important to mission assurance; the success of each mission counts. This presentation will describe the efforts underway in which NEPAG has worked with the DLA (Defense Logistics Agency), JC-13 (the manufacturers of government products), and CE-12 (the users of active devices) committees to ensure current military/aerospace standards address many challenges, one example being the insertion of new technology through the Class Y initiative. Class Y represents advancements in packaging technology, increasing functional density, and increasing operating frequency. The front-runner Class Y suppliers are offering functions such as processors, application-specific integrated circuits, and very high-speed analog-to-digital converters.

11:15
Sponsored Talk (Aril): A High Speed, Reliable, Low Voltage SRAM for Efficient Compute

ABSTRACT. Efficient computing is critical to optimize size, weight and power for next generation space missions. Operating the logic and SRAM technology at the same voltage and leveraging dynamic voltage and frequency scaling can deliver maximum performance and efficiency but is often limited by voltage capability, speed and reliability of the SRAM macros. A new SRAM technology, optimized on state-of-the-art CMOS technology, targets high speed, robust/reliable operation over a wide voltage range from 0.45V to 0.8V and above. The high speed, low voltage SRAM technology can be combined with efficient microarchitecture and optimized physical design to maximize performance per watt.

13:30-15:30 Session 6A: Workshop: Addressing Assurance Challenges in Space Autonomy
13:30
TBA

ABSTRACT. TBA

14:00
So you want to put a neural network in an airplane... Are you crazy?

ABSTRACT. Machine learning (ML) technologies are being investigated for use in the embedded software for manned and unmanned aircraft. ML will be needed to implement advanced functionality for increasingly autonomous aircraft and can also be used to reduce computational resources (memory, CPU cycles) in embedded systems. However, ML implementations such as neural networks are not amenable to verification and certification using current tools and processes. This talk will discuss current efforts to address the gaps and barriers to certification of ML for use onboard aircraft. We will discuss new verification and assurance technologies being developed for neural networks, including formal methods analysis tools, new testing methods and coverage metrics, and architectural mitigation strategies, with the goal of enabling autonomous systems containing neural networks to be safely deployed in critical environments. We will also discuss the new certification guidance that is under development to address the gaps in current processes. The overall strategy is to start with approvals of low-complexity and low-criticality applications, and gradually expand to include more complex and critical applications that involve perception.

14:30
Assurance of Learning-enabled Autonomous Systems

ABSTRACT. Significant advances have been made in the last decade in constructing autonomous systems, as evidenced by the proliferation of a variety of unmanned vehicles. These advances have been driven by innovations in several areas, including sensing and actuation, computing, and modeling and simulation, but most importantly deep machine learning, which is increasingly being adopted for real-world autonomy. In spite of these advances, deployment and broader adoption of learning techniques in safety-critical applications remain challenging. This talk will present some of the challenges posed by the use of these techniques for assurance of system behavior, and summarize advances made in DARPA’s Assured Autonomy program toward establishing trustworthiness at the design stage and providing resilience to the unforeseeable yet inevitable variations encountered during the operation stage. The talk will also discuss related work on creating frameworks for assurance-driven software development.

15:00
Operational Test and Evaluation for Safety-Critical Autonomous Systems: Progress, Challenges, and Opportunities

ABSTRACT. Safety certification of autonomous vehicles is a major challenge due to the complexity of the environments in which they are intended to operate. In this talk I will discuss recent work in establishing the mathematical and algorithmic foundations of test and evaluation by combining advances in formal methods for specification and verification of reactive, distributed systems with algorithmic design of multi-agent test scenarios and algorithmic evaluation of test results. Building on previous results in synthesis of formal contracts for performance of agents and subsystems, we are creating a mathematical framework for specifying the desired characteristics of multi-agent systems involving cooperative, adversarial, and adaptive interactions, developing algorithms for verification and validation (V&V) as well as test and evaluation (T&E) of the specifications, and performing proof-of-concept implementations that demonstrate the use of formal methods for V&V and T&E of autonomous systems. These results provide more systematic methods for describing the desired properties of autonomous systems in complex environments and new algorithms for verification of system-level designs against those properties, synthesis of test plans, and analysis of test results.

13:30-15:30 Session 6B: Workshop: Applications for In-Space Assembly and Servicing
Location: Noyes(Small)
13:30
Panel Discussion

ABSTRACT. Panel Discussion

14:30
Architectural Considerations for Servicing the Habitable Worlds Observatory

ABSTRACT. There are ongoing efforts to develop plans for launching and operating the Habitable Worlds Observatory approximately two decades from now. There is a desire to utilize emerging commercial servicing capabilities to robotically service and maintain the observatory. However, there currently are significant architectural questions regarding how the observatory should be built to facilitate servicing and how it can be effectively serviced. While engineering expertise and judgment will play a crucial role in making these decisions, current projections about future technological advancements may also impact these choices. There is a risk that the uncertainty surrounding the maturity of these technologies could lead to either exaggerated or underestimated claims about their impact. Hence, one perspective suggests that it could be advantageous to draw upon the experiences gained from the James Webb Space Telescope mission and make minimal modifications to its architecture when considering future servicing options. Conversely, the NASA In-space Assembled Telescope (ISAT) study suggests a contrasting approach to the observatory's architecture, emphasizing a more granular architecture by relying heavily on in-space robotic assembly. This presentation will introduce architectural aspects that seek to strike a balance between the projected servicing and assembly capabilities while considering the heritage of JWST, in order to address the unique challenges of the Habitable Worlds Observatory.

14:55
Possibilities and challenges for forging a path to serviceable observatories at L2

ABSTRACT. The Habitable Worlds Observatory (HWO) is NASA's response to the 2020 decadal survey, which called for a 6 m optical/UV/IR telescope to search for habitable exoplanets and to launch in the early 2040s. Astrophysics Division Chief Dr. Mark Clampin has laid out key tenets for success in developing HWO. He states that a key aspect of ensuring the funding and science performance of this future Great Observatory is on-orbit serviceability. Perhaps HWO could be much like a "mountain top" observatory, where the fundamental structure is in place for decades of life with expendable and upgradable systems. Upgrades to systems can allow for extended life in the harsh environment of space and for continued science relevancy. But serviceability must also serve the here and now, enabling ground assembly, test, and repair to achieve greater pre-launch efficiencies. What are the unique needs of servicing Great Observatories at L2? This talk will present possibilities and challenges for forging a path to serviceable observatories at L2.

13:30-15:30 Session 6C: Workshop: 4th Augmented, Virtual, and Mixed Realities
Location: Gates Annex
13:30
IMAP Spacecraft and Instruments in Virtual Reality

ABSTRACT. Launching in 2025, the Interstellar Mapping and Acceleration Probe (IMAP) mission investigates two of the most important issues in space physics today — the acceleration of energetic particles and interaction of the solar wind with the interstellar medium. In this talk, we present the IMAP VR software which provides an interactive experience demonstrating the spacecraft's science instruments with an animation of how it will collect data, visualized in a model of its operational space environment. The software combines real scientific data from IBEX and mechanical models of IMAP with VR technology to create a simulated environment for exploring and understanding the spacecraft's mission. The talk will discuss the design and implementation of the software, its educational and scientific applications, and its potential for advancing space exploration.

14:00
AR / VR and DE: Current Challenges to Adoption

ABSTRACT. As a precursor to our panel discussion, this presentation will outline the existing challenges of adopting AR / VR technologies, spatial computing, and Digital Engineering (DE) best practices. Using these broad challenges, we will transition into a panel discussion with selected speakers.

14:30
Group Discussion

ABSTRACT. Group Discussion

13:30-15:00 Session 6E: SCC: Components, Radiation, Packaging
13:30
Modernization of FPGA Risk Analysis for Critical Space Applications

ABSTRACT. New methods for characterizing FPGA performance and risk in space-radiation environments are presented. Application of the new methods is illustrated by walking through a NASA mission use case. A requirement for the mission is to “work through” a worst-week radiation environment with minimal ground intervention. Mitigation insertion becomes a necessity but is limited by device capacity. This presentation shows that old test and evaluation methods are insufficient, while the new methods provide better characterization and assistance for determining suitable design/mitigation strategies.

14:00
GR765 SPARC and RISC-V Multiprocessor System-on-Chip

ABSTRACT. The GR765 is a radiation-tolerant and fault-tolerant octa-core system-on-chip that is currently in development. During SCC 2021 we described the status of the architecture definition and the development of GR765 engineering samples. Since then, development has progressed, and several extensions have been made to the architecture. Most notably the GR765 is now a design that implements both the SPARC instruction set architecture and the RISC-V instruction set architecture. The selection between the two different processor core architectures is done through a bootstrap signal.

14:30
Lot-to-Lot Variability and TID degradation of Bipolar Transistors Analyzed with ESA and PRECEDER Databases

ABSTRACT. The NewSpace era has drastically increased the use of COTS (commercial-off-the-shelf) components to meet new requirements: lower costs, shorter lead times, and better performance. However, the radiation risks associated with non-radiation-hardened components are especially relevant in this context, so new approaches are needed to assure radiation hardness. This work presents standard and parameterized radiation databases and shows how they can be used to numerically assess the critical lot-to-lot variability in response to gamma radiation based on the coefficient of variation.
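For reference, the coefficient of variation referred to above is the ratio of the lot-to-lot sample standard deviation to the sample mean of a degraded parameter x (e.g., gain at a given total dose; the parameter choice here is only an example):

\[
\mathrm{CV} = \frac{\sigma_x}{\mu_x}, \qquad
\sigma_x = \sqrt{\tfrac{1}{N-1}\sum_{i=1}^{N}\left(x_i-\bar{x}\right)^2}, \qquad
\mu_x = \bar{x} = \tfrac{1}{N}\sum_{i=1}^{N} x_i
\]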

13:30-15:30 Session 6F: SCC: Computing Architectures
13:30
Performance Evaluation of the Radiation-Tolerant NVIDIA Tegra K1 System-on-Chip

ABSTRACT. Radiation-hardened (rad-hard) processors are designed to be reliable in extreme radiation environments, but they typically have lower performance than commercial-off-the-shelf (COTS) processors. For space missions that require more computational performance than rad-hard processors can provide, alternative solutions such as COTS-based systems-on-chips (SoCs) may be considered. One such SoC, the NVIDIA Tegra K1 (TK1), has achieved adequate radiation tolerance for some classes of space missions. Several vendors have developed radiation-tolerant single-board computer solutions targeted primarily for low Earth orbit (LEO) space missions that can utilize COTS-based hardware due to shorter planned lifetimes with lower radiation requirements. With an increased interest in space-based computing using advanced SoCs such as the TK1, a need exists for an improved understanding of its computational capabilities. This research study characterizes the performance of each computational element of the TK1, including the ARM Cortex-A15 MPCore CPU, the NVIDIA Kepler GK20A GPU, and their constituent computational units. Hardware measurements are generated using the SpaceBench benchmarking library on a TK1 development board. Software optimizations are studied for improved parallel performance using OpenMP for CPU multithreading, ARM NEON for single-instruction multiple-data (SIMD) operations, Compute Unified Device Architecture (CUDA) for GPU parallelization, and optimized Basic Linear Algebra Subprograms (BLAS) software libraries. By characterizing the computational performance of the TK1 and demonstrating how to optimize software effectively for each computational unit within the architecture, future designers can better understand how to successfully port their applications to COTS-based SoCs to enable improved capabilities in space systems. Experimental outcomes show that both the CPU and GPU achieved high levels of parallel efficiency with the optimizations employed and that the GPU outperformed the CPU for nearly every benchmark, with single-precision floating-point (SPFP) operations achieving the highest performance.
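To illustrate why optimized BLAS libraries matter in this kind of characterization (an illustrative desktop sketch, not the SpaceBench benchmark itself), the snippet below times a naive triple-loop matrix multiply against a BLAS-backed NumPy product.

import time
import numpy as np

N = 128                                          # small size so the naive loop finishes quickly
A = np.random.rand(N, N).astype(np.float32)      # single precision, as in the SPFP results
B = np.random.rand(N, N).astype(np.float32)

def naive_matmul(A, B):
    """Unoptimized triple loop: no SIMD, no cache blocking, no threading."""
    C = np.zeros((N, N), dtype=np.float32)
    for i in range(N):
        for j in range(N):
            s = 0.0
            for k in range(N):
                s += A[i, k] * B[k, j]
            C[i, j] = s
    return C

t0 = time.perf_counter(); naive_matmul(A, B); t1 = time.perf_counter()
t2 = time.perf_counter(); A @ B;              t3 = time.perf_counter()
print(f"naive loop: {t1 - t0:.3f} s, BLAS-backed: {t3 - t2:.6f} s")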

14:00
Performance modeling of a heterogeneous computing system based on the UCIe Interconnect Architecture

ABSTRACT. In a heterogeneous computing environment, there exists a variety of computational units such as multicore CPUs, GPUs, DSPs, FPGAs, analog modules, and ASICs. IP vendors, engineers, and scientists working with heterogeneous computing systems face numerous challenges, including integration of IP cores and components from different vendors, system reliability, hardware-software partitioning, task mapping, the interaction between compute and memory, and reliable communication. For advanced designs, the industry typically develops a system-on-a-chip (SoC), where different functions are shrunk at each node and packed onto a monolithic die. But this approach is becoming more complex and expensive at each node. Another way to develop a system-level design is to assemble complex dies in an advanced package; chiplets are a way of modularizing that approach. Chiplets can be combined with other chiplets on an interposer in a single package. This provides several advantages over a traditional system-on-chip (SoC) or integrated board in terms of reusable IP, heterogeneous integration, and verifying die functional behavior. In our work, a system-level model composed of chiplets (an IO chiplet, a low-power core chiplet, a high-performance core chiplet, an audio/video chiplet, and an analog chiplet) is interconnected using the Universal Chiplet Interconnect Express (UCIe) standard. We looked at different scenarios and configurations, including advanced and standard packages, different traffic profiles, sizing of resources, and a retimer to extend the reach and evaluate events on timeout. We were able to identify the strengths and weaknesses of the UCIe interconnect in the scope of mission applications and obtain the optimal configuration for each of the subsystems to meet the performance, power, and functional requirements.

14:30
Sponsored Talk (Star-Dundee): SpaceFibre for Spaceflight Payload Data-Handling and Efficient Inter-Processor Communication

ABSTRACT. SpaceFibre is a high-performance, high-reliability and high-availability datalink and network technology designed specifically for demanding payload data-handling applications. The capabilities and characteristics of SpaceFibre will be described and a typical application architecture summarised. With the recent addition by STAR-Dundee of Remote Direct Memory Access (RDMA) capabilities, SpaceFibre is also suitable for low-overhead inter-processor and multi-processor communications. SpaceFibre RDMA will be introduced.

15:00
Optimization of ARM Processor Architecture for Space Computing Performance with SAMRH7x MCU Family

ABSTRACT. The constant increase in complexity of space applications leads to the ongoing research of more powerful computation capabilities. This includes more integrated system-on-chip (SoC) solutions. There is a strong push focused on increasing computing performance while integrating the main peripheral functions of the processing solutions embedded in the aerospace systems.

Conventional thinking places the processor IP core performance as the primary concern, and therefore the temptation is to increase the number of cores. However, is the inherent performance of the controller the main driver of overall system performance? This presentation is intended to highlight that processor MIPS capability is not the only factor in an application, and that an advanced architecture can provide much more benefit for optimizing system performance than the pure computing capability of a processor.

15:00-15:30 Session 7: SCC: Extreme Environments
15:00
Silicon-Carbide Hybrid One-Bit Microcontroller in a Transport-Triggered Architecture

ABSTRACT. Silicon (Si)-based semiconductor microcomputing has been core to manned and unmanned exploration of the solar system. However, Si-based devices are limited in that they cannot adequately function in extreme high-temperature and radiation environments. In contrast, Silicon Carbide (SiC) semiconductor electronic devices have the potential to bring electronics functionality to extreme radiation and temperature environments physically beyond the reach of Si semiconductor devices. In particular, SiC integrated circuits based on Junction Field Effect Transistor (JFET) technology have produced the world’s first microcircuits of moderate complexity to demonstrate sustained operation at 500 °C [1]. A major distinguishing aspect of this NASA Glenn Research Center (GRC) JFET Integrated Circuit (IC) work is the long-term durability (greater than a year) of these circuits and packaging at 500 °C [2] and for 60 days in simulated Venus surface conditions [3]. Other NASA GRC work has shown operation of circuits across a total temperature range from low (-190 °C) to high (961 °C) temperatures, a span of more than 1000 °C [4]. Further, TID radiation testing to 7 Mrad(Si) without failure was conducted on an earlier generation of legacy SiC JFET logic chips [5]. No other IC approach has accomplished this level of high-temperature durability, even for less complicated circuits. These properties enable the potential for improved capability for exploration across the solar system, from Ocean Worlds to the interiors of gas giants to the surfaces of Venus or Mercury. Although these SiC electronics are presently comparable in complexity to standard-environment commercial electronics of the ~1970s, such electronics nevertheless enabled historic breakthroughs during the Viking and Voyager missions.

The maturity of SiC electronics has now advanced to where a SiC microprocessor with unprecedented extreme-environment durability can be built. NASA GRC is presently prototype-fabricating a next-generation family of SiC microcircuits. Specifically, this fabrication run, denoted “Gen. 12”, aims to produce the first practical digital processing chipset hardened for durable operation in broad-temperature-range, high-radiation environments. This paper describes that microprocessor. The limited component availability and complexity of this prototype chipset mandate an ultra-simple, augmentable computing topography. The approach is to use design methods implemented in earlier electronics. The result is a hybrid programmable logic controller (PLC)-like logic core operating inside a transport-triggered architecture. Programmable logic controllers from the 1960s-1970s were configurable devices used in industry to replace hardwired relationships between sensors and actuators with software-based reading of those sensors and software-based commanding of actuation. In particular, this SiC microprocessor design implements an efficient foundational 8-bit Transport-Triggered Architecture (TTA) processor core with a 1-level stack that is designed to be packaged and interfaced with SiC Read-Only Memory (ROM), Random-Access Memory (RAM), and other supporting peripheral SiC ICs also being prototyped in the Gen. 12 processing run. The physical IC layout of the microprocessor uses “Gen. 12” process rules [6] and is spread across two separate chip designs that will be interconnected into a microprocessor unit at the package/board level.
An overview of the design, fabrication, and testing of this microprocessor will be provided. It is concluded that while this microprocessor provides unique and game-changing capabilities, future generations of this technology will enable even more capable tools for planetary exploration with significant terrestrial applications.

1. NASA Glenn Research Center, Silicon Carbide Electronics and Sensors technical publications, https://www1.grc.nasa.gov/research-and-engineering/silicon-carbide-electronics-and-sensors/technical-publications/
2. D. J. Spry, P. G. Neudeck, L. Chen, D. Lukco, C. W. Chang, G. M. Beheim, M. J. Krasowski, and N. F. Prokop, “Processing and Characterization of Thousand-Hour 500 °C Durable 4H-SiC JFET Integrated Circuits”, Additional Conferences (Device Packaging, HiTEC, HiTEN, & CICMT), May 2016, Vol. 2016, No. HiTEC, pp. 000249-000256. http://dx.doi.org/10.4071/2016-HITEC-249
3. P. Neudeck, L. Chen, R. D. Meredith, D. Lukco, D. J. Spry, L. M. Nakley, and G. W. Hunter, “Operational Testing of 4H-SiC JFET ICs for 60 Days Directly Exposed to Venus Surface Atmospheric Conditions”, IEEE Journal of the Electron Devices Society, vol. 7, pp. 100-110, 2018.
4. P. G. Neudeck, D. J. Spry, M. J. Krasowski, N. F. Prokop, and L. Chen, “Demonstration of 4H-SiC JFET Digital ICs Across 1000 °C Temperature Range Without Change to Input Voltages”, Materials Science Forum, vol. 963, pp. 813-817, 2019.
5. J. M. Lauenstein, P. G. Neudeck, K. L. Ryder, E. P. Wilcox, L. Y. Chen, M. A. Carts, S. Y. Wrbanek, and J. D. Wrbanek, “Room Temperature Radiation Testing of a 500 °C Durable 4H-SiC JFET Integrated Circuit Technology”, Proceedings of the 2019 Nuclear and Space Radiation Effects Conference Data Workshop, San Antonio, TX, 2019. https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8906528
6. P. Neudeck and D. Spry, “Graphical Primer of NASA Glenn SiC JFET Integrated Circuit (IC) Version 12 Layout”, 2019. https://ntrs.nasa.gov/citations/20190025716
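To convey the flavor of the transport-triggered architecture described in the abstract above (an illustrative toy model only, not the Gen. 12 design), the sketch below simulates a machine whose only instruction is a move: writing to a function unit's trigger port is what causes an operation to execute.

class AdderFU:
    """Function unit with an operand port and a trigger port; a write to the trigger executes."""
    def __init__(self):
        self.operand = 0
        self.result = 0
    def write(self, port, value):
        if port == "operand":
            self.operand = value
        elif port == "trigger":          # transporting data to the trigger port starts the add
            self.result = self.operand + value

def run(program, regs, fu):
    """Execute a list of (src, dst) moves; data transports are the entire instruction set."""
    for src, dst in program:
        value = regs[src] if src in regs else fu.result
        if dst in regs:
            regs[dst] = value
        else:
            _unit, port = dst.split(".")
            fu.write(port, value)

regs = {"r0": 5, "r1": 7, "r2": 0}
fu = AdderFU()
# r2 = r0 + r1, expressed purely as data transports:
program = [("r0", "adder.operand"), ("r1", "adder.trigger"), ("adder.result", "r2")]
run(program, regs, fu)
print(regs["r2"])   # 12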

15:45-16:45 Session 8A: Workshop: Addressing Assurance Challenges in Space Autonomy
15:45
TBA

ABSTRACT. TBA

16:15
Closing Remarks Day 1

ABSTRACT. Closing Remarks Day 1

15:45-16:45 Session 8B: Workshop: Applications for In-Space Assembly and Servicing
Location: Noyes(Small)
15:45
Increasing Mission Success and Service Life through Robotic Servicing

ABSTRACT. The promise of recoverable missions, increased performance through upgrades, and lengthened service life through in-flight servicing has been proven on the Hubble Space Telescope and the International Space Station. In both cases, crew and robotics have recovered the mission from failures, improved performance, and enabled a 30+ year productive mission life. Robots played a small but critical part in the Hubble servicing missions, but their use and operational maturity have grown significantly over the life of the ISS. Safely operated by personnel on the ground, all ISS dexterous servicing operations are performed in a supervised autonomous fashion, including ‘unprepared’ servicing tasks such as refueling with legacy fill/drain valves and mating 38999 power and data connectors. This briefing illustrates the maturation of servicing robotic operations beginning with Hubble, through ISS and Orbital Express, and looking ahead to Gateway. It provides examples of the servicing capabilities available to observatory designers that deliver a reliable method for recovering from surprises, lengthening productive life, and opportunistically increasing performance.

16:10
Utilizing Standardized Interfaces for Servicing Space Missions

ABSTRACT. The Space Shuttle missions and the subsequent assembly of the International Space Station (ISS) provide myriad examples and benefits of designing space assets for serviceability. The benefits of serviceability are more easily realized with the use of standard interfaces. Common interfaces on the ISS permit the transfer of critical resources on-orbit, from Orbital Replacement Units (ORUs) comprising power bays, batteries, and instruments, to refueling and free-flyer capture.

The Consortium for Execution of Rendezvous and Servicing Operations (CONFERS) is an industry-led initiative that identifies and leverages best practices from government and industry to develop standards for In-Space Servicing, Assembly, and Manufacturing (ISAM).

CONFERS Technical Working Groups (CTWGs) identify proven interfaces and the resources transferred at each interface, and then develop standards and guidelines for implementation. While heritage interfaces continue to be used on current and future missions, the international collaboration fostered by CONFERS permits the modification and adaptation of such interfaces, given the foundation of knowledge and the establishment of standards for flight readiness and mission success.

The Next Great Observatory could leverage the commonality and standards developed by CONFERS in order to ensure the science community is a primary beneficiary of ISAM advancements.

16:35
Ending Remarks

ABSTRACT. Ending Remarks

15:45-16:45 Session 8C: Workshop: 4th Augmented, Virtual, and Mixed Realities
Location: Gates Annex
15:45
Workshop Recap, Closing Remarks, & Action Items

ABSTRACT. Workshop Recap, Closing Remarks, & Action Items

16:00
Demo of VR Spacecraft Magnetic Field Visualization

ABSTRACT. This demonstration will showcase a prototype VR tool that visualizes spacecraft magnetic fields for Europa Clipper and Psyche. Understanding a spacecraft's magnetic field is important for electromagnetic compatibility work, so that scientific sensors (e.g., magnetometers) can successfully read signals of interest. While previous approaches visualize spacecraft magnetic fields as static 2D images or animated videos, this tool allows real-time, interactive VR viewing of the field. The tool allows users to adjust several parameters for how the field line tracing algorithm runs, including adjusting starting points and placing new magnetic sources.
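As a minimal example of the kind of field line tracing such a tool performs (an assumption about the general approach, not the Europa Clipper or Psyche code), the sketch below steps a line along the direction of a superposed dipole field from a user-chosen starting point.

import numpy as np

def dipole_field(r, moment, center):
    """Magnetic field of a point dipole at position r (physical constants dropped)."""
    d = r - center
    dist = np.linalg.norm(d)
    return 3.0 * d * np.dot(moment, d) / dist**5 - moment / dist**3

def trace_field_line(start, sources, step=0.01, n_steps=2000):
    """Follow the field direction from `start`; `sources` is a list of (moment, center)."""
    line = [np.asarray(start, dtype=float)]
    for _ in range(n_steps):
        b = sum(dipole_field(line[-1], m, c) for m, c in sources)
        norm = np.linalg.norm(b)
        if norm == 0:
            break
        line.append(line[-1] + step * b / norm)   # unit-length step along B
    return np.array(line)

# Example: one magnetic source near the spacecraft origin, traced from a sensor location.
sources = [(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 0.0]))]
line = trace_field_line([0.5, 0.0, 0.2], sources)
print(line.shape)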

17:00-17:45 Session 9: Sponsored Talk: Microchip

Title: Enabling Scalable Computing Solutions for Space Applications with the Microchip portfolio

Abstract: 

As a leading provider of microcontrollers, microprocessors and FPGAs for space applications, Microchip is at the center of spacecraft systems, providing advanced space-qualified computing solutions. This presentation will provide an overview of our scalable Microchip portfolio capable of supporting space applications from the traditional space missions with extreme radiation and quality requirements, to new space solutions which require flexibility and cost efficiency.

Speakers: TBC

17:45-19:00 Session 10: Reception/Poster Session

Poster Session

Location: Dabney
Voice Control System for Human Spaceflight

ABSTRACT. Voice control has been identified as a potential human-computer interface (HCI) candidate for human spaceflight. It may be a viable HCI solution to address the risks that the Human Research Program has identified for future space missions: small crews, complex systems to control, and long communication delays with mission control. At NASA, voice control is being investigated to aid crew members in extravehicular activity (EVA) procedures, automate command/control environments, interface with RFID chips to query for misplaced items, support space-to-ground communication, provide speech-to-text transcription, and substitute for ground support. While speech recognizers are becoming more common in Earth-bound environments, space flight presents a unique and challenging physical and acoustical environment. Due to the time delays to and from the Moon, and even more so Mars, terrestrial-style server farm solutions are not possible in space. Rather, every artificially intelligent system must be 100% offline in a cloudless solution, where 100% offline means not dependent on a server farm such as those used for Siri, Alexa, or Google.

Not only is a lack of a cloud-based speech control infrastructure a challenge to overcome in space flight, but current and future spaceflight missions do not have the luxury of extensive network bandwidth as we do on Earth. Network constraints requiring minimum bandwidth consumption often result in lower quality voice communications. Voice communication is deemed a critical component in every human spaceflight mission, yet crew member speech that has been compressed and coupled with various acoustic background noise can be difficult to understand. If compressed and noisy audio is processed by a speech recognition system, it may be prone to errors.

When building voice control for human spaceflight, a system needs to denoise crew member speech that includes high levels of background noise, both stationary and non-stationary, and to adapt to reverberation in different environments. Reverberation varies between space suits with helmet bubbles, tight spaces inside rovers, and crew habitats constructed on different planets, making it difficult for common speech recognition to adapt to these diverse acoustical environments. Current commercially available speech recognition engines are not tuned to spacecraft acoustic environments. To make matters more difficult, there is typically limited training data available, which is necessary to improve performance within spacecraft and spacesuit systems. For extravehicular activities, a helmet creates a challenging reverberant acoustical environment that can smear spoken words, making it problematic for the speech recognition system to understand what was said. Therefore, commands, acronyms, background noise, and other acoustical factors make voice control in human spaceflight a unique problem to overcome.

Finally, another key challenge to overcome is radiation hardness. An on-premise speech recognition solution that is offline and works on Earth will not necessarily work in space because of radiation effects on the central processing unit (CPU). Current radiation-tolerant CPUs pose a challenge for voice control due to their lack of processing power and speed for a query to be processed.

This presentation will cover key challenges in developing voice control applications for human spaceflight. It will then describe the efforts under way by various teams at the Johnson Space Center to find viable solutions for voice control in deep space missions. Finally, it will explain why voice control is imperative to human spaceflight if humankind wants to engage in deep space missions. For instance, a spaceflight-specific voice control system could be most useful when ground communication has a long latency or is nonexistent and crew members need artificially intelligent aid or a ground support substitute to perform long-duration missions.

AI and Data-Driven In-situ Sensing for Space Digital Twin

ABSTRACT. The formation and evolution of giant planets define the dominant characteristics of our planetary system. Giant-planet exploration can improve our understanding of heat flow, radiation balance, and chemistry, and can serve as ground truth for exoplanets. The atmospheres of giant planets are larger and, in many respects, simpler than that of Earth. Studying giant planets' atmospheres and environments can provide laboratories for the fundamental physical and dynamical processes of the Earth's atmosphere. On the other hand, exploring the relevant environments that affect the Earth's atmosphere can help us develop a sound technical and scientific basis for giant planets. In particular, climate change on Earth is central to the question of understanding the roles of physics, geology, and dynamics in driving atmospheres and climates on Jupiter.

While the Juno mission has significantly enhanced our understanding of the Jovian atmosphere in every orbit through remote sensing, in-situ observations are essential for validating the models, studying the composition, and capturing the dynamic processes of gas giants. The singular in-situ observation made by the Galileo Probe in 1995 disagrees substantially with standard Jovian atmospheric models, leading scientists to believe the entry site may have been one of the least cloudy areas on Jupiter. This suspicion exposes a strong limitation of free-fall probing. A logical next step, then, would be a mission with actively controllable probes that stay in the atmosphere of a gas giant for an extended duration. The 2016 NIAC Phase I study on "WindBots" by Stoica et al. found that an adaptive wing glider and a lightweight, quasi-buoyant vehicle would be viable options by obtaining lift from updrafts; however, autonomously navigating in the highly uncertain and turbulent flow field remains a major challenge.

The closest terrestrial analog to Jupiter's atmosphere is the tropical cyclone (TC) on Earth. Although the formation mechanism of a TC is very different from that of the Great Red Spot on Jupiter, the resulting phenomena, characterized by a highly turbulent and strong wind field as well as a localized and dynamic nature, are similar, providing a suitable test ground for future missions to Jupiter's atmosphere. This paper develops autonomous small unmanned aircraft system (sUAS) in-situ TC sensing through a simulated environment of cooperative control of distributed autonomous multi-agent systems deployed into the eye of a TC. The main objective is to test various observing systems from single and multiple sUAS platforms close to the eyewall of the storm to capture essential measurements to be used in explaining the Jovian environment. Preliminary results demonstrated successful sUAS flight optimization for maximizing improvement in the quality of key measurements (e.g., three-dimensional wind velocity, pressure, temperature, and humidity). Simulated sUAS flight toward the inner core of the TC boundary layer made high-resolution meteorological observations and supplemented existing partial knowledge for a better estimate of TC intensity. This was a valuable addition to the task of testing sUAS technology, but few methods have focused on the critical region of the storm environment, and no data-driven optimal system design had been developed to gain more information by exploring target locations. Anticipatory sUAS routing lowered overall energy usage and maximized the reduction of forecasting error by exploring and sampling unobserved cells along the path to a target location.

This paper analyzes how online updating from sUAS-collected meteorological data would benefit hurricane intensity forecasting, considering the temporal variation of the uncertainty. Unobserved heterogeneity and randomness in the data produce multiple modes in the probability distribution at each location. A huge collection of granular microscopic data from multiple sources at each location may result in the loss of multivariate information if not retrieved properly. It is important to quantify the uncertainty of the prior belief and update the posterior when critical observations are obtained. However, traditional entropy theory cannot handle (i) sequential learning of multimodal multivariate information; (ii) dynamic spatiotemporal correlation; and (iii) the importance of observations for posterior approximation. In this paper, we advance autonomous in-situ sensing under highly uncertain and turbulent flow through multivariate multimodal learning, analyzing the similarities between the different types of mixture distributions of multiple variables and allocating a cluster to each group across high-dimensional time and space. Specifically, this approach can track the structural information flow of temporal multivariate multimodal correlation data and automatically update the posterior, with weights on the new observations, through an iterative process. Extensive experiments on hurricane ensemble forecasting data demonstrate the superior performance of our method over state-of-the-art baselines across various settings.
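As a simple illustration of the entropy bookkeeping described above (illustrative only; the paper's multimodal multivariate formulation is more general, and all numbers here are assumptions), the sketch below discretizes a bimodal belief over wind speed at one cell and measures how much Shannon entropy a single noisy observation removes.

import numpy as np

grid = np.linspace(0.0, 80.0, 401)                      # wind speed bins, m/s

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Bimodal prior belief: either moderate or eyewall-strength winds.
prior = (0.6 * np.exp(-0.5 * ((grid - 25.0) / 5.0) ** 2)
         + 0.4 * np.exp(-0.5 * ((grid - 60.0) / 6.0) ** 2))
prior /= prior.sum()

# One noisy in-situ measurement (assumed Gaussian sensor model, sigma = 4 m/s).
likelihood = np.exp(-0.5 * ((grid - 57.0) / 4.0) ** 2)
posterior = prior * likelihood
posterior /= posterior.sum()

print(f"entropy before: {entropy(prior):.2f} bits, after: {entropy(posterior):.2f} bits")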

Sequential Deep Learning for Mars Autonomous Navigation

ABSTRACT. Recent advances in computer vision for space exploration have handled prediction uncertainties well by approximating the multimodal output distribution rather than averaging it. While these advanced multimodal deep learning models could enhance the scientific and engineering value of autonomous systems by making optimal decisions in uncertain environments, sequential learning of that approximated information has depended on unimodal or bimodal probability distributions. In a sequence of information learning and transfer decisions, traditional reinforcement learning cannot accommodate the noise in the data that could be useful for gaining information about other locations, and thus cannot handle multimodal and multivariate gains in its transition function. Still, there has been little attention to learning and transferring multimodal space information effectively so as to maximally remove uncertainty. In this study, a new information theory overcomes the traditional entropy approach by actively sensing and learning information in a sequence. In particular, the autonomous navigation of a team of heterogeneous unmanned ground and aerial vehicle systems on Mars outperforms benchmarks through indirect learning.

Formulating a cost function with an appropriate valuation of information is necessary when knowledge of the information at one time and/or place gives conditional attributes of the information at another time and/or place. This model, Sequential Multimodal Multivariate Learning (SMML), outputs informed decisions conditioned on the cost of exploration and the benefit of uncertainty removal. For instance, given an observable input, SMML is trained to infer a posterior from samples taken from the same multimodal and multivariate distribution, approximate gains, and make optimal decisions. The utility is the usual metric to be optimized based on the difference between the prior and posterior tasks, and it tells us how well the model improves the data distribution after observation. Recall that, in general, it does not suffice to learn from average values, as in the standard reinforcement learning problem, to solve this kind of task; for this reason, SMML extends the capabilities of deep learning models for reinforcement learning whose reward for each action is restricted to unimodal or univariate distributions. In highly uncertain conditions, this reduction of entropy is vital to any optimization platform employed in robust, efficient, autonomous exploration of the search space. To overcome Shannon's limitation in multimodal learning, we consider both standard deviation and entropy. We target cells with the highest importance of information, distinguishing two cells with identical entropy but different values of information.

Predictive routing is effective for knowledge transfer; however, it ignores information gained from probability distributions with more than one peak. Consider a network with a grid laid on top, where each cell represents a small geographical region. To find an optimal route from an origin cell to a destination, forecasting the condition of intermediate cells is critical. The routing literature has not used observed data at one location to forecast conditions at distant, non-contiguous, unobserved locations. We aggregate the data from all grid cells and cluster cells that have similar combinations of probability distributions. When one cell of a cluster is explored, the information gained from it can partially remove uncertainty about the conditions in distant, non-contiguous, unexplored cells of the same cluster. With this new framework, we explore the best travel options using partial, sequential, and mixture information gains.

We use observations obtained en route to infer the most likely conditions at unobserved locations. While distant unobserved locations may not share any inherent correlation with locally observed ones, classification errors by the image classifier may be correlated with certain image features found in different locations on the Martian surface. By clustering pixels with similar classifications, we gather evidence en route that either supports or fails to support the hypothesis that the image classifier is correctly classifying the different terrain types. The two-step process (clustering, then posterior update) updates the state estimates of unobserved locations when navigating the Jezero region of Mars, where a prior belief is provided but contains high uncertainty. In 55 out of 100 Monte Carlo simulation runs, the optimal expected-travel-time path based on the prior SPOC map resulted in the rover becoming stuck due to misclassification. Real-world misclassification rates are expected to be lower than those of the ground-truth map used in this research. Excluding runs in which the rover would have been stuck using the prior map, the posterior improved the median travel time by 1 hour, the worst-case outlier travel time by 2.32 hours, and the 75th-percentile travel time by 1.61 hours. Over a mission spanning many months, this saving adds significant additional time for scientific experiments.
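A minimal sketch of the two-step (clustering, then posterior update) idea is shown below: grid cells are clustered by their prior terrain-class probabilities, and when one cell of a cluster is traversed and its true class observed, the belief for the remaining cells of that cluster is shifted toward the observed class. The class probabilities, number of clusters, and blending weight are hypothetical values for illustration, not the values or update rule used in the paper.

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical prior terrain-class probabilities for four grid cells
    # (columns: sand, bedrock, rough), standing in for a SPOC-like map.
    prior = np.array([[0.70, 0.20, 0.10],
                      [0.68, 0.22, 0.10],
                      [0.10, 0.80, 0.10],
                      [0.12, 0.78, 0.10]])

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(prior)

    def update_cluster(prior, labels, observed_cell, observed_class, weight=0.5):
        """Shift the belief of every cell in the observed cell's cluster
        toward the observed class (blending weight is an assumption)."""
        post = prior.copy()
        onehot = np.eye(prior.shape[1])[observed_class]
        for i in np.where(labels == labels[observed_cell])[0]:
            post[i] = (1 - weight) * prior[i] + weight * onehot
        return post

    print(update_cluster(prior, labels, observed_cell=0, observed_class=2).round(2))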

Development of a Nanosatellite System Modeling Architecture for EIRSAT-1

ABSTRACT. This paper discusses the adaptation of Model-Based Systems Engineering (MBSE) to CubeSat lifecycle development. This adaptation involves transitioning from traditional satellite design practices to a model-based design approach by developing the system models required for analysis, trade-offs, and verification and validation (V&V). This approach has been applied to the launch-ready CubeSat EIRSAT-1, which has been designed and developed by students and staff at University College Dublin (UCD), Ireland. The model contains several key features and components of EIRSAT-1. This poster presents the integration and verification of the communication link between the spacecraft and the Ground Segment (GS) within an MBSE framework.

Benchmark Computer Performance for Wavefront Sensing and Control on Next Generation Space Telescopes

ABSTRACT. Future planned space telescopes, such as the HabEx and LUVOIR telescope concepts and the recently proposed Habitable Worlds Observatory, will use high-contrast imaging and coronagraphy to directly image exoplanets for both detection and characterization. Such instruments will achieve the ~10^10 contrast level necessary for Earth-like exoplanet imaging by controlling thousands of actuators and sensing with thousands of pixels. Updates to the wavefront control actuators will need to be computed within seconds, placing unprecedented requirements on the real-time computational ability of radiation-hardened processors. In this work we characterize the wavefront sensing and control algorithms and estimate their performance based on publicly available benchmarks for currently available space-rated processors.
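As a rough worked example of why these requirements are demanding, the sketch below estimates the sustained throughput needed if a regularized least-squares control update is re-solved each cycle rather than applied through a precomputed control matrix; the actuator and pixel counts and the one-second cadence are illustrative assumptions, not numbers from the paper.

    # Back-of-envelope throughput estimate (all numbers are illustrative assumptions).
    n_act = 2 * 48 ** 2     # actuators on two ~48x48 deformable mirrors
    n_pix = 4000            # dark-hole pixels used for wavefront sensing
    cadence_s = 1.0         # assumed target: one control update per second

    # Dominant cost of re-solving the dense regularized least-squares problem,
    # approximated as ~2 * n_pix * n_act**2 flops to form the normal equations.
    flops_per_update = 2 * n_pix * n_act ** 2
    print(f"~{flops_per_update / cadence_s / 1e9:.0f} GFLOP/s sustained")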

Coverage-guided State Space Exploration of Autonomous Cyber-Physical Systems

ABSTRACT. Autonomous Cyber-Physical Systems (CPS) play a substantial role in many domains, such as aerospace, transportation, critical infrastructure, and industrial manufacturing. However, despite the popularity of autonomous CPS, their susceptibility to errant behavior is a considerable concern for safety-critical applications.

Testing and simulation are the most common methods used in practice to ensure the correctness of autonomous CPS because of their ability to scale to complex systems. In many domains, CPS complexity has been growing exponentially and will continue to expand due to the rapid integration of machine learning components and increasing levels of autonomy, as in unmanned aerial vehicles and self-driving cars.

Traditional software test methodologies that depend extensively on code coverage are expensive, difficult to manage, and ineffective in verifying CPS behavior. Moreover, these methodologies lack flexibility when dynamic CPS control requirements and plant parameters evolve through continuous state space and time.

We investigate ways to improve automated test case generation for autonomous CPS through Coverage-Guided State Space Exploration, which systematically generates trajectories to explore desired (or undesired) outcomes. For this, we introduce a novel coverage metric and integrate it with various techniques, such as fuzz testing and model predictive control (MPC), to generate test cases.
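The toy Python sketch below illustrates the general idea (not the paper's metric or tooling): the state space of a simple closed-loop plant is discretized into cells, candidate inputs are mutated fuzz-style, and an input is retained as a test case only if its trajectory reaches a previously uncovered cell.

    import random

    def simulate(gain):
        """Toy 1-D plant under a proportional controller; returns the state trajectory."""
        x, traj = 1.0, []
        for _ in range(50):
            x = x - gain * x + random.gauss(0, 0.02)   # noisy closed-loop step
            traj.append(x)
        return traj

    def cell(x, size=0.1):
        """Discretize a state value into a coverage cell."""
        return round(x / size)

    covered, corpus = set(), [0.1]
    for _ in range(200):                               # fuzz loop: mutate a seed gain
        g = max(0.0, random.choice(corpus) + random.gauss(0, 0.05))
        new_cells = {cell(x) for x in simulate(g)} - covered
        if new_cells:                                  # keep inputs that increase coverage
            covered |= new_cells
            corpus.append(g)
    print(f"{len(covered)} state cells covered with {len(corpus)} retained test cases")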

Lot-to-Lot Variability and TID degradation of Bipolar Transistors Analyzed with ESA and PRECEDER Databases

ABSTRACT. The NewSpace era has drastically increased the use of COTS (commercial-off-the-shelf) components to meet new requirements: lower costs, shorter lead times, and better performance. However, the radiation risks associated with non-radiation-hardened components are especially relevant in this context, so new approaches are necessary to address this challenge and assure radiation hardness. This work presents standard and parameterized radiation databases and shows how they can be used to numerically assess the critical lot-to-lot variability in response to gamma radiation, based on the coefficient of variation.
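For illustration, the lot-to-lot coefficient of variation can be computed as the standard deviation of the per-lot means divided by the overall mean; the post-irradiation gain values below are invented placeholders, not data from the ESA or PRECEDER databases.

    import numpy as np

    # Hypothetical post-irradiation current gain (hFE) measured on parts from
    # several lots of the same bipolar transistor (values are illustrative only).
    lot_hfe = {
        "lot_A": [112, 118, 109, 115],
        "lot_B": [96, 101, 99, 94],
        "lot_C": [131, 127, 135, 129],
    }

    per_lot_mean = np.array([np.mean(v) for v in lot_hfe.values()])
    cv = np.std(per_lot_mean, ddof=1) / np.mean(per_lot_mean)
    print(f"lot-to-lot coefficient of variation: {cv:.1%}")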

Development of a GPU based Single Board Computer for Space Applications

ABSTRACT. Modern space missions require more autonomy and on-board processing capability. To accomplish this, high-performance computers that can operate in harsh space environments (vibration, thermal, and radiation) are required. The poster will explore the processor, form-factor standard, architecture, and analysis decisions made during the development of a GPU-based Single Board Computer for space applications.

The intent is that this poster would touch on the following Topics of Interest.

Components, Radiation, and Packaging: Current Rad-Hard-by-Design processors may not provide the performance needed for on-board processing. Qualifying modern commercial processors for use in space applications is one path to solving this performance bottleneck. This may involve radiation testing of the processor and associated DDR memories, along with PEM qualification and screening, to gain confidence that the parts will operate in the space environment. The poster would briefly describe the radiation testing performed on the GPU device.

Computing Architectures: Using a commercial processor requires mitigation techniques to manage and recover from SEUs and SEFIs. The poster would briefly describe the choices made and the implementation of the fault-mitigation approach.
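One widely used mitigation pattern for SEU-induced data corruption is triple modular redundancy with majority voting; the Python sketch below is a generic illustration of that pattern only, not the board's actual fault-mitigation implementation.

    from collections import Counter

    def tmr_vote(replicas):
        """Majority-vote three redundant copies of a computation result.
        Returns (value, corrected), where corrected flags a detected upset."""
        value, count = Counter(replicas).most_common(1)[0]
        if count < 2:
            raise RuntimeError("no majority: uncorrectable multi-bit upset")
        return value, count < len(replicas)

    # One replica corrupted by a simulated single-event upset:
    print(tmr_vote([0x3FA2, 0x3FA2, 0x3BA2]))   # detects and masks the corrupted copy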

Avionics: Adopting an industry standard for board form factor and protocols allows the board to be integrated into higher-level systems, providing modularity and scalability. VITA 78 SpaceVPX was chosen to provide these features. Additionally, leveraging open-source software tools and libraries allows for fast development and test efforts. The poster would briefly describe the industry standards utilized to enable modularity and scalability.

Machine Learning/Neural Computing: The poster would briefly describe the benefits of using GPUs for AI/ML applications and the advantages of having this capability on the satellite.

[Space Robotics Workshop] Resilient Exploration And Lunar Mapping System: A ROS 2 Multi-Robot SW Solution for Lunar Exploration

ABSTRACT. The Resilient Exploration And Lunar Mapping System 2 (REALMS2) continues the REALMS project [1]. It focuses on exploring and mapping lunar environments, with particular emphasis on resilience, using ROS 2. REALMS2 uses multiple sensors for more robust mapping results and a multi-robot architecture based on a mesh-network communication system. Multiple rovers can interact, exchange data, and act as relays for other rovers to communicate with the base station through their ad-hoc mesh network.
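As a minimal sketch of relay-style forwarding in ROS 2 (the topic names, message type, and QoS depth are assumptions for illustration, not REALMS2's actual interfaces), a node can simply republish another rover's map topic toward the base station:

    import rclpy
    from rclpy.node import Node
    from nav_msgs.msg import OccupancyGrid

    class MapRelay(Node):
        """Forward another rover's map topic toward the base station."""
        def __init__(self):
            super().__init__('map_relay')
            self.pub = self.create_publisher(OccupancyGrid, '/base_station/rover2/map', 10)
            self.sub = self.create_subscription(OccupancyGrid, '/rover2/map',
                                                self.pub.publish, 10)

    def main():
        rclpy.init()
        rclpy.spin(MapRelay())

    if __name__ == '__main__':
        main()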

[Space Robotics Workshop] CSP2Turtle: Verified Turtle Robot Plans

ABSTRACT. Software verification is an important approach to establishing the reliability of critical systems. One important area of application is in the field of robotics, as robots take on more tasks in both day-to-day areas and highly specialised domains. Our particular interest is in checking the plans that robots are expected to follow to detect errors that would lead to unreliable behaviour. Python is a popular programming language in the robotics domain through the use of the Robot Operating System (ROS) and various other libraries. Python’s Turtle package provides a mobile agent, which we formally model here using Communicating Sequential Processes (CSP). Our interactive toolchain CSP2Turtle with CSP models and Python components enables plans for the turtle agent to be verified using the FDR model-checker before being executed in Python. This means that certain classes of errors can be avoided, providing a starting point for more detailed verification of Turtle programs and more complex robotic systems. We illustrate our approach with examples of robot navigation and obstacle avoidance in a 2D grid-world. We evaluate our approach and discuss future work, including how our approach could be scaled to larger systems.
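To give a flavour of the execution side of such a toolchain, the sketch below runs a grid-world plan with Python's turtle module and refuses to enter obstacle cells; in CSP2Turtle the corresponding check is performed on the CSP model with FDR before execution, and the grid, obstacle set, and move names here are assumptions for illustration only.

    import turtle

    CELL = 40
    obstacles = {(1, 0), (2, 2)}   # hypothetical blocked cells in the 2-D grid

    def run_plan(plan, start=(0, 0)):
        """Execute a plan of grid moves; raise if it would hit an obstacle."""
        t = turtle.Turtle()
        x, y = start
        steps = {"north": (0, 1), "south": (0, -1), "east": (1, 0), "west": (-1, 0)}
        for move in plan:
            dx, dy = steps[move]
            x, y = x + dx, y + dy
            if (x, y) in obstacles:
                raise ValueError(f"plan enters obstacle at {(x, y)}")
            t.goto(x * CELL, y * CELL)
        turtle.done()

    run_plan(["north", "east", "east", "north"])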

[Space Robotics Workshop] Formal Modelling and Runtime Verification of Autonomous Grasping for Active Debris Removal

ABSTRACT. Active debris removal in space has become a necessary activity to maintain and facilitate orbital operations. Current approaches tend to adopt autonomous robotic systems which are often furnished with a robotic arm to safely capture debris by identifying a suitable grasping point. These systems are controlled by mission-critical software, where a software failure can lead to mission failure that is difficult to recover from, since the robotic systems are not easily accessible to humans. Therefore, verifying that these autonomous robotic systems function correctly is crucial. Formal verification methods enable us to analyse the software that is controlling these systems and to provide a proof of correctness that the software obeys its requirements. However, robotic systems tend not to be developed with verification in mind from the outset, which can often complicate the verification of the final algorithms and systems. In this poster, we describe the process that we used to verify a pre-existing system for autonomous grasping which is to be used for active debris removal in space. In particular, we formalise the requirements for this system using the Formal Requirements Elicitation Tool (FRET). We formally model specific software components of the system and formally verify that they adhere to their corresponding requirements using the Dafny program verifier. From the original FRET requirements, we synthesise runtime monitors using ROSMonitoring and show how these can provide runtime assurances for the system. We also describe our experimentation and analysis of the testbed and the associated simulation. We provide a detailed discussion of our approach and describe how the modularity of this particular autonomous system simplified the usually complex task of verifying a system post-development.
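As a generic illustration of the runtime-monitoring idea (this is not ROSMonitoring's API or the system's actual requirement), a monitor can consume a stream of events and flag violations of a property such as "the gripper shall not close before a grasp point has been confirmed":

    def monitor(events):
        """Check the hypothetical requirement: 'close_gripper' must never occur
        before a 'grasp_point_confirmed' event. Yields a verdict per event."""
        confirmed = False
        for event in events:
            if event == "grasp_point_confirmed":
                confirmed = True
            if event == "close_gripper" and not confirmed:
                yield (event, "VIOLATION")
            else:
                yield (event, "ok")

    trace = ["approach", "close_gripper", "grasp_point_confirmed", "close_gripper"]
    for verdict in monitor(trace):
        print(verdict)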

[Space Robotics Workshop] Integrating Formal Verification and Assurance: An Inspection Rover Case Study

ABSTRACT. The complexity and flexibility of autonomous robotic systems necessitate a range of distinct verification tools. This presents new challenges not only for design verification but also for assurance approaches. Combining distinct formal verification tools while maintaining sufficient formal coherence to provide compelling assurance evidence is difficult, and such efforts are often abandoned in favour of less formal approaches. In this poster we demonstrate, through a case study, how a variety of distinct formal techniques can be brought together in order to develop a justifiable assurance case. We use the AdvoCATE assurance case tool to guide our analyses and to integrate the artifacts from the formal methods that we use, namely FRET, CoCoSim and Event-B. While we present our methodology as applied to a specific Inspection Rover case study, we believe that this combination provides benefits in maintaining coherent formal links across development and assurance processes for a wide range of autonomous robotic systems.