CESUN 2023: 9TH INTERNATIONAL ENGINEERING SYSTEMS SYMPOSIUM
PROGRAM FOR MONDAY, NOVEMBER 6TH

07:45-08:45

Breakfast and Registration

All activities except reception and dinner will take place at: Norris University Center (1999 Campus Drive, Evanston, IL 60208)

09:00-10:30 Session 2

Panel 1: Designing Complex Systems for Uncertainty, Sustainability, and Resilience: The Case for Flexibility

Moderator: Michel-Alexandre Cardin (Associate Professor of Computational Aided Engineering, Dyson School of Design Engineering, Imperial College London)

Panelists:

  • David Broniatowski (Associate Professor of Engineering Management & Systems Engineering, The George Washington University)
  • Michel-Alexandre Cardin (Associate Professor of Computational Aided Engineering, Dyson School of Design Engineering, Imperial College London)
  • Richard de Neufville (Professor of Engineering Systems, Institute for Data, Systems and Society, MIT)
  • Tina Comes (Professor of Technology, Policy and Management, TU Delft)
  • Zoe Szajnfarber (Professor of Engineering Management & Systems Engineering, The George Washington University)

 

10:30-11:00

Coffee Break

11:00-12:30 Session 3A

Systems and Modularity

11:00
Pushing the limits: Changing Organizational Structures in IFRC’s Humanitarian Response Operations
PRESENTER: Lauren Bateman

ABSTRACT. This paper seeks to identify the limits of the typical modular humanitarian response structure with modules based around services provided by technical sectors. We investigate this question by examining the evolution of the organizational structures of one humanitarian organization, the International Federation of Red Cross and Red Crescent Societies (IFRC), in 13 recent emergency responses. We analyze the changes made to the organizational structure in each case, to separate fundamental architectural changes from those easily accommodated by the existing structure. The nature of these architectural changes can shed light on the limits of the original structure by demonstrating when and why it needed to change. Therefore, we analyzed architectural changes inductively to understand their purposes and what prompted them. We find that the typical organization structure is flexible enough to respond to a wide variety of emergencies. However, in three cases, the structure changed in ways that integrated multiple sectors, in order to support population behavior change and the rich relationship with the affected population necessitated by this goal. These findings have implications for theory on service modularity and for humanitarian practice, particularly by spreading awareness of the limits of the standard organizational structure and the opportunities for structure change to enable adaptation to novel emergencies and localized needs.

11:18
Navigating Cost, Schedule, and Performance Trade-offs through Co-Design of Modules and Interfaces
PRESENTER: Taylan Topcu

ABSTRACT. Decomposition is a critical enabler of complex system development. Modern analytical approaches that focus on the modularization process predominantly adopt a sequential approach: first identify the optimal module characteristics and then explore how interfaces can be developed to manage the residual interdependencies between these modules. However, treating interfaces as a mere facilitator of modularization decisions may lead to inferior design alternatives. We contend that the preferred decomposition approach, and the resulting design process outcomes, depend on both the set of modules and the interfaces among them. To explore this idea, we built a simulated experiment of the design process, where we could vary both module selection and interface design. The modules varied in their functional grouping structure and total number of modules in the system, while the interfaces varied in the richness of their information transfer and implementation cost. We then investigated the joint impact of these decisions on design process outcomes in performance, cost, and schedule. We found that neither module nor interface design alone determines system-level outcomes: each modularization appeared on the cost-schedule-performance Pareto frontier, but only with appropriate interface selection. This re-framing of decomposition as co-design of modules and interfaces creates novel opportunities for designers to navigate trade-offs in cost, schedule, and performance, since some interface archetypes prioritize each dimension.
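
A minimal sketch (not from the paper) of the kind of screening the abstract describes: checking which module/interface combinations are non-dominated on cost, schedule, and performance. The candidate pairs and their values are hypothetical.

```python
# Hypothetical screening of module/interface combinations on a
# cost-schedule-performance Pareto frontier (illustrative values only).

def dominates(a, b):
    """True if design a is at least as good as b on all objectives
    (lower cost, lower schedule, higher performance) and strictly better on one."""
    better_or_equal = a[0] <= b[0] and a[1] <= b[1] and a[2] >= b[2]
    strictly_better = a[0] < b[0] or a[1] < b[1] or a[2] > b[2]
    return better_or_equal and strictly_better

# (cost, schedule, performance) for each (module set, interface archetype) pair
candidates = {
    ("functional-4-modules", "rich-interface"):   (120, 18, 0.92),
    ("functional-4-modules", "sparse-interface"): (95, 24, 0.85),
    ("physical-6-modules",   "rich-interface"):   (140, 15, 0.90),
    ("physical-6-modules",   "sparse-interface"): (100, 22, 0.78),
}

pareto = [k for k, v in candidates.items()
          if not any(dominates(other, v) for o, other in candidates.items() if o != k)]
print("Non-dominated module/interface pairs:", pareto)
```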

11:36
Theory-Grounded Guidelines for Solver-Aware System Architecting (SASA)
PRESENTER: Zoe Szajnfarber

ABSTRACT. Solver-Aware System Architecting (SASA) is a new paradigm of system architecting that enables organizations to take advantage of talent and expertise from outside organizational boundaries (e.g., gig workers, the crowd). In contrast to traditional approaches to system architecting, which focus on uncertainty in the environment and assume traditional experts bounded by organizational boundaries, the basic tenet of SASA is that the joint consideration of system architecture, diverse external solvers, and contract mechanisms, early in the design process, can significantly improve systems design outcomes. This presentation will highlight the approach and results from an ongoing study whose objective is to advance the scientific understanding of SASA by formalizing the linkages among innovation processes, designer knowledge, systems architecture, and contractual structures.

11:54
Multidisciplinary engineering coordination through collective artifacts
PRESENTER: John Meluso

ABSTRACT. Many models of engineering design processes assume that team members can share solutions and coordinate. However, these assumptions break down in multidisciplinary engineering teams, where team members often complete distinct yet interrelated pieces of larger tasks. Such contexts (e.g., launch vehicle design, automotive manufacturing) make it difficult for engineers to separate the performance effects of their own design choices from the choices of interacting neighbors. Nevertheless, this work shows that engineers can overcome this challenge by coordinating with network neighbors through mediating artifacts (like collective performance assessments). When neighbors' actions influence collective outcomes, engineering teams with different coordination networks perform relatively similarly to one another. However, varying a team's network can affect performance on tasks that weight individuals' contributions by network properties. Consequently, when individuals innovate (through "exploring" searches), dense networks hurt optimization slightly by increasing uncertainty. In contrast, dense networks moderately help design optimization when engineers refine their work (through "exploiting" searches) by efficiently finding local optima. This work also shows that decentralization improves system performance when assessing networks across a battery of 34 tasks with varied qualities and difficulties. These results offer new design principles for multidisciplinary engineering contexts in which coordination proves more difficult.

11:00-12:30 Session 3B

Systems and Networks

11:00
Platform-Driven Collaboration Patterns: Structural Evolution Over Time and Scale
PRESENTER: Negin Maddah

ABSTRACT. The digital age has transformed collaboration and innovation, especially with the pandemic-induced shift to remote work. Digital platforms enable collaborative efforts by redefining boundaries, facilitating interactions, and supporting concurrent activities within shared domains. Our research examines how team size and collaboration stages affect patterns on digital platforms. Previous studies investigated network characteristics in organizations, finding relationships between an organization's size and its informal network traits. Our work extends this to digital platforms, particularly Wikipedia, an exemplary platform for understanding digital collaboration dynamics. Method: We used network analysis to study interactions among Wikipedia editors and focused on significant interactions. Networks were characterized using measures like size, average path length, clustering coefficient, and betweenness centrality to reveal hidden network patterns. Results: Traditional network models show constant or decreasing trends in clustering coefficients. However, our findings revealed increased clustering coefficients as network size grows, challenging prior human network studies. We discovered that article topics influence collaboration patterns and that larger editor networks for an article amplify sequential interactions. This enhances article content and forms a densely interconnected editing network. Initial results indicate early-stage networks in technology and business categories are broadly distributed and not intensely clustered. Some editors are central in early article development, serving as essential bridges among contributors. Our mature-stage network analysis continues. Conclusion: Our research offers insights into collaboration patterns on platforms like Wikipedia, presenting a new view on large-scale, platform-driven collaboration dynamics.
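
A minimal sketch, using networkx on a toy editor-interaction graph (not the study's Wikipedia data), of the network measures named in the abstract: size, average path length, clustering coefficient, and betweenness centrality.

```python
import networkx as nx

# Toy editor-interaction network (nodes = editors, edges = significant interactions);
# the edge list is illustrative only.
G = nx.Graph([("A", "B"), ("B", "C"), ("A", "C"), ("C", "D"), ("D", "E")])

print("size (nodes):", G.number_of_nodes())
print("average path length:", nx.average_shortest_path_length(G))
print("average clustering coefficient:", nx.average_clustering(G))
print("betweenness centrality:", nx.betweenness_centrality(G))
```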

11:18
Design for Market Systems Using Network-Based Product Competition Analysis
PRESENTER: Yinshuang Xiao

ABSTRACT. A deep understanding of the factors that influence product competition is crucial to the design of products for market systems, thus beneficial for an enterprise to maintain its competitiveness in the market. However, conducting a competition analysis faces several challenges, such as limited customer survey data and the existence of market heterogeneity, making competition difficult to quantify. To address these issues, this study first introduces a survey design to ensure the collection of reliable data on customers' preferences. Second, we present a local network-based approach to competitiveness representation that supports product competition analysis for engineering design. Taking household vacuum cleaners as a case study, the proposed representation approach offers a novel method to quantify product competitiveness in a heterogeneous market.

11:36
Measuring the Emergent Informal Social Networks of Military Units with Varying Permanent Changes of Station: An Agent-Based Approach
PRESENTER: John Caddell

ABSTRACT. This study investigates the balance between the benefits and disruptions of the Permanent Change of Station (PCS) policy in the US Army, focusing on the emergence of informal social networks. Utilizing an agent-based simulation model, we analyzed personnel movements and association networks developed over careers, exploring the impacts of reducing PCS frequency. The hypothesis posited that reducing PCS frequency could potentially strengthen inter-unit relationships at the cost of wider network connections. The findings suggest that decreasing PCS frequency enhances inter-unit connectivity with minimal effects on the broader network, presenting marginal trade-offs in network-level performance. Lower centrality at the brigade level was observable, indicating denser networks within units, while network-flow centrality remained relatively unchanged. This research provides insights into optimizing human resource policies for organizational performance by considering not only direct costs but also the dynamics of social network development within the military, highlighting the potential of incorporating social network analysis in strategic workforce management and manpower modeling.

11:54
Social Influence and Customer Preferences: A Case Study in the U.S. New Car Buyer Market
PRESENTER: Neelam Modi

ABSTRACT. Understanding customer preferences is crucial to designing successful products. To understand these preferences, product engineers often turn to customer preference models. However, traditional utility-based approaches tend to oversimplify the complex customer decision-making process. In reality, various factors such as social influence, product competition, and the multi-stage nature of decision-making can all play significant roles. To address these complexities, we adopt a multi-dimensional network perspective in which nodes represent either customers or products, and ties represent one of three types of relations: customer-to-customer, product-to-product, or customer-to-product.

In this particular study, we narrow our focus to customer-to-customer relations and explore the intricate role of social influence on different decision-making stages. While the connection between social influence and customer preferences is well-established, previous research has predominantly relied on synthetic networks. In contrast, we leverage empirical social network data obtained through a survey of new car buyers in the United States. We collected a wide range of information, including demographics, car choices, feature preferences, opinions, and usage contexts.

To analyze this data, we employ a series of autologistic actor-attribute models (ALAAMs), a robust tool for examining social influence within network data. Our preliminary findings highlight the significant impact of social influence, or "contagion," on customers' car feature preferences (e.g., vehicle reliability). Ultimately, our research aims to provide a more profound understanding of how social influence shapes customer decision-making, offering valuable insights to product designers striving to create disruptive products in the market.

13:45-15:15 Session 4A

Systems Engineering and Design

13:45
Intervention in Collaborative System Design for Mitigating Coordination Failure
PRESENTER: Alkim Avsar

ABSTRACT. Collaborative systems include multiple actors working together to achieve a complex goal that cannot be achieved by a single actor and possess operational and managerial interdependence. Collaborative systems problems have high levels of uncertainty due to self-interested actors and limited availability of information. Risk arises from both technical and social sources, making actor interactions critical. This paper introduces an intervention, the system mediator, to increase efficiencies in collaborative systems by ensuring that all actors have essential technical and social information. The paper includes an experimental study with a control group without the system mediator (24 participants) and a treatment group with the system mediator (28 participants). Each session includes 30 experimental tasks, with four participants and two pairs in each session, yielding 780 design task observations in total. Experimental tasks include a bi-level game with design decisions on the lower level and strategic decisions on the upper level with Stag-Hunt strategic dynamics. The analysis investigates, using three logistic regression models, whether there is a significant difference between the control and treatment groups in rates of successful collaboration, mutual independence, and coordination failure. Analysis results show that the presence of the system mediator statistically significantly increases successful collaboration rates and decreases coordination failure rates. Results also show that social closeness and task difficulty difference levels contribute to outcomes. The paper shows that an intervention in the design of the collaborative system, by providing further technical and social information to the actors, increases the overall efficiency of the system outcomes.
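
A minimal sketch of the kind of logistic regression comparison described (a treatment indicator for the system mediator plus covariates). The data, variable names, and effect sizes below are hypothetical, simulated only to make the example runnable with statsmodels.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # hypothetical number of design-task observations
df = pd.DataFrame({
    "mediator_present": rng.integers(0, 2, n),       # treatment indicator
    "social_closeness": rng.uniform(0, 1, n),
    "task_difficulty_gap": rng.uniform(0, 1, n),
})
# Simulated outcome: the mediator and social closeness raise the odds of success (illustrative effects).
logit_p = -0.5 + 1.2 * df.mediator_present + 1.0 * df.social_closeness - 0.8 * df.task_difficulty_gap
df["collaboration_success"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# One of the three outcome models: does the mediator raise the odds of successful collaboration?
model = smf.logit(
    "collaboration_success ~ mediator_present + social_closeness + task_difficulty_gap",
    data=df).fit(disp=False)
print(model.summary())
```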

14:03
Quantify Project Rework to Improve Planning Efficiency in Project Scheduling
PRESENTER: Chenwei Gui

ABSTRACT. Rework poses significant challenges in project management by causing delays and complicating scheduling. To tackle this, we introduce a two-step simulation model that quantifies rework and accounts for human factors. In the first step, the project scope is decomposed into work breakdown structures, allowing for simplified management and reduced rework propagation. The second step involves dynamic resource allocation, considering both experience and availability of human resources. Combining these steps, the model produces an optimized schedule, visualized through a Gantt Chart, that factors in rework risks as well as time and cost constraints. Validation through simulations yielded more precise estimates for project parameters like makespan and resource utilization. The application of the model to a real-world system design project confirms its practical utility in achieving efficient project schedules.
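
A minimal Monte Carlo sketch (not the paper's two-step model) of how rework can be quantified in a schedule: each task may trigger rework with some probability, and repeated simulation yields a distribution of project makespan. All task names, durations, and probabilities are illustrative.

```python
import random

random.seed(42)

tasks = [("design", 10), ("build", 15), ("test", 5)]   # (name, nominal duration in days)
REWORK_PROB = 0.3      # chance a task must be partially redone (illustrative)
REWORK_FRACTION = 0.5  # fraction of the task repeated when rework occurs

def simulate_makespan():
    """Serial schedule with stochastic, possibly recurring rework."""
    total = 0.0
    for _, duration in tasks:
        total += duration
        while random.random() < REWORK_PROB:
            total += duration * REWORK_FRACTION
    return total

runs = [simulate_makespan() for _ in range(10_000)]
print("mean makespan:", sum(runs) / len(runs))
print("90th percentile:", sorted(runs)[int(0.9 * len(runs))])
```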

14:21
Understanding the Tradeoffs Behind Using Margin for Changeability
PRESENTER: Aditya Singh

ABSTRACT. Complex engineered systems face increasingly protracted design cycles and lengthy operating cycles, leading to discrepancies in expected and actual operating conditions as environments change over time. To maintain value in the face of such change, systems engineering literature has often focused on designing for changeability, which encompasses design strategies and associated mechanisms that enable change. Literature has long assumed that future changes are known or can be readily predicted by system designers, but this assumption does not match reality. Often, systems must be modified in unexpected ways to maintain value, and design mechanisms such as real options and modularity fail to enable such changes since they require some knowledge of what will be changed in the future. To understand how unexpected changes are implemented, we inductively studied the C-130, a setting where one platform performed many missions, and close air support in Desert Storm, a setting where many platforms performed one mission, to understand how systems are changed to gain new capabilities. Through this examination, we find that margin and operational changes are key mechanisms for changeability that have been largely ignored by literature. Margin gives designers a ‘blank canvas’ to add to their systems in the future without having to remove or swap anything out. Operational changes are non-form changes driven by system users who change how their systems are used to gain new capabilities. This paper also studies the relationship between these mechanisms to shed more light on how changeability is achieved in the field.

14:39
A Requirements Verification Ontology Stack to Support Semantic Systems Engineering
PRESENTER: Joe Gregory

ABSTRACT. One area of digital engineering that has received significant attention in recent years is the digital thread. In particular, seamless traceability from requirements through models to simulation results is becoming an increasingly important aspect of this digitalization effort. In this paper, we present an approach to requirements verification that leverages semantic web technologies to support technical and data interoperability with regard to requirements, system architecture, test, and evaluation. The Requirement Verification Ontology Stack (RVOS) has been developed to support this. The RVOS has been written in the Ontological Modeling Language (OML) and is an application of the University of Arizona (UA) Ontology Stack, which is built on the Basic Formal Ontology (BFO) and the Common Core Ontologies (CCO). The Violet tool is used to aggregate data from multiple engineering tools and is able to generate an OML graph representation of the entire dataset. This dataset can then be validated against the RVOS and can be reasoned and queried upon by the user. This approach has been applied to a notional LEO spacecraft design project. System-level requirements, the physical architecture of the spacecraft, and test plans have been captured in different engineering tools. We use Violet to aggregate this data and generate an OML graph representation of the dataset. We show that semantic web technologies, such as reasoning, validation, and querying, have the potential to add value and efficiency to the verification process.
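
A minimal sketch of the kind of traceability query the abstract describes, expressed here over a plain RDF graph with rdflib. The vocabulary and instance data below are invented for illustration and are not the RVOS/OML schema or the Violet tooling.

```python
from rdflib import Graph, Namespace

# Hypothetical vocabulary illustrating a requirement-to-test traceability pattern.
EX = Namespace("http://example.org/rvos-demo#")
g = Graph()
g.add((EX.REQ_001, EX.verifiedBy, EX.TEST_007))
g.add((EX.REQ_002, EX.verifiedBy, EX.TEST_012))
g.add((EX.TEST_007, EX.hasResult, EX.Passed))

# Which requirements have a verifying test with a recorded result?
query = """
SELECT ?req ?test ?result WHERE {
    ?req  <http://example.org/rvos-demo#verifiedBy> ?test .
    ?test <http://example.org/rvos-demo#hasResult>  ?result .
}
"""
for req, test, result in g.query(query):
    print(req, "->", test, "=", result)
```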

13:45-15:15 Session 4B

Artificial Intelligence in Systems

13:45
Towards Designing Empathetic and Trustworthy AI Chatbots: An Exploratory Study
PRESENTER: Ting Liao

ABSTRACT. Intelligent agents like chatbots have recently attracted enormous attention due to their advanced capability to augment humans in information gathering and task execution. While they are designed to be more understanding, the existing literature lacks deep knowledge of how chatbots should track and respond to users' emotions in real time with empathy. In this exploratory study, we propose an innovative emotion-detecting system that combines CNN-based facial expression recognition algorithms with text-based sentiment analysis to improve real-time interactions between users and an AI-powered chatbot by recognizing users' emotional expressions and delivering empathetic responses appropriately. We present preliminary results of a human-subject study with distinct versions of chatbots. We confirm that adding facial expression detection improves the predictive performance of models of user-perceived trust and empathy.

14:03
AI Driven Governance of Complex Systems Through Network Intervention: A Hierarchical RL Approach with Graph Neural Network
PRESENTER: Qiliang Chen

ABSTRACT. The rapid advancement of networking technologies has driven the evolution of network systems to meet the increasing demands of users and businesses. However, managing and optimizing these complex and dynamic systems pose significant challenges because of the complicated network structure and flexible network dynamics. Traditional methods for network optimization are time-consuming, limited in capturing system complexity, and struggle to adapt to changing network dynamics. In this study, we present a novel approach called Hierarchical Graph Reinforcement Learning (HGRL) for network topology intervention. Our method demonstrates significant improvements in network intervention performance by efficiently manipulating links. Moreover, it offers valuable insights into understanding the relationships between network dynamics and topology evolution from the behaviors of HGRL's learned policy.

14:21
Can Requirements Engineering Be Used to Manage Systemic Bias in AI Systems?

ABSTRACT. The NIST AI Risk Management Framework (RMF) has identified mitigating systemic bias in AI systems as a key enabler of trustworthiness. The RMF categorizes bias in AI as statistical, cognitive, and systemic, with the latter being especially hard to measure, and therefore manage, because it pertains to institutionalized negative externalities affecting marginalized communities. Here, we argue that the systems engineering requirements elicitation process provides a useful framework for managing, and ultimately mitigating, systemic bias in AI and other engineered systems. Specifically, the requirements elicitation process explicitly recognizes power relationships among stakeholders. Although traditional requirements elicitation techniques aim to elicit requirements from those who have high power and high interest, the absence of requirements reflecting the needs of low-power stakeholders is one major source of systemic bias. Thus, explicit documentation of which stakeholders were included in the requirements elicitation process, and how decisions were made regarding whose requirements to elicit, can make the process of requirements elicitation more transparent and subject to review. This transparency can, in turn, be used to mitigate systemic bias in the design of engineered systems.

Three case studies are presented to illustrate the proposed approach: speed camera systems, facial recognition systems, and health utilization predictions. Each case demonstrates how explicit requirements engineering can uncover and mitigate systemic bias.

In conclusion, requirements engineering can make these biases explicit, aiding in their management and governance.

14:39
Generative Agent-Based Modeling: Simulating Social Dynamics With Large Language Models

ABSTRACT. Agent-based modeling (ABM) has long provided a valuable framework for understanding complex systems and emergent phenomena. However, traditional ABM approaches face limitations in capturing the intricate nature of human decision-making and the dynamic responses that shape lived experiences. This study puts forth a new solution: generative ABM (gABM). By leveraging large language models, gABM enables the creation of original, socially believable agents with fluid behaviors that closely resemble human interactions. Our results demonstrate the efficacy of gABM in producing computational proxies of human behavior, bridging the divide between individual agency and system-level outcomes. The application of gABM holds immense potential for simulating the nuanced dynamics of social interactions and complex societal phenomena across diverse domains. Overall, this work establishes gABM as a powerful new technique for modeling the emergent complexity of human systems, paving the way for enhanced ABM capabilities across the sciences.

14:57
All Models Fail, But Some Are Useful: Enabling Informed Decision-Making by Non-Expert Acquirers of AI-Embedded Systems
PRESENTER: Chris Krueger

ABSTRACT. Artificial Intelligence (AI) models are quickly becoming ubiquitous in systems throughout all sectors of society. Embedding them often requires that project managers procure the AI model from outside the organization. Faced with the decision of which model to procure, project managers must choose based on limited information about how the model behaves. Standard information includes the accuracy, precision, and recall of the models. There are two issues with this situation. First, by focusing on success, these metrics mask the risks associated with differences in the failure modes of the models. Second, the typical decision-maker is not an expert in AI, and metrics designed for developers may be less effective in allowing project managers to gain the necessary insights. To understand how to communicate efficiently such that non-AI experts can understand model behavior and make informed procurement decisions, an experiment was conducted to test how different information formats provide foundational knowledge of model behavior. The results not only advance the conversation regarding explainable AI but also inform the acquisition branches in the defense sector.

15:15-16:15 Session 5

Posters and Coffee

Mission Engineering and Acquisition Modernization for Joint All-Domain Command and Control
PRESENTER: Beatrice Lambert

ABSTRACT. Authors: Bea Lambert (NMSU), Darryl Farber (Penn State University), and Edward Pines (NMSU).

The U.S. Department of Defense's vision for the future of Joint All-Domain Command and Control (JADC2) of U.S. military and allied nations will require interoperability among computer and information systems, and between hardware and software systems, to a much greater extent than at present in 2023. Achieving the interoperability that will enable the joint force to accomplish its missions requires advances in mission engineering and the modernization of acquisition processes. The goal of this research is to model the policy and strategy processes and to develop scenarios that inform decision makers of the risks and trade-offs they face in strategic investments in defense systems, using a mission engineering framework.

A defense systems analysis that incorporates system dynamics modeling and case studies is presented. The case studies are the Army's Future Combat Systems and the Navy's Freedom/Independence class littoral combat ship program. Vensim is used for simulations. All analysis is derived from and based upon public, open-source information.

Modernizing the acquisition process to meet the needs of U.S. forces, and by extension interoperability with NATO allied forces, has the potential to reduce the risk of cost overruns and system delivery delays. Identifying, analyzing, and developing risk management strategies will be an outcome of this research, an applied systems analysis that is expected to contribute to DOD's mission engineering success.


A Function Selection-Based Framework for Representing Extreme Novelty in Models of Design Processes

ABSTRACT. Increasingly, complex system design organizations are structuring their innovation processes to leverage contributions from non-traditional partners. Prior work demonstrated the potential to improve outcomes through Solver Aware Systems Architecting (SASA) by matching the product's architecture to the unique distribution of expertise available outside the domain/organization. To implement this, a new modeling approach is needed that captures the potential for highly unconventional solutions up front.

Common screening models used in the architecture phase are either parametric (based on past solutions) or high-level abstractions of the underlying physics. While these are effective for their intended use (conceptual design within a traditional organization), they are limited to a set paradigm. Open Innovation (OI), which hinges on the potential to identify solutions that break performance curves by adopting novel mechanisms and solving approaches, cannot be represented in either framework.

To overcome this, this poster presents an alternative modeling framework that uses a decision hierarchy and represents search as a set of categorical choices through a functional tree of principles and embodiments, instead of a continuous parameter space. The likelihood of identifying a high-quality solution varies by paradigm. These differences are represented as payout distributions in the roots of the tree.

To illustrate these ideas, we present preliminary results from a case study of the design of an autonomous robotic manipulator. We demonstrate the feasibility of representing the search space in these ways and the ability of this type of model to more fairly capture the potential for novelty and feasibility in highly unconventional approaches.

Embedding Physical Knowledge in Deep Neural Networks for Predicting Metamaterial Phononic Properties
PRESENTER: Hongyi Xu

ABSTRACT. Phononic metamaterials have the capability to manipulate the propagation of mechanical waves. This paper presents two physics-embedded deep convolutional neural networks to predict the phonon dispersion curves of 2D metamaterials: (1) a transfer learning-based convolutional neural network (TLCNN) and (2) a physics-guided convolutional neural network (PGCNN). The physics knowledge is embedded into the two proposed models by modifying the loss function of the convolutional neural network (CNN). A comparative study among CNN, TLCNN and PGCNN is conducted to understand the relative merits. It is demonstrated that the proposed TLCNN and PGCNN have the potential to improve prediction accuracy with a limited amount of input data.
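
A minimal PyTorch sketch of the general idea of embedding physics knowledge by modifying the loss function, as the abstract describes. The physics penalty shown here (non-negativity of predicted dispersion frequencies) is a stand-in assumption, not the constraint used in the paper's TLCNN/PGCNN.

```python
import torch
import torch.nn as nn

class PhysicsGuidedLoss(nn.Module):
    """Data loss plus a penalty for physically implausible predictions.
    The penalty (frequencies must be non-negative) is only an illustrative
    stand-in for the paper's physics constraints."""
    def __init__(self, weight=0.1):
        super().__init__()
        self.mse = nn.MSELoss()
        self.weight = weight

    def forward(self, pred, target):
        data_loss = self.mse(pred, target)
        physics_penalty = torch.relu(-pred).mean()   # penalize negative frequencies
        return data_loss + self.weight * physics_penalty

# Usage with any CNN predicting dispersion-curve samples from a geometry image:
criterion = PhysicsGuidedLoss(weight=0.1)
pred = torch.rand(8, 50, requires_grad=True)   # batch of predicted dispersion values (illustrative)
target = torch.rand(8, 50)
loss = criterion(pred, target)
loss.backward()
```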

Disaster Information Processing and Extraction Using ChatGPT
PRESENTER: Zaid Kbah

ABSTRACT. Information management officers play an important role during disasters. Their responsibilities encompass processing and analyzing the influx of unstructured information, in free-text format, from various sources and the subsequent dissemination of relevant information to stakeholders, aiming to support the decision-making process. This job requires considerable time and effort. The emergence of advanced artificial intelligence models, such as ChatGPT, has the potential to transform logistics information extraction tasks by expediting and automating information extraction processes and, conceivably, substituting for humans in certain aspects of the information management workflow. This research undertakes an examination and evaluation of ChatGPT's performance vis-à-vis human capabilities in the context of extracting logistics and infrastructure-related information during disaster events, thus shedding light on the extent to which artificial intelligence technologies can parallel, or even surpass, human proficiency in this domain.

A Proposal of Multi-agent Modeling Approach in SPS Line Simulation Considering Human Centered Design
PRESENTER: Lei Shen

ABSTRACT. Manufacturing simulation is useful for the efficient operation of a custom manufacturing line (SPS line), but methods have not yet been systematized for the construction of human-centered simulation models. Current ergonomics has not yet developed standards for worker well-being-oriented manufacturing; thus, this research aims to support a human-centered approach in production systems. In addition, with the advancement of digital twin technology, there is a growing demand for the digitization and modeling of humans. Therefore, this study proposes a concept for the modeling and simulation of workers in the SPS line work environment based on these ideas.

The Utility of AI and ML Tools to Design Sustainable Product and Production Systems: Review, Applications and Limitations
PRESENTER: Harrison Kim

ABSTRACT. The momentum of AI-based tools and ML algorithms (i.e., in terms of their recent development, high performance, and increased adoption) presents both new opportunities and challenges to engineers and designers in various tasks (e.g., data fetching, automation, etc.). In the meantime, the actual use of the newly developed AI and ML capabilities to support the sustainable design of product-service systems remains underexplored. Along this line, this research work aims to provide concrete and practical answers to the following research question: to what extent can the current state of AI and ML artifacts be leveraged to enhance the sustainability of products during the design and development phases? To do so, combining a state-of-the-art literature survey with laboratory experimentation and case studies, the actual contribution and potential utility of AI/ML-based techniques are positioned within the product design and development process. For instance, deep learning-based computer vision (CV) can be used to determine the wear state of products. In that case, it has been shown that CV can be used to extract information from non-smart connected products and enable smart product-service systems that can contribute to sustainability in manifold ways: redesign, process optimization, and remanufacturing. On the other hand, natural language processing (NLP) algorithms can help automate the analysis of online customer product reviews to identify (un)sustainable use patterns and generate sustainable design leads. Last but not least, ML has recently been used to develop surrogate life cycle assessments (LCAs), which enable the prediction of future products' life cycle environmental impacts based on design-phase product characteristics.

An agent-based model for energy justice and technology adoption in the residential building sector

ABSTRACT. Energy justice continues to emerge as a policy objective across the United States. As such, methods and models to account for both the technical and social contributions of changes in the energy system are necessary. This research presents an agent-based model for technology adoption in the residential building sector. By incorporating community characteristics, technology capabilities, and technology perceptions, the model offers a fundamental tool for considering how micro-level actions affect outcomes in the just energy transition.

Investigation of External Shock on Science System: A Case Study on German Science System
PRESENTER: Huaxia Zhou

ABSTRACT. Scientific knowledge has transformed society and the economy in significant ways, and science policy has played a crucial role in shaping the trajectory of scientific knowledge. To better understand the relationship between policy change and scientific outcomes, we take advantage of the natural experiment that occurred in Germany. The German science system underwent a process of first separation and then reunification, providing a unique opportunity to evaluate the impact of external shock on scientific development. Despite the importance of this historical event, little quantitative research has examined Germany's policy transformation of the scientific system from a whole to two divided regions, and then back to being whole again. Using a large-scale repository of publication metadata, we extracted 2 million publications authored by 1.5 million German scholars to examine the external shock's effect on German research output, the shift in German research focus, and the collaboration structure of German scholars. We found that East German scholars narrowed the gap in publication volume with their West German counterparts shortly after reunification, although the gap still exists, and that the disciplinary research focus of East and West Germany aligned after reunification. Collaboration between the East and West German scientific communities involves more scholars and has kept increasing since reunification. While these findings have limitations due to the scope of the publication metadata, they provide insights into the impact of policy change on scientific development. Future research can build on these findings to better understand the relationship between policy and science and guide the future of scientific development.

Desertification: An Agent-Based Model
PRESENTER: Christine King

ABSTRACT. The adverse effects of desertification are widely known, and being able to identify the desertification process is important in combating and reversing its effects. Through the modeling software NetLogo, we replicate a simplistic system of desertification and methods of combating it via an agent-based model. While there are many different models in the NetLogo library, the topic of desertification is overlooked. To address this gap, we created a base model that shows a simplistic design of desertification. The main aspect of the model is to show the impact of vegetation on soil degradation. Currently, the model works on a fundamental level. The model allows for adjustment of the starting amount of plants, which affects the soil moisture (displayed by a darker brown color). Additionally, each plant has a reproduction rate; if there is enough moisture, the plant has a chance to reproduce. After a certain amount of time, the plant will die and the soil around it will be affected. The model is a race against time to see if the plants will be able to maintain the moisture in the soil before dying out. This is but the first step in creating an agent-based model that shows the complexity behind desertification, as there are still many things that can be incorporated to further develop the relationships between the different important forces in desertification.
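
A minimal grid-based sketch, in Python rather than NetLogo, of the vegetation/soil-moisture feedback the abstract describes. All rates and thresholds are illustrative assumptions, not the model's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
SIZE, STEPS = 20, 100
moisture = np.full((SIZE, SIZE), 0.5)          # soil moisture per patch
plants = rng.random((SIZE, SIZE)) < 0.2        # initial vegetation

for _ in range(STEPS):
    # Vegetated patches retain moisture; bare patches dry out (illustrative rates).
    moisture = np.clip(moisture + np.where(plants, 0.02, -0.03), 0.0, 1.0)
    # Plants spread to a neighboring patch where moisture suffices, and die with small probability.
    births = plants & (moisture > 0.4) & (rng.random((SIZE, SIZE)) < 0.1)
    plants = (plants | np.roll(births, 1, axis=0)) & (rng.random((SIZE, SIZE)) > 0.05)

print("final vegetated fraction:", plants.mean())
print("final mean soil moisture:", moisture.mean())
```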

Preference Detection Harnessing Low-Cost Portable Electroencephalography and Facial Behavior Markers
PRESENTER: Sang Won Bae

ABSTRACT. Delivering personalized recommendations can improve user satisfaction. To do this, understanding user preference is critical to developing such recommender systems; however, existing studies mainly rely on high-cost devices and heavy computation to detect preference. In this work, we propose a multimodal framework in which facial expressions and neural signals captured by low-cost portable electroencephalography (EEG) devices are used to identify a user's preference. We found that EEG combined with facial behavior features improves preference detection, specifically whether a user likes or dislikes the given face images in controlled experiments. Further, we introduce a richer set of objective markers leveraging EEG-based neural features and facial behavior markers that contribute to preference detection. We demonstrate multimodal preference detection using a commercialized portable EEG device, which can provide an efficient way to approach user preference detection in designing personalized recommendation systems in real-world settings.

Revolutionizing Computer-Aided Design Systems: CAD Sequence Inference from Product Image
PRESENTER: Xingang Li

ABSTRACT. Computer-aided design (CAD) systems are crucial for streamlining product development in the design and engineering processes. Contemporary CAD systems, such as Fusion 360 and SOLIDWORKS, enable designers to create and modify CAD models through a sequence of CAD operations. A CAD sequence can result in a 3D CAD model, which can also enable flexibility in modifying steps and facilitate a better understanding of the historical 3D modeling process. In specific scenarios, the CAD model of a product may not be readily available due to various reasons, including outdated documentation and the lack of digital records. Reverse engineering (RE) is employed to overcome these obstacles, utilizing measurement and analysis tools to reconstruct CAD models. Integrating RE with CAD systems allows designers to leverage the advantages of existing products while incorporating their own innovative ideas and improvements. However, current RE techniques require sophisticated tools to obtain the point clouds of the product and can only generate 3D models without providing a CAD sequence to facilitate a more flexible design. To that end, we pioneer the direction of generating a CAD sequence based on a single image of a product. The proposed method achieved a high level of prediction accuracy based on a synthesized dataset, which shows the potential to be integrated with existing CAD systems to make the RE process more accessible to a wider audience, promoting design collaboration and fostering design democratization.

Fuzzy Associative Memory and Deep Learning Network Model Interface with Transplant Surgeon in Assessing Hard-to-Place Kidneys for Use in Digital Twin Model
PRESENTER: Rachel Dzieran

ABSTRACT. In this research, a conceptual model is presented that will be integrated into an Artificial Intelligence (AI) enabled decision-making tool being developed to facilitate transplant surgeon assessment of an existing deep learning model, including consideration for individualized surgeon practices and assessments. AI machine learning for healthcare decision making has not yet been widely adopted or accepted. The organ procurement decision-making process is complex, involving a transdisciplinary approach. Kidneys identified as hard-to-place can further complicate the evaluation process when determining donor acceptance and is motivation for an AI enabled decision-making tool. The conceptual model includes level of trust experienced by the transplant surgeon based on outcomes from design of experiment results. The intent is to provide another parallel model through fuzzy associative memory that captures individual surgeon expertise. It is anticipated that in the decision-making process, the transplant surgeon will have two models available for use when making placement decisions of a given kidney case. The plan is to set up and utilize an interface for transplant surgeon interaction with the AI machine learning model to enable testing of the conceptual model.

A game theoretic agent-based framework for distributed space systems planning
PRESENTER: Qian Shi

ABSTRACT. The design, operation, and maintenance of distributed space systems (e.g., satellite constellations) at a global scale presents a system-of-systems problem that involves many managerially and operationally independent stakeholders. While these systems often operate competitively due to commercial and/or national interests, ensuring their safe operation and avoiding collisions is a shared concern for all asset owners. In this work, we propose a game theoretic framework to model and analyze the dynamics of competition and cooperation. We demonstrate the use of this framework with a techno-economic model on a small-scale sequential game involving two agents and potential customers. We also explain and illustrate how future work can build a game theoretic agent-based toolbox to support mechanism design to optimize satellite constellation design and planning at a global level.
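
A minimal backward-induction sketch of a two-agent sequential game of the general kind the framework analyzes. The actions and payoff numbers below are invented for illustration and are not the paper's techno-economic model.

```python
# Hypothetical two-operator sequential game: the leader chooses whether to share
# conjunction (collision-avoidance) data, then the follower chooses to cooperate
# or free-ride. Payoffs are (leader, follower), illustrative only.
payoffs = {
    ("share", "cooperate"):    (5, 5),
    ("share", "free_ride"):    (1, 6),
    ("withhold", "cooperate"): (6, 1),
    ("withhold", "free_ride"): (2, 2),
}

def follower_best_reply(leader_action):
    return max(["cooperate", "free_ride"],
               key=lambda a: payoffs[(leader_action, a)][1])

def leader_best_action():
    return max(["share", "withhold"],
               key=lambda a: payoffs[(a, follower_best_reply(a))][0])

leader = leader_best_action()
print("subgame-perfect outcome:", leader, follower_best_reply(leader))
```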

How Should Technical Measures Be Selected? An Investigation Into Published Guidance
PRESENTER: Casey Eaton

ABSTRACT. Technical measures are used as a basis for decision making in the design of large-scale, complex systems. In this use, technical measures inherently impact a system's design by both restricting and molding the decision space. Consequently, which technical measures a systems designer selects is important. This research answers the question: "What guidance is published on how to select technical measures for the design of large-scale, complex engineered systems?" A systematic review is used to identify all sources on technical measure selection that satisfy predetermined inclusion criteria. Over 2,000 guidance statements for the selection of technical measures are extracted from over 70 sources identified through the systematic review. The guidance statements are analyzed via content analysis, identifying five types of guidance: 1) the qualities technical measures should exhibit, 2) the quantity of technical measures that should be selected, 3) where technical measures should be derived from, 4) the timing of the selection of technical measures, and 5) guidance using examples of technical measures. Current guidance most often focuses on general qualities that measures should exhibit (such as being specific or measurable). Little guidance is observed on how to assess whether a measure exhibits such a quality. The guidance leaves the selection of a critical piece of systems design largely up to the best judgment of the systems engineer. Many sets of technical measures could potentially satisfy these types of guidance statements while directing the design differently. Future research will investigate differing sets of technical measures and their impacts on the decision space.

Developing a digital mission engineering framework: Bridging the gap between current practices and digital-enabled mission engineering solutions
PRESENTER: Dalia Bekdache

ABSTRACT. The defense and aerospace industries are undergoing rapid digital transformation, highlighting the need for a standardized and digital-enabled mission engineering framework. Identified gaps in current mission engineering practices include vague guidance and underutilization of integrated digital tools. To address these gaps, the authors have dedicated previous efforts towards developing a comprehensive mission engineering methodology that effectively utilizes digital tools to generate model-based systems engineering (MBSE) artifacts. The framework synthesizes best practices of mission engineering from multiple existing sources that would help identify the digital MBSE artifacts that can be generated throughout the process.

Recognizing the urgency to keep pace with advancing digital technologies, this research aims to further refine and extend the framework by creating a more detailed digital mission engineering framework that accounts for mission engineering elements. To test the efficacy of the developed framework, a mission scenario centered around active debris remediation is constructed and compared against a similar mission that was not built using the digital mission engineering framework. The findings inform the ongoing digital transformation efforts in the defense and aerospace sectors, offering valuable insights into the real-world application of digital engineering (DE) and MBSE within mission engineering (ME). Examples of real-world applications are included and discussed as use cases for continued research efforts. The findings reveal that the framework enables engineers and stakeholders to benefit from standardized practices, enhanced collaboration, and improved decision-making. It marks a crucial advancement in fully leveraging digital engineering in mission engineering, poised to transform the defense and aerospace industries.

Integrating Aerospace Structural Mechanics Problems into System-of-Systems Engineering Education: Fostering Interdisciplinary Learning
PRESENTER: Waterloo Tsutsui

ABSTRACT. The primary focus of this research is to explore the integration of aerospace structural mechanics problems into System-of-Systems (SoS) engineering education, emphasizing the benefits of interdisciplinary approaches in engineering pedagogy. While initially distinct, SoS engineering and structural mechanics exhibit similarities in analyzing complex systems and structures, encompassing factors such as material selection, manufacturing processes, and performance optimization. In both fields, mathematical models and computational tools are employed to comprehend system behavior based on various inputs. Despite their shared objectives, SoS engineering and structural mechanics differ in focus and approach. While SoS engineering focuses on designing and managing complex systems throughout their lifecycle, incorporating technical and non-technical considerations, structural mechanics is grounded in physics, studying the impact of forces and motion on deformable bodies.

This presentation demonstrates practical examples and strategies for integrating aerospace structural mechanics problems into SoS engineering education, highlighting the outcomes and benefits of this interdisciplinary approach and enabling educators to adopt an SoS-inspired framework within their teaching. By incorporating aerospace structural mechanics problems with SoS principles, students better understand how structural mechanics align with broader complex systems, thereby enhancing student learning, fostering critical thinking, and nurturing a holistic comprehension of complex engineering systems. This interdisciplinary approach equips students with comprehensive perspectives and strengthens their capacity to tackle real-world engineering challenges, filling a crucial gap often observed in traditional university education. This presentation will be valuable to engineering educators, curriculum designers, and researchers interested in innovative approaches to integrate diverse disciplines in effective engineering education.

Using Ensemble Explanations to Model and Understand System Interactions

ABSTRACT. Explainable AI (XAI) methods are a class of post-hoc analysis tools, of which there is a portfolio of algorithms, created to increase transparency into the functioning of an ML model and provide interpretations of the data. Unfortunately, while the options are abundant, the choice of XAI algorithm is still an arduous process, and their performance has no discernible evaluation metric. To bridge this gap, in this research we propose ensembling XAI methods, where multiple XAI methods, each with their own benefits, are used to assess a machine learning model and its interpretations of the system. Further, we assess the consensus, or lack thereof, among the interpretations produced as a way to evaluate and draw meaningful conclusions about the system interactions. Consensus among XAI outputs contributes towards higher confidence in the interpretations being made. Ensembling diverse XAI methods also removes the restrictions of individual explanation models, such as limited dimensionality, model specificity, and locality of explanations. In this research, we investigate data from NASA's Commercial Modular Aero-Propulsion System Simulation (CMAPSS) to understand system variables and interactions that result in jet engine degradation. Our approach uses an ML model to map the data to forecast the remaining useful life of the engine. Using the ensemble explainability framework, we then identify and quantify the influential system variables driving engine degradation. The framework also allows us to determine interactions among the variables with higher confidence.
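
A minimal sketch of the ensembling idea: compute feature importances with two different post-hoc methods on the same model and check their rank agreement as a crude consensus measure. This uses a generic synthetic regression dataset rather than CMAPSS, and impurity/permutation importances as stand-ins for the paper's XAI portfolio.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from scipy.stats import spearmanr

# Stand-in data; the study itself uses NASA's CMAPSS remaining-useful-life data.
X, y = make_regression(n_samples=500, n_features=8, n_informative=4, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

# Two different post-hoc "explanations" of the same model.
imp_impurity = model.feature_importances_
imp_permutation = permutation_importance(model, X, y, n_repeats=10, random_state=0).importances_mean

# Consensus check: do the two methods rank the system variables similarly?
rho, _ = spearmanr(imp_impurity, imp_permutation)
print("rank agreement (Spearman rho):", round(rho, 3))
print("top features (impurity):   ", np.argsort(imp_impurity)[::-1][:4])
print("top features (permutation):", np.argsort(imp_permutation)[::-1][:4])
```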

Category theory concepts for systems engineering
PRESENTER: David Perner

ABSTRACT. Systems engineering has relied heavily on heuristics in its approaches. This reliance is likely due to systems engineering lacking the theoretical foundations that underpin other engineering disciplines, a lack that has limited its ability to justify its approaches, explain failures, and suggest improvements. This poster proposes category theory, a branch of mathematics specializing in conceptual unification, as a potential theoretical foundation for systems engineering. Definitions of the term system, drawing on a number of sources, will be synthesized and then expressed using concepts from category theory. Simple proofs will be developed showing how these category theory expressions obey the requirements of a category. A definition for the term systems engineering will be synthesized from multiple sources and expressed using category theory concepts. Potential connections from the category theory expression of systems engineering to other systems engineering concepts will be explored. Finally, necessary elements for a foundational theory identified in past research will be examined and compared to the developed expressions.

Heuristics for Solver Aware Systems Architecting (SASA): A Reinforcement Learning Approach
PRESENTER: Vikranth S. Gadi

ABSTRACT. The assignment of solvers to complex system design problems plays a crucial role in achieving innovative solutions. While previous research has highlighted the limitations of relying solely on domain experts for solving such problems, the exploration of alternative solver types, including novices and specialists from adjacent domains, and architecture-based assignment, called Solver-Aware System Architecting (SASA), has shown promising results.

The complexity and diverse nature of system design problems necessitate the development of efficient heuristics to guide solver selection. However, due to the vast number of possible combinations of problem-solver pairs, devising effective heuristics becomes a challenging task. To address this challenge, we propose a machine-learning framework based on tabular reinforcement learning, an alternative approach to the existing multi-armed bandit (MAB) formulation utilized in our previous work. The tabular reinforcement learning framework allows for the systematic generation of a rich set of heuristics, enabling better solver assignment decisions in complex system design.

To evaluate the effectiveness of this approach, we employ an idealized problem domain: the game of golf. Drawing parallels between the golf problem and complex systems, such as problem decomposition and reliance on diverse solvers, we highlight the utility of reinforcement learning in generating a comprehensive set of heuristics for solver assignment. The application of reinforcement learning for solver assignment heuristics in complex system design holds significant promise, and by expanding upon the insights gained from the golf problem, we anticipate the extension of this approach to tackle real-world systems design problems in the future.
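
A minimal tabular value-learning sketch of the general formulation (states = problem types, actions = solver types), reduced here to single-step episodes. The problem types, solver types, and reward table are hypothetical stand-ins, not the golf-domain simulation or the paper's formulation.

```python
import random
from collections import defaultdict

random.seed(0)
problem_types = ["well_decomposed", "coupled"]                            # states (illustrative)
solver_types = ["domain_expert", "adjacent_specialist", "novice_crowd"]   # actions (illustrative)

# Hypothetical expected solution quality for each (problem, solver) pair.
true_reward = {
    ("well_decomposed", "domain_expert"): 0.6, ("well_decomposed", "adjacent_specialist"): 0.5,
    ("well_decomposed", "novice_crowd"): 0.8,
    ("coupled", "domain_expert"): 0.9, ("coupled", "adjacent_specialist"): 0.6,
    ("coupled", "novice_crowd"): 0.3,
}

Q = defaultdict(float)
alpha, epsilon = 0.1, 0.2
for _ in range(5000):
    s = random.choice(problem_types)
    a = (random.choice(solver_types) if random.random() < epsilon
         else max(solver_types, key=lambda act: Q[(s, act)]))
    r = true_reward[(s, a)] + random.gauss(0, 0.1)   # noisy design outcome
    Q[(s, a)] += alpha * (r - Q[(s, a)])             # single-step tabular update

# Learned assignment heuristic: best solver type per problem type.
for s in problem_types:
    print(s, "->", max(solver_types, key=lambda act: Q[(s, act)]))
```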

MicroFlow: Advancing Affective States Detection in Learning through Micro-expressions

ABSTRACT. Gaining a deep understanding of student engagement is essential for designing effective learning experiences. In this study, we propose the MicroFlow framework, inspired by the concept of micro-expressions, to advance the detection of learners' affective states during learning. We collected data from 19 students (54 sessions) during Python programming. We found that models combining micro-expression features with Inter Vector Angles (IVA) demonstrated the highest performance in detecting anxiety and the flow state. The AUC for the flow state improved by 10% (reaching 84%) compared to the action unit (AU) model. For anxiety and boredom, we achieved AUC values of 71% and 70%, respectively. We highlight the feasibility of our framework as a cost-effective tool that enables educators to create a more engaging learning environment by adjusting the complexity level of learners' tasks, ultimately improving learning outcomes.

A Multi-Tiered Methodology for Systems Engineering Graduate Research

ABSTRACT. The thesis and dissertation writing process for any Master's or graduate student internationally is heavily influenced by each individual's advisor or funding source. Throughout their research development process, students might work on multiple projects, contracts, or case studies to fund their degree coursework. To efficiently integrate disparate projects from multiple advisors and funding sources into a thesis or dissertation project, Systems Engineering students should consider a multi-tiered methodology that wraps their stakeholders' projects into tangible results for the overarching research questions driving the investigation. By doing so, a student can meet all stakeholder needs and make simultaneous progress on their degree, thereby reducing their degree completion time and school financing requirements. This poster discusses a multi-tiered methodological approach successfully implemented in the author's PhD dissertation defense for Systems & Industrial Engineering at the University of Arizona. At the first tier of the multi-tiered methodology is the overarching concept of emergence, closely intertwined with the philosophy of holistic systems engineering. On the second tier are independent research activities organized by case studies, formalized Case Study Methodology, and iterative Participatory Action Research. On the third tier is the Object-Oriented Systems Engineering Methodology (OOSEM) for SysML. The resulting dissertation can integrate information from each SysML case study and report new emergent knowledge from the combination of the case study results. Normalizing an OOSEM-based MBSE benefit analysis methodology creates a path forward for students working on their dissertations and enables the development of a researcher's own model ecosystem.

Assessment of mobility decarbonization with low-carbon policies and EV incentives in the US
PRESENTER: Weijie Pan

ABSTRACT. This study investigates the impact of state-level low-carbon energy policies and electric vehicle (EV) incentives on technology choices and EV adoption across the United States. The extent to which inequities in state-level strategies affect national-level decarbonization and long-term planning for power generation capacity expansion during sustainable transitions remains unknown. To address this knowledge gap, a greenhouse gas (GHG) emission-oriented scenario generation method is integrated into the Global Change Analysis Model (GCAM-USA), a climate-economy modeling platform. This study reveals the following findings. First, while EV incentives alone are the most efficient means of facilitating EV adoption, combining these incentives with carbon tax policies proves even more effective at reducing GHG emissions. Second, wind energy and carbon capture and storage (CCS) technologies exhibit substantial potential as energy suppliers for future EV charging infrastructure. Moreover, the analysis highlights differences in state-level electricity demand, emphasizing the importance of investigating inter-state collaborative energy strategies. Such strategies are crucial for reconciling local conditional variances and facilitating the entire decarbonization process in the U.S. Overall, the insights gained from this study can help raise awareness among both federal and state-level policymakers regarding the significance of tailored state strategies for decarbonization and can further help harmonize national-level progress toward climate goals.

Can the Inflation Reduction Act Inflate the Effectiveness of Electric Vehicle Incentives?
PRESENTER: Tianye Wang

ABSTRACT. One of the salient aspects of the Inflation Reduction Act (IRA) enacted in 2022 is to empower actions aimed at ameliorating the detrimental impacts of climate change. While contemporary research has explored the relationships between different climate change policies and CO2 emissions, less is known about the practical effectiveness of an integrated enactment. This study uses the Global Change Assessment Model (GCAM) under different established scenarios to study the projected effectiveness of major provisions of the IRA, such as tax credits for hydrogen, clean energy, and three levels of Electric Vehicles (EV) incentives. Our results show that the clean fuel energy tax credit is the most effective tool to reduce carbon emissions from conventional sources by up to 20% of the 2005 level. Surprisingly, there is no appreciable impact of hydrogen tax credits on CO2 emissions. The analysis also shows that all three levels of EV incentives can achieve more than 10% reduction of the 2005 level. Thus, this analysis demonstrates that EV incentives have the potential to expand reductions even beyond current modeling projections despite the uncertainties surrounding their implementation at the state level.

A State-based Probabilistic Risk Assessment Framework for System-of-Systems Operations
PRESENTER: Sonali Sinha Roy

ABSTRACT. A system of systems or SoS is a collection of systems that can be operated or managed independently while serving a common goal. The individual failure modes of the constituent systems coupled with the interdependencies among them can result in a vast variety of risks that can affect the operations of the SoS. Traditional risk assessment techniques are often inadequate for SoS applications. Therefore, a state-based framework is proposed for the probabilistic risk assessment of systems of systems. This framework leverages Harel statecharts to model the operations of individual systems within the SoS. Each system is decomposed into several operational states; the failure modes are characterized by their probability of occurrence and the primary consequence. The system-level statecharts are contained within an SoS-level model that connects them through logical and temporal operators to simulate functional dependencies among the systems. The SoS-level model can be used for statistical analysis through Monte Carlo simulations. The double-layer (system-level and SoS-level) model allows us to relate system-level risks to SoS-level performance metrics and analyze the sensitivity of these metrics to uncertainties in system-level operations. By creating different SoS-level models, a variety of operational concepts or architectures can be tested and compared. Overall, this framework is capable of generating a holistic risk profile of the SoS operations, thereby providing deeper and richer insights. Our framework has been demonstrated on one part of the Mars Sample Return program. It can also be applied to a variety of problems including national air transportation and defense systems.
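
A minimal Monte Carlo sketch of the double-layer idea: each constituent system either completes its operational phase or fails with some probability, with a crude functional dependency between systems, and an SoS-level metric aggregates the outcomes. The systems, dependency, and probabilities are illustrative assumptions, not the Mars Sample Return figures or the statechart-based model itself.

```python
import random

random.seed(7)

# Illustrative per-system failure probabilities for one operational phase;
# the real framework uses Harel statecharts per system, not a flat table.
systems = {"orbiter": 0.02, "lander": 0.05, "ascent_vehicle": 0.08}

def simulate_sos_mission():
    """Return True if every constituent system completes its phase.
    Functional dependency (crudely modeled): if the lander fails,
    the ascent vehicle cannot operate at all."""
    lander_ok = random.random() > systems["lander"]
    orbiter_ok = random.random() > systems["orbiter"]
    ascent_ok = lander_ok and random.random() > systems["ascent_vehicle"]
    return orbiter_ok and lander_ok and ascent_ok

runs = 100_000
successes = sum(simulate_sos_mission() for _ in range(runs))
print("estimated SoS mission success probability:", successes / runs)
```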

16:15-17:15 Session 6

Keynote I: Optimizing Production for New Energy Products

Jason Crusan (VP New Energy Solutions, Woodside Energy)

 

17:30-20:30

Reception and Dinner at James L. Allen Center

Address: 2169 Campus Drive, Evanston, IL, 60208

Cocktail hour: EMP 24/25 Lounge, 1st Floor North

Dinner: Atrium Dining Room