SEAA 2022: EUROMICRO CONFERENCE ON SOFTWARE ENGINEERING AND ADVANCED APPLICATIONS 2022
PROGRAM FOR THURSDAY, SEPTEMBER 1ST

09:00-10:00 Session 7: [Keynote] Prof. Helena Holmström Olsson

Title: From Traditional to Digital: How software, data and AI are transforming the embedded systems industry

Abstract: With digitalization and with technologies such as software, data, and artificial intelligence, companies in the embedded systems domain are experiencing a rapid transformation of their conventional businesses. While the physical products and associated product sales provide the core revenue, these are increasingly being complemented with service offerings, new data-driven services, and digital products that allow for continuous value creation and delivery to customers. This talk explores the difference between what constitutes a traditional and a digital company and details the typical evolution path embedded systems companies take when transitioning towards becoming digital companies. The talk focuses on the changes associated with business models, ways-of-working and ecosystem engagements and provides concrete examples based on action-oriented research conducted in close collaboration with companies in the embedded systems domain.

10:00-10:30 Coffee Break

Expomeloneras's Hall

10:30-12:00 Session 9A: ML Reviews and Deep Learning (DAIDE & AI4DevOps)
10:30
Maintainability Challenges in ML: A Systematic Literature Review

ABSTRACT. Background: As Machine Learning (ML) advances rapidly in many fields, it is being adopted by academics and businesses alike. However, ML has a number of maintenance challenges not found in traditional software projects. Identifying what causes these maintainability challenges can help mitigate them early and continue delivering value in the long run without degrading ML performance. Aim: This study aims to identify and synthesise the maintainability challenges in different stages of the ML workflow and understand how these stages are interdependent and impact each other's maintainability. Method: Using a systematic literature review, we screened more than 13000 papers, and then selected and qualitatively analysed 56 of them. Results: (i) a catalogue of maintainability challenges in different stages of the Data Engineering and Model Engineering workflows, together with the current challenges when building ML systems; (ii) a map of 13 maintainability challenges to different interdependent stages of ML that impact the overall workflow; and (iii) insights for developers of ML tools and researchers. Conclusions: Through this study, practitioners and organisations will learn about maintainability challenges and their impact at different stages of the ML workflow. This will enable them to avoid pitfalls and help them build maintainable ML systems. The implications and challenges will also serve as a basis for future research to strengthen our understanding of the maintainability of ML systems.

10:55
Deep Reinforcement Learning in a Dynamic Environment: A Case Study in the Telecommunication Industry

ABSTRACT. Reinforcement learning, particularly deep reinforcement learning, has made remarkable progress in recent years and is now used not only in simulators and games but is also making its way into embedded systems as another software-intensive domain. However, when implemented in a real-world context, reinforcement learning is typically shown to be fragile and incapable of adapting to dynamic environments. In this paper, we provide a novel dynamic reinforcement learning algorithm for adapting to complex industrial situations. We apply and validate our approach using a telecommunications use case. The proposed algorithm can dynamically adjust the position and antenna tilt of a drone-based base station to maintain reliable wireless connectivity for mission-critical users. When compared to traditional reinforcement learning approaches, the dynamic reinforcement learning algorithm improves the overall service performance of a drone-based base station by roughly 20%. Our results demonstrate that the algorithm can quickly evolve and continuously adapt to the complex dynamic industrial environment.
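As a rough illustration of the kind of control loop involved (not the paper's algorithm), the toy sketch below lets an agent adjust a drone base station's position and antenna tilt using a stateless, bandit-style value update in Python; the actions, reward, and learning setup are invented stand-ins for the deep reinforcement learning approach described in the abstract.

    import random

    # Hypothetical action set: move the drone or tilt its antenna.
    ACTIONS = ["move_north", "move_south", "move_east", "move_west",
               "tilt_up", "tilt_down", "hold"]

    def service_quality_after(action):
        # Placeholder for the measured service performance (e.g., throughput
        # or coverage of mission-critical users) observed after the action.
        return random.random()

    q_values = {a: 0.0 for a in ACTIONS}
    alpha, epsilon = 0.1, 0.2   # learning rate and exploration probability

    for step in range(1000):
        if random.random() < epsilon:
            action = random.choice(ACTIONS)            # explore
        else:
            action = max(q_values, key=q_values.get)   # exploit best-known action
        reward = service_quality_after(action)
        q_values[action] += alpha * (reward - q_values[action])

    print("preferred action:", max(q_values, key=q_values.get))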

11:20
Comparing Input Prioritization Techniques for Testing Deep Learning Algorithms

ABSTRACT. Deep learning (DL) components are becoming an essential part of software systems, so it is necessary to test them thoroughly. This is a challenging task since test sets can grow over time as new data is acquired, making testing time-consuming. Input prioritization is necessary to reduce the testing time, since prioritized test inputs are more likely to reveal the erroneous behavior of a DL system earlier during test execution. This study compares different input prioritization techniques regarding their effectiveness and efficiency. This work considers surprise adequacy, autoencoder-based, and similarity-based input prioritization approaches in the example of testing DL image classification algorithms applied to the MNIST, Fashion-MNIST, CIFAR-10, and STL-10 datasets. To measure effectiveness and efficiency, we use a modified APFD (Average Percentage of Faults Detected) and setup & execution time, respectively. We observe that surprise adequacy is the most effective (0.785 to 0.914 APFD). The autoencoder-based and similarity-based techniques are less effective, with performance from 0.532 to 0.744 APFD and 0.579 to 0.709 APFD, respectively. In contrast, the similarity-based and surprise adequacy-based approaches are the most and least efficient, respectively. The findings in this work demonstrate the trade-off between the considered input prioritization techniques, helping to understand their practical applicability for testing DL algorithms.
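For readers unfamiliar with the metric, the sketch below computes the standard APFD formula over a ranked list of test inputs in Python, treating each misclassified input as one revealed fault; the paper uses a modified APFD, so this is only an assumed baseline form.

    def apfd(ranked_reveals_fault):
        # ranked_reveals_fault: booleans in prioritized order, True if the input
        # at that rank exposes erroneous behavior (here: a misclassification).
        n = len(ranked_reveals_fault)
        fault_positions = [i + 1 for i, f in enumerate(ranked_reveals_fault) if f]
        m = len(fault_positions)
        if n == 0 or m == 0:
            return None
        return 1.0 - sum(fault_positions) / (n * m) + 1.0 / (2 * n)

    print(apfd([True, True, False, False, False]))   # 0.80: failures ranked first
    print(apfd([False, False, False, True, True]))   # 0.20: failures ranked last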

11:45
A Multivocal Literature Review of MLOps Tools and Features

ABSTRACT. DevOps has become increasingly widespread, with companies employing its methods in different fields. In this context, MLOps automates Machine Learning workflows by applying DevOps practices. Despite the high number of tools available and practitioners' strong interest in tool support for automating the steps of Machine Learning pipelines, little is known about MLOps tools and their functionalities. To this aim, we conducted a Multivocal Literature Review (MLR) to (i) extract tools that allow for and support the creation of MLOps pipelines and (ii) analyze their main characteristics and features to provide a comprehensive overview of their value. Overall, we investigate the functionalities of 13 MLOps tools. Our results show that most MLOps tools support the same features but apply different approaches that can bring different advantages, depending on user requirements.

10:30-12:00 Session 9B: Effort Estimation 2 (SM)
10:30
Analyzing Programming Effort Model Accuracy of High-Level Parallel Programs for Stream Processing

ABSTRACT. Over the years, several Parallel Programming Models (PPMs) have supported the abstraction of programming complexity for parallel computer systems. However, few studies aim to evaluate the productivity achieved by such abstractions, since this is a complex task that involves human beings. Software engineering research has produced several predictive methods to estimate the effort required to program applications. To evaluate the reliability of such metrics, it is necessary to assess their accuracy in different programming domains. In this work, we used data from an experiment conducted with beginners in parallel programming to determine the effort required for implementing stream parallelism using FastFlow, SPar, and TBB. Our results show that some traditional software effort estimation models, such as COCOMO II, fall short, while Putnam's model could be an alternative for evaluating high-level PPMs. To overcome the limitations of existing models, we plan to create a parallelism-aware model to evaluate applications in this domain in future work.
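For orientation, the sketch below evaluates textbook nominal forms of the two effort models mentioned above in Python; the coefficients are illustrative defaults, not the calibrations or data used in the paper.

    def cocomo2_effort(ksloc, scale_factor_sum=18.97, effort_multipliers=1.0,
                       A=2.94, B=0.91):
        # COCOMO II post-architecture model: effort in person-months.
        exponent = B + 0.01 * scale_factor_sum
        return A * (ksloc ** exponent) * effort_multipliers

    def putnam_effort(sloc, productivity_param=10000, dev_time_years=1.0):
        # Putnam's model: Size = C * Effort^(1/3) * t_d^(4/3), solved here for
        # Effort (in person-years).
        return (sloc / (productivity_param * dev_time_years ** (4.0 / 3.0))) ** 3

    print(cocomo2_effort(2.0))    # e.g., a ~2 KSLOC stream-parallel program
    print(putnam_effort(2000))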

10:45
Effort Prediction with Limited Data: A Case for Data Warehouse Projects

ABSTRACT. Organizations may create a sustainable competitive advantage over competitors by using data warehouse (DWH) systems, with which they can assess the current status of their operations at any moment and analyze trends and connections using up-to-date data. However, data warehouse projects tend to fail more often than other projects, as it can be tough to estimate the effort required to build a data warehouse system. Functional size measurement is one of the methods used as an input for estimating the amount of work in a software project. In this study, we formed a measurement basis for DWH projects in an organization based on the COSMIC Functional Size Measurement Method. We mapped COSMIC rules onto two different architectures used for DWH projects in the organization and measured the size of the projects. We calculated the productivity of the projects and compared them with the organization's previous projects and DWH projects in the ISBSG repository. We could not create an organization-wide effort estimation model as we had a limited number of projects. As an alternative, we evaluated the success of effort estimation using DWH projects in the ISBSG repository. We also reported the challenges we faced during the size measurement process.
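As a minimal illustration of the measurement principle (the functional processes below are hypothetical, not those of the studied DWH projects), COSMIC sizes each functional process by counting its data movements, one CFP each:

    # Each functional process is sized as the number of its data movements:
    # Entry, Exit, Read, and Write each count as 1 CFP.
    functional_processes = {
        "load_customer_dimension": {"Entry": 1, "Read": 2, "Write": 1, "Exit": 1},
        "build_sales_fact":        {"Entry": 1, "Read": 3, "Write": 1, "Exit": 1},
    }

    size_cfp = sum(sum(movements.values())
                   for movements in functional_processes.values())
    print(f"Total functional size: {size_cfp} CFP")   # 11 CFP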

11:10
Utilization of Three Software Size Measures for Effort Estimation in Agile World: A Case Study

ABSTRACT. Functional size measurement (FSM) methods, by being systematic and repeatable, are beneficial in the early phases of the software life cycle for core project management activities such as effort, cost, and schedule estimation. However, in agile projects, requirements are kept minimal in the early phases and are detailed over time as the project progresses. This makes it challenging to identify the measurement components of FSM methods from requirements in the early phases, hence complicating the application of FSM in agile projects. In addition, existing FSM methods are not fully compatible with today's architectural styles, which are evolving into event-driven decentralized structures. In this study, we present the results of a case study to compare the effectiveness of different size measures, namely functional (COSMIC Function Points, CFP), event-based (Event Points), and code-length-based (Lines of Code, LOC), on projects that were developed with agile methods and utilized a microservice-based architecture. For this purpose, we measured the size of the projects and created effort estimation models based on the three methods. We found that the event-based method estimated effort with better accuracy than the CFP- and LOC-based methods.
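The kind of size-based effort model built in the study could, in its simplest form, be a regression of effort on measured size, one model per size measure; the sketch below (with invented numbers) illustrates this for CFP in Python.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_percentage_error

    size_cfp = np.array([[120], [85], [200], [60], [150]])   # COSMIC Function Points
    effort_hours = np.array([900, 610, 1550, 430, 1120])     # invented actual efforts

    model = LinearRegression().fit(size_cfp, effort_hours)
    predicted = model.predict(size_cfp)
    print(f"MMRE-style error: {mean_absolute_percentage_error(effort_hours, predicted):.2f}")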

10:30-12:00 Session 9C: MDE (MDEML)
10:30
Web-Based Tracing for Model-Driven Applications

ABSTRACT. Logging is still a core functionality used to understand the behavior of programs and executable models. Yet, modeling languages rarely consider logging as a first-level activity that is manifested in the language through modeling elements or their behavior. When logging is only part of the code generated for the respective models or of the corresponding runtime environment, it must be generic, as the modeler cannot influence, through the models, what is logged and when. To enable modelers to log model behavior, we devised a method based on language extension and smart code generation that can integrate logging into arbitrary textual modeling languages. Based on this method, log entries can be produced, traced, and presented through a web application. This method and its infrastructure can facilitate lifting logging to the model level and, hence, improve the understanding of executable models.

10:55
Handling Environmental Uncertainty in Design Time Access Control Analysis

ABSTRACT. The high complexity, connectivity, and data exchange of modern software systems make it crucial to consider confidentiality early. A commonly used mechanism to ensure confidentiality is access control, which can already be analyzed when the system is modeled at design time. This enables early identification of confidentiality violations and the ability to analyze the impact of what-if scenarios. However, due to the abstract view of the design-time model and the ambiguity in the early stages of development, uncertainties exist in the system environment. These uncertainties can have a direct effect on the validity of the access control attributes in use, which might result in compromised confidentiality.

To handle such known uncertainty, we present a notion of confidence in the context of design time access control. We define confidence as a composition of known uncertainties in the environment of the system, which influence the validity of access control attributes. We extend an existing modeling and analysis approach for design time access control with our notion of confidence. For evaluation, we apply the notion of confidence to multiple real-world case studies and discuss the resulting benefits for different stages of system development. We also analyze the expressiveness of the extended approach in defining confidentiality constraints and measure the accuracy in identifying confidentiality violations. Our results show that using the notion of confidence increases expressiveness while being able to accurately identify access control violations.

11:20
Model-Driven Optimization: Generating Smart Mutation Operators for Multi-Objective Problems

ABSTRACT. In search-based software engineering (SBSE), the choice of search operators can significantly impact the quality of the obtained solutions and the efficiency of the search. Recent work in the context of combining SBSE with model-driven engineering has investigated the idea of automatically generating smart search operators for the case at hand. While showing improvements, this previous work focused on single-objective optimization, a restriction that prohibits broader use in many SBSE scenarios. Furthermore, since it did not allow users to customize the generation, it could miss out on useful domain knowledge that may further improve the quality of the generated operators. To address these issues, we propose a customizable framework for generating mutation operators for multi-objective problems. It generates mutation operators in the form of model transformations that can modify solutions represented as instances of the given problem meta-model. To this end, we augment an existing framework in two main directions: first, we extend the generation procedure to support multi-objective problems; second, we provide support for customization based on domain knowledge, including the capability to specify manual "baseline" operators that are refined during the operator generation. Our evaluation based on the Next Release Problem shows that the automated generation of mutation operators and user-provided domain knowledge can improve the performance of the search without sacrificing the overall result quality.
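To make the setting concrete, the toy sketch below shows a conventional, hand-written mutation operator for a bi-objective Next Release Problem in Python; in the paper, such baseline operators are expressed as model transformations over a problem meta-model and refined by the generation procedure, which this sketch does not reproduce.

    import random

    def mutate(solution, p=0.1):
        # solution: one 0/1 flag per candidate requirement; flip each with probability p.
        return [bit ^ 1 if random.random() < p else bit for bit in solution]

    def objectives(solution, cost, value):
        # Bi-objective NRP: minimize total cost, maximize total value
        # (value returned negated so both objectives are minimized).
        total_cost = sum(c for c, s in zip(cost, solution) if s)
        total_value = sum(v for v, s in zip(value, solution) if s)
        return total_cost, -total_value

    solution = [1, 0, 1, 1, 0]
    print(mutate(solution))
    print(objectives(solution, cost=[3, 5, 2, 4, 1], value=[7, 6, 3, 8, 2]))  # (9, -18)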

11:45
A Context-Driven Modelling Framework for Dynamic Authentication Decisions

ABSTRACT. Nowadays, many mechanisms exist to perform authentication, such as text passwords and biometrics. However, reasoning about their relevance (e.g., the appropriateness for security and usability) regarding the contextual situation is challenging for authentication system designers. In this paper, we present a Context-driven Modelling Framework for dynamic Authentication decisions (CoFrA), where the context information specifies the relevance of authentication mechanisms. CoFrA is based on a precise metamodel that reveals framework abstractions and a set of constraints that specify their meaning. Therefore, it provides a language to determine the relevant authentication mechanisms (characterized by properties that ensure their appropriateness) in a given context. The framework supports the adaptive authentication system designers in the complex trade-off analysis between context information, risks and authentication mechanisms, according to usability, deployability, security, and privacy. We validate the proposed framework through case studies and extensive exchanges with authentication and modelling experts. We show that model instances describing real-world use cases and authentication approaches proposed in the literature can be instantiated validly according to our metamodel. This validation highlights the necessity, sufficiency, and soundness of our framework.

12:00-13:00 Session 10: [Keynote] Prof. Martin Schoeberl

Title: Open-Source Research on Time-predictable Computer Architecture

Abstract: Real-time systems need time-predictable computers to be able to guarantee that computation can be performed within a given deadline. For worst-case execution time analysis we need detailed knowledge of the processor and memory architecture. Providing the design of a processor in open source enables the development of worst-case execution time analysis tools without the unsafe reverse engineering of processor architectures. Open-source software is currently the basis of many Internet services, e.g., an Apache web server running on top of Linux with a web application written in Java. Furthermore, for most programming languages in use today, there are open-source compilers available. However, hardware designs are seldom published in open source. Furthermore, many artifacts developed in research, especially hardware designs, are not published in open source. The two main arguments formulated against publishing research in open source are: (1) “When I publish my source before the paper gets accepted, someone may steal my ideas” and (2) “My code is not pretty enough to publish it, I first need to clean it up (which seldom happens)”. In this paper and in the presentation, I will give counterarguments for those two issues. I will present the successful T-CREST/Patmos research project, where almost all artifacts have been developed in open source from day one. Furthermore, I will present experiences using the Google/Skywater open-source tool flow to produce a Patmos chip with 12 students within a one-semester course.

13:00-14:30 Lunch Break

Buffet lunch at Lopesan Baobab Resort.

14:30-16:00 Session 11A: Cloud, Web, and Process and Product Improvement (CNADO & SPPI)
14:30
Anomaly Detection in Cloud-Native Systems

ABSTRACT. Cloud-native systems are commonly deployed on private clouds. Since private clouds have limited resources, the systems should run efficiently by keeping performance-related anomalies under control. The goal of this work is to understand whether a set of five performance-related KPIs depends on the metrics collected at runtime by Kafka, Zookeeper, and other tools (168 different metrics). We considered four weeks' worth of runtime data collected from a system running in production. We trained eight Machine Learning algorithms on three weeks' worth of data and tested them on one week's worth of data to compare their prediction accuracy and their training and testing time. It is possible to detect performance-related anomalies with a very high level of accuracy (higher than 95% AUC) and with very limited training time (between 8 and 17 minutes). Machine Learning algorithms can help to identify runtime anomalies and to detect them efficiently. Future work will include the identification of a proactive approach to recognize the root cause of the anomalies and to prevent them as early as possible.
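A minimal sketch of the prediction setup described above, assuming a CSV export of the runtime metrics and a binary KPI-derived anomaly flag (column names are hypothetical); the paper compares eight algorithms, of which a random forest is just one:

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score

    df = pd.read_csv("runtime_metrics.csv", parse_dates=["timestamp"])
    train = df[df["timestamp"] < "2022-01-22"]    # first three weeks
    test = df[df["timestamp"] >= "2022-01-22"]    # fourth week

    features = [c for c in df.columns if c not in ("timestamp", "kpi_anomaly")]
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(train[features], train["kpi_anomaly"])

    auc = roc_auc_score(test["kpi_anomaly"], clf.predict_proba(test[features])[:, 1])
    print(f"AUC: {auc:.3f}")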

14:45
An Empirical Analysis of Microservices Systems Using Consumer-Driven Contract Testing

ABSTRACT. Testing has a prominent role in revealing faults in software based on microservices. One of the most important discussion points in microservice architectures (MSAs) is the granularity of services, often at different levels of abstraction. Similarly, the granularity of tests in MSAs is reflected in different test types. However, it is challenging to conceptualize how the overall testing architecture comes together when combining testing at different levels of abstraction for microservices. There is no empirical evidence on the overall testing architecture in such microservices implementations. Furthermore, there is a need to empirically understand how the current state of practice resonates with existing best practices on testing. In this study, we mine GitHub to find candidate projects for an in-depth, qualitative assessment of their test artifacts. We find 132 repositories that use microservices and include various test artifacts. We focus on four projects that use consumer-driven contract testing. Our results demonstrate how these projects cover different levels of testing. This study (i) drafts a testing architecture including activities and artifacts, and (ii) demonstrates how these align with best practices and guidelines. Our proposed architecture helps the categorization of system and test artifacts in empirical studies of microservices. Finally, we showcase a view of the boundaries between different levels of testing in systems using microservices.
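To illustrate the core idea of consumer-driven contract testing (independent of any specific tool such as Pact), the sketch below encodes a consumer's expected interaction as data and replays it against the provider; endpoint and field names are made up.

    import requests

    # Contract published by the consumer: the interaction it depends on.
    contract = {
        "request": {"method": "GET", "path": "/orders/42"},
        "response": {"status": 200, "required_fields": ["id", "status", "total"]},
    }

    def verify_provider(base_url):
        # Replayed in the provider's test suite against a running test instance.
        r = requests.request(contract["request"]["method"],
                             base_url + contract["request"]["path"])
        assert r.status_code == contract["response"]["status"]
        body = r.json()
        missing = [f for f in contract["response"]["required_fields"] if f not in body]
        assert not missing, f"provider broke the contract, missing fields: {missing}"

    # verify_provider("http://localhost:8080")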

15:10
Towards the Generation of Robust E2E Test Cases in Template-based Web Applications

ABSTRACT. Capture and Replay techniques provide a well-known solution for E2E testing of Web applications. They allow a tester to generate test scripts without requiring advanced programming skills. For this reason, they are very popular in acceptance and regression testing activities. These techniques suffer from the fragility of the produced test cases, which may break even when small changes are made to the GUI without modifying the application's functionality. To overcome this issue, several approaches for either generating robust test cases or automatically repairing broken test cases have been proposed. In this paper we propose an alternative solution that aims at improving the testability of Web applications for generating robust test cases. This solution applies to Web apps developed with template-based technologies. It is based on the automatic injection of additional hook attributes into the template source code and on a new type of locator based on such hooks. These locators aid the unique retrieval of the GUI items involved in test cases. We validated our technique in the context of a continuous integration and delivery process for template-based web applications that was developed from scratch. The study showed that the use of hook-based locators can improve the robustness of test cases generated by a Capture & Replay testing tool, introducing relevant savings in the regression test case repairing activity.
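The difference between a layout-dependent locator and a hook-based one can be sketched as follows with Selenium in Python; the attribute name data-test-hook and the page are hypothetical, not the injection mechanism or locator syntax used in the paper.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://example.org/checkout")

    # Fragile locator: breaks as soon as the surrounding markup changes.
    # driver.find_element(By.XPATH, "/html/body/div[2]/form/div[3]/button[1]")

    # Hook-based locator: resolves the element through an attribute injected into
    # the template, e.g. <button data-test-hook="submit-order">...</button>.
    driver.find_element(By.CSS_SELECTOR, "[data-test-hook='submit-order']").click()

    driver.quit()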

15:35
Towards Perspective-Based Specification of Machine Learning-Enabled Systems

ABSTRACT. Machine learning (ML) teams often work on a project only to realize that the performance of the model is not good enough. The success of these systems involves aligning data with business problems, translating them into ML tasks, experimenting with algorithms, evaluating models, and capturing data from users, among other activities. Literature has shown that ML-enabled systems are rarely built based on precise specifications for such concerns, leading ML teams to become misaligned due to incorrect assumptions, which may affect the quality of such systems and overall project success. To help address this issue, we propose and evaluate a perspective-based approach for specifying ML-enabled systems. The approach involves analyzing a validated set of ML concerns grouped into five perspectives: objectives, user experience, infrastructure, model, and data. We report a case study applying our specification approach retroactively to two industrial ML projects with the aim of validating the specifications and gathering feedback from six experienced software professionals involved in these projects. The case study indicates that the approach can be useful in practice, particularly helping to reveal important requirements that would have been missed without using the approach. Hence, it can help requirements engineers to specify ML-enabled systems by providing an overview of validated perspectives and concerns that should be analyzed together with business owners, data scientists, software engineers, and designers.

15:50
KennyRiMr: An Eclipse Plug-in to Improve Correctness of Rename Method Refactoring in Java

ABSTRACT. Rename Instance Method Refactoring (RiMr) is a behavior-preserving code transformation that changes the name of a non-static method declaration along with its references (i.e., method calls) while preserving all method bindings over an entire program. RiMr checks a set of preconditions to ensure that the original method bindings will be preserved after rename. Only when all preconditions are satisfied, are the method declaration and references transformed. Schafer et al., however, found a decade ago that RiMr offered by Java Integrated Development Environment (IDE) tools may change existing method bindings due to incorrect precondition checks, which consequently cause program behavior changes. Surprisingly, we found that none of the current Java IDEs have corrected those flaws in their RiMr preconditions. We created a Java RiMr tool (called KennyRiMr) as an Eclipse JDT plug-in that addresses the method rebinding issues in RiMr. We verified the correctness of KennyRiMr with thirteen non-trivial programs in terms of precondition checks and code transformations. Our experiments demonstrated that KennyRiMr fixed all known flaws in RiMr preconditions, requiring merely a few more seconds to process the additional precondition checks that we introduced. With KennyRiMr, correctness remains consistent no matter how large the program is.

14:30-16:00 Session 11B: Mining (STREAM)
14:30
Service Classification through Machine Learning: Aiding in the Efficient Identification of Reusable Assets in Cloud Application Development

ABSTRACT. Developing software based on services is one of the most prominent emerging programming paradigms in software development. Service-based software development relies on the composition of services (i.e., pieces of code already built and deployed in the cloud) through orchestrated API calls. Black-box reuse can play a prominent role when using this programming paradigm, in the sense that identifying and reusing already existing/deployed services can save substantial development effort. According to the literature, identifying reusable assets (i.e., components, classes, or services) is more successful and efficient when the discovery process is domain-specific. To facilitate domain-specific service discovery, we propose a service classification approach that can categorize services into an application domain, given only the service description. To validate the accuracy of our classification approach, we have trained a machine-learning model on thousands of open-source services and tested it on 67 services developed within two companies employing service-based software development. The study results suggest that the classification algorithm can perform adequately on a test set that does not overlap with the training set, thus being (with some confidence) transferable to other industrial cases. Additionally, we expand the body of knowledge on software categorization by highlighting sets of domains that constitute 'grey zones' in service classification.
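A minimal sketch of classifying services into application domains from their descriptions alone, in the spirit of the approach above (the actual model, features, and domain taxonomy in the paper may differ; the examples are invented):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    descriptions = [
        "Send and track parcel shipments with delivery notifications",
        "Process card payments and issue refunds",
        "Book hotel rooms and manage reservations",
        "Authorize recurring subscription charges",
    ]
    domains = ["logistics", "payments", "travel", "payments"]

    classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    classifier.fit(descriptions, domains)
    print(classifier.predict(["Charge a customer's credit card"]))  # likely 'payments'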

14:55
Applicability of Software Reliability Growth Models to Open Source Software

ABSTRACT. Software reliability growth models (SRGMs) are based on underlying assumptions which make them typically more suited for quality evaluation of closed-source projects and their development lifecycles. Their usage in open-source software (OSS) projects is a subject of debate. Although studies investigating the applicability of SRGMs in the OSS context do exist, they are limited in the number of models and projects considered, which might lead to inconclusive results. In this paper, we present an experimental study of the applicability of SRGMs to a total of 88 OSS projects, comparing nine SRGMs and looking at the stability of the best models on whole projects, on releases, on different domains, and according to different project attributes. With the aid of the STRAIT tool, we automated repository mining, data processing, and SRGM analysis for reproducibility. Overall, we found good applicability of SRGMs to OSS, but with different performance when segmenting the dataset into releases and domains and when considering project attributes. This suggests that the search for one-size-fits-all models is unrealistic, and that the characteristics of projects and bug-fixing processes should instead guide the prediction of applicable models.
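As a small illustration of what fitting an SRGM to mined defect data looks like, the sketch below fits the classic Goel-Okumoto model m(t) = a(1 - e^(-bt)) to an invented cumulative defect curve in Python; the paper compares nine models via the STRAIT tool.

    import numpy as np
    from scipy.optimize import curve_fit

    def goel_okumoto(t, a, b):
        # Expected cumulative number of defects detected by time t.
        return a * (1.0 - np.exp(-b * t))

    weeks = np.arange(1, 13)
    cumulative_defects = np.array([5, 12, 18, 25, 29, 34, 37, 40, 42, 43, 44, 45])

    (a, b), _ = curve_fit(goel_okumoto, weeks, cumulative_defects, p0=(50.0, 0.1))
    print(f"expected total defects a = {a:.1f}, detection rate b = {b:.3f}")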

15:20
Software Reuse and Evolution in JavaScript Applications

ABSTRACT. JavaScript (JS) is one of the most popular programming languages on GitHub. Most JavaScript applications reuse third-party components to acquire various functionalities. Despite the benefits offered by software reuse, there are still challenges during the evolution of JavaScript applications related to the management and maintenance of third-party dependencies. Our key objective is to explore the evolution of library dependency constraints in the context of JavaScript applications in terms of (a) changeability (i.e., the number of removed, added, or maintained libraries) and (b) the update frequency of the library dependencies. For this purpose, we conducted a case study on the 86 most forked JavaScript applications hosted on GitHub and analyzed reuse data from a total of 2,363 successive releases. In general, 39% of the packages introduced in the first version of a project are reused over the entire project's lifetime. The number of package dependencies slightly grows over time, while several others are permanently removed. Regarding the evolution of third-party dependencies, we observe that developers do not update the dependency constraints to the most recent version, probably waiting to reach “breaking points” when the updates become inevitable.

15:45
Regularity or Anomaly? On The Use of Anomaly Detection for Fine-Grained JIT Defect Prediction

ABSTRACT. Fine-grained just-in-time defect prediction aims at identifying likely defective files within new commits pushed by developers onto a shared repository. Most of the techniques proposed in the literature are based on supervised learning, where machine learning algorithms are fed with historical data. One of the limitations of these techniques concerns the use of imbalanced data that only contain a few defective samples to enable a proper learning phase. To overcome this problem, recent work has shown that anomaly detection methods can be used as an alternative to supervised learning, given that these do not necessarily need labelled samples. We aim at assessing how anomaly detection methods can be employed for the problem of fine-grained just-in-time defect prediction. We conduct an empirical investigation on 32 open-source projects, designing and evaluating three anomaly detection methods for fine-grained just-in-time defect prediction. However, our results are negative: anomaly detection methods, taken alone, do not outperform existing machine learning solutions.
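One way an anomaly detector can be used for this task (a sketch of the general idea, not the three methods designed in the paper; feature names are hypothetical) is to learn from the files changed by past commits and flag files in a new commit that look anomalous:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Rows: files changed in historical commits.
    # Columns: lines added, lines deleted, prior changes of the file, author experience.
    history = np.array([
        [10, 2, 30, 120],
        [4, 1, 12, 80],
        [7, 3, 25, 200],
        [3, 0, 8, 60],
    ])
    detector = IsolationForest(contamination=0.1, random_state=0).fit(history)

    new_commit_files = np.array([[250, 90, 2, 5]])   # unusually large change to a rarely touched file
    print(detector.predict(new_commit_files))        # -1 flags the file as anomalous (likely defective)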

14:30-16:00 Session 11C: MDE and Architectures (MDEML & SMSE)
14:30
Search Budget in Multi-Objective Refactoring Optimization: a Model-Based Empirical Study

ABSTRACT. Software model optimization is the task of automatically generating design alternatives, usually to improve quantifiable quality aspects of software, such as performance and reliability. In this context, multi-objective optimization techniques have been applied to help the designer find suitable trade-offs among conflicting non-functional properties. In this process, design alternatives can be generated through automated model refactoring and evaluated on non-functional models. Due to their complexity, these optimization tasks require considerable time and resources, often limiting their application in software engineering processes.

In this paper, we investigate the effects of using a search budget, specifically a time limit, on the search for new solutions. We performed experiments to quantify the impact that a change in the search budget may have on the quality of solutions. Furthermore, we analyzed how different genetic algorithms (i.e., NSGA-II, SPEA2, and PESA2) perform when imposing different budgets. We experimented on two case studies of different size, complexity, and domain.

We observed that imposing a search budget considerably deteriorates the quality of the generated solutions, but the specific algorithm chosen seems to play a crucial role. In our experiments, NSGA-II is the fastest algorithm, while PESA2 generates solutions of the highest quality. In contrast, SPEA2 is the slowest algorithm and produces the solutions with the lowest quality.

14:55
Synthesis of Pareto-optimal Policies for Continuous-Time Markov Decision Processes

ABSTRACT. We present a work-in-progress method for the synthesis of continuous-time Markov decision process (CTMDP) policies—an important problem not handled by current probabilistic model checkers. The policies synthesised by this method correspond to configurations of software systems or software controllers of cyber-physical systems (CPS) that satisfy predefined nonfunctional constraints and are Pareto-optimal with respect to a set of optimisation objectives. We illustrate the effectiveness of our method by using it to synthesise optimal configurations for a client-server system, and optimal controllers for a driver-attention management CPS.

15:20
UMLsec4Edge: Extending UMLsec to model data-protection-compliant edge computing systems

ABSTRACT. Edge computing enables the processing of data – frequently personal data – at the edge of the network. For personal data, legislation such as the European General Data Protection Regulation requires data protection by design. Hence, data protection has to be accounted for in the design of edge computing systems whenever personal data is involved. This leads to specific requirements for modeling the architecture of edge computing systems, e.g., representation of data and network properties. To the best of our knowledge, no existing modeling language fulfils all these requirements. In our previous work we showed that the commonly used UML profile UMLsec fulfils some of these requirements, and can thus serve as a starting point. The aim of this paper is to create a modeling language which meets all requirements concerning the design of the architecture of edge computing systems accounting for data protection. Thus, we extend UMLsec to satisfy all requirements. We call the resulting UML profile UMLsec4Edge. We follow a systematic approach to develop UMLsec4Edge. We apply UMLsec4Edge to real-world use cases from different domains, and create appropriate deployment diagrams and class diagrams. These diagrams show how UMLsec4Edge is capable of meeting the requirements.

15:45
Sustainability in Software Architecture: A Systematic Mapping Study

ABSTRACT. Sustainability is an increasingly studied topic in software engineering in general, and in software architecture in particular. There are already a number of secondary studies addressing sustainability in software engineering, but no such study focusing explicitly on software architecture. This work aims to fill this gap by conducting a systematic mapping study on the intersection between sustainability and software architecture research with the intention of (i) reflecting on the current state of the art, and (ii) identifying the needs for further research. Our results show that, overall, existing works have focused disproportionately on specific aspects of sustainability, and in particular on the most technical and "inward facing" ones. This comes at the expense of the holistic perspective required to address a multi-faceted concern such as sustainability. Furthermore, more reflection-oriented research works, and better coverage of the activities in the architecting life cycle, are required to further the maturity of the area. Based on our findings we then propose a research agenda for sustainability-aware software architecture.

16:00-16:30 Coffee Break

Expomeloneras's Hall

17:00-19:00 Social Event

At the social event, we will show you the Canarian culture and its pre-Hispanic origins at the Mundo Aborigen park. Visitors are welcomed into a traditional aboriginal town at an outstanding location outside of the touristic area. Finally, we will admire the ravine of Fataga, which is part of the Gran Canaria World Biosphere Reserve declared by UNESCO. The return is at 19:00, so you will have free time to get ready for the Social Dinner at 20:00h at the Lopesan Villa del Conde Resort & Thalasso.

19:45-22:30 Social Dinner

Social dinner at 20:00 h at the Lopesan Villa del Conde Resort & Thalasso, including a traditional Canarian music concert.