SEAA 2022: EUROMICRO CONFERENCE ON SOFTWARE ENGINEERING AND ADVANCED APPLICATIONS 2022
PROGRAM FOR WEDNESDAY, AUGUST 31ST

09:00-10:00 Session 2: [Keynote] Dr. Arne Hamann

Title: Designing Reliable Distributed Systems

Abstract: Software is disrupting one industry after another. Currently, the automotive industry is under pressure to innovate in the area of software. New, innovative approaches to vehicles and their HW/SW architectures are required and are currently subsumed under the term “SW-defined vehicle”. However, this trend does not stop at the vehicle boundaries, but also includes communication with off-board edge and cloud services. Thinking it through further, this leads to a breakthrough technology we call “Reliable Distributed Systems”, which enables the operation of vehicles where time- and safety-critical sensing and computing tasks are no longer tied to the vehicle but can be shifted into an edge-cloud continuum. This allows a variety of novel applications and functional improvements, but it also has a tremendous impact on automotive HW/SW architectures and the value chain. Reliable distributed systems are not limited to automotive use cases. The ubiquitous and reliable availability of distributed computing and sensing in real time enables novel applications and system architectures in a variety of domains, from industrial automation and building automation to consumer robotics. However, designing reliable distributed systems raises several issues and poses new challenges for edge and cloud computing stacks as well as for electronic design automation.

10:00-10:30 Coffee Break

Expomeloneras's Hall

10:30-12:00 Session 3A: Data and AI (DAIDE)
10:30
Negative Transfer in Cross Project Defect Prediction: Effect of Domain Divergence

ABSTRACT. Finding software defects is challenging and time-consuming. To aid this process, software quality assurance (QA) teams usually rely on a software defect model to understand which parts of a software module deserve more attention. Developing these prediction models can be challenging when there is little or no historical data. For this reason, researchers often rely on multiple related sources to build defect prediction models. These data are often taken from similar and related projects, but their distributions differ from that of the new software project (target data). If not correctly handled by the model, these distribution differences may lead to negative transfer. To this end, recent works have focused on the model, but little is known about how similar or dissimilar these multiple sources should be to avoid negative transfer. This paper provides the first empirical investigation into the effect of combining different sources with different levels of similarity in cross-project defect prediction (CPDP). Our work introduces the use of the Population Stability Index (PSI) to interpret whether the distribution of the combined or single-source data is similar to the target data. This was validated using an adversarial approach. Experimental results on the AEEEM, NASA and PROMISE datasets reveal that when the distributions of the source and target data are very similar, the probability of false alarm is improved by 3% to 7% and the recall indicator is reduced by 1% to 8%. Interestingly, we also found that when source data with a high PSI (dissimilar to the target) are combined with other source datasets, the overall domain divergence is lowered and the performance is greatly improved. The results highlight the importance of using the right sources to aid the learning process.
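
As a rough, illustrative aside (not taken from the paper): PSI for a single feature can be obtained by binning the source and target values and comparing the bin proportions. The sketch below is a minimal Python version; the feature, the synthetic data, the bin count, and the usual "< 0.1 means similar" reading are assumptions, not details from the study.

```python
# Illustrative sketch (not from the paper): Population Stability Index (PSI)
# between a source-project feature and the same feature in the target project.
import numpy as np

def psi(source: np.ndarray, target: np.ndarray, bins: int = 10) -> float:
    """PSI = sum((p_src - p_tgt) * ln(p_src / p_tgt)) over common bins."""
    # Bin edges are derived from the source distribution (a common convention).
    edges = np.histogram_bin_edges(source, bins=bins)
    src_counts, _ = np.histogram(source, bins=edges)
    tgt_counts, _ = np.histogram(target, bins=edges)
    # Convert to proportions; a small epsilon avoids division by zero / log(0).
    eps = 1e-6
    p_src = np.clip(src_counts / src_counts.sum(), eps, None)
    p_tgt = np.clip(tgt_counts / tgt_counts.sum(), eps, None)
    return float(np.sum((p_src - p_tgt) * np.log(p_src / p_tgt)))

# Hypothetical usage: loc_source/loc_target stand in for a defect-prediction
# metric (e.g., lines of code) from a source project and the target project.
rng = np.random.default_rng(0)
loc_source = rng.lognormal(mean=5.0, sigma=1.0, size=1000)
loc_target = rng.lognormal(mean=5.3, sigma=1.1, size=800)
print(f"PSI = {psi(loc_source, loc_target):.3f}")  # < 0.1 is usually read as 'similar'
```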

10:45
Easing the Reuse of ML Solutions by Interactive Clustering-based Autotuning in Scientific Applications

ABSTRACT. Machine learning techniques have revolutionised scientific software projects. Scientists are continuously looking for novel approaches to production-quality reuse of machine learning solutions and to making them available to other components of a project with satisfactory quality and low cost. However, scientists often have limited knowledge about how to effectively reuse and adjust machine learning solutions in their particular scientific project. One challenge is that many machine learning solutions require parameter tuning based on the input data to achieve satisfactory results, which is difficult and cumbersome for users not familiar with machine learning. Autotuning is a common technique for adjusting parameters based on the data, but it requires a well-defined objective function to optimize for. In exploratory scientific research, such as biological image segmentation tasks, such an objective function is commonly unknown. In this paper, we propose a framework based on a novel combination of autotuning and active learning to ease and partially automate the effort of reusing machine learning solutions for scientists working on biological image segmentation. Underlying this combination is a mapping between an object type and the specific parameters applied during the segmentation process. This mapping is iteratively adjusted by asking users for visual feedback. Through a biological case study, we then demonstrate that our method enables tuning of the segmentation specifically to object types, while the selective requests for user input reduce the number of user interactions required for this task.
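
To make the described feedback loop concrete, here is a minimal, hypothetical sketch of a mapping from object types to segmentation parameters that is adjusted from user feedback. The parameter names, the update rule, and the placeholder functions are all assumptions for illustration, not the authors' framework.

```python
# Illustrative sketch (not the authors' framework): iteratively adjust per-object-type
# segmentation parameters based on user feedback. All names and the update rule are
# hypothetical.
from typing import Dict

# One parameter set per biological object type (hypothetical starting values).
params: Dict[str, Dict[str, float]] = {
    "nucleus": {"threshold": 0.50, "min_size": 30.0},
    "membrane": {"threshold": 0.35, "min_size": 10.0},
}

def segment(image, object_type: str):
    """Placeholder for the actual segmentation step using the current parameters."""
    p = params[object_type]
    return f"segmentation of {object_type} with threshold={p['threshold']:.2f}"

def ask_user_feedback(result) -> str:
    """Placeholder for the visual feedback step; returns 'over', 'under' or 'ok'."""
    return "over"  # e.g., the user judged the result as over-segmented

def tune(image, object_type: str, max_rounds: int = 5, step: float = 0.05) -> None:
    """Adjust the threshold for one object type until the user is satisfied."""
    for _ in range(max_rounds):
        feedback = ask_user_feedback(segment(image, object_type))
        if feedback == "ok":
            break
        # Simple, hypothetical update rule: raise or lower the threshold.
        delta = step if feedback == "over" else -step
        params[object_type]["threshold"] += delta

tune(image=None, object_type="nucleus")
print(params["nucleus"])
```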

11:10
Parallel Instance Filtering for Malware Detection

ABSTRACT. Machine learning algorithms are widely used in the area of malware detection. As the number of samples grows, training classification algorithms becomes more and more expensive. In addition, training data sets may contain redundant or noisy instances. The problem to be solved is how to select representative instances from large training data sets without reducing accuracy. This work presents a new parallel instance selection algorithm called Parallel Instance Filtering (PIF). The main idea of the algorithm is to split the data set into non-overlapping subsets of instances covering the whole data set and to apply a filtering process to each subset. Each subset consists of instances that have the same nearest enemy. As a result, the PIF algorithm is fast, since the subsets are processed independently of each other using parallel computation. We compare the PIF algorithm with several state-of-the-art instance selection algorithms on a large data set of 500,000 malicious and benign samples. The feature set was extracted using static analysis and includes metadata from the portable executable file format. Our experimental results demonstrate that the proposed instance selection algorithm significantly reduces the size of a training data set with only a slight decrease in accuracy. The PIF algorithm outperforms the existing instance selection methods used in the experiments in terms of the ratio between average classification accuracy and storage percentage.
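
A minimal sketch of the grouping idea described above follows: instances are grouped by their nearest enemy and the resulting non-overlapping subsets are filtered in parallel. The per-subset filtering rule shown here is hypothetical, since the abstract does not specify it, and the data are synthetic.

```python
# Illustrative sketch of the PIF grouping idea (the per-subset filtering rule below
# is hypothetical, not the one from the paper).
from concurrent.futures import ProcessPoolExecutor
from collections import defaultdict
import numpy as np

def nearest_enemy(i: int, X: np.ndarray, y: np.ndarray) -> int:
    """Index of the closest instance with a different class label."""
    enemies = np.where(y != y[i])[0]
    dists = np.linalg.norm(X[enemies] - X[i], axis=1)
    return int(enemies[np.argmin(dists)])

def filter_subset(args):
    """Hypothetical filter: keep only the instances closest to the shared enemy."""
    enemy, members, X = args
    dists = np.linalg.norm(X[members] - X[enemy], axis=1)
    keep = np.argsort(dists)[: max(1, len(members) // 2)]  # keep the closer half
    return [members[k] for k in keep]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))                  # e.g., static PE-file features
    y = rng.integers(0, 2, size=200)               # 0 = benign, 1 = malicious
    groups = defaultdict(list)
    for i in range(len(X)):
        groups[nearest_enemy(i, X, y)].append(i)   # non-overlapping subsets
    with ProcessPoolExecutor() as pool:            # subsets filtered in parallel
        kept = pool.map(filter_subset, [(e, m, X) for e, m in groups.items()])
    selected = sorted(i for subset in kept for i in subset)
    print(f"kept {len(selected)} of {len(X)} instances")
```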

11:35
WALTS: Walmart AutoML Libraries, Tools and Services

ABSTRACT. Automated Machine Learning (AutoML) is an emerging field in machine learning (ML) that searches the candidate model space for a given task, dataset and evaluation metric and returns the best-performing model on the supplied dataset according to that metric. AutoML not only reduces the manpower and expertise needed to develop ML models but also substantially decreases their time-to-market. At Walmart, we have designed an enterprise-scale AutoML framework called WALTS to meet the rising demand for ML in the retail business and thus help democratize ML within our organization. In this work, we delve into the design of WALTS from both algorithmic and architectural perspectives. Specifically, we elaborate on how we explore models from a pool of candidates and describe the technology stack we chose to make the whole process scalable and robust. We illustrate the process with the help of a business use case and finally underline how WALTS has impacted our business so far.

10:30-12:00 Session 3B: Human Factors in Software Management (SM)
10:30
"There and Back Again?" On the Influence of Software Community Dispersion Over Productivity

ABSTRACT. Estimating and understanding productivity is still a crucial task for researchers and practitioners. Researchers have spent significant effort identifying the factors that influence software developers' productivity, providing several approaches for analyzing and predicting this metric. Although different works have focused on evaluating the impact of human factors on productivity, little is known about the influence of cultural and geographical diversity in software development communities. Indeed, in previous studies, researchers treated cultural aspects as an abstract concept without providing a quantitative representation. This work provides an empirical assessment of the relationship between the cultural and geographical dispersion of a development community, namely how diverse a community is in terms of the cultural attitudes and geographical collocation of its members, and its productivity. To reach our aim, we built a statistical model that contained product and socio-technical factors as independent variables to assess their correlation with productivity, i.e., the number of commits performed in a given time. We then ran our model on data from 25 open-source communities on GitHub. The results of our study indicate that cultural and geographical dispersion impact productivity, thus encouraging managers and practitioners to consider such aspects during all phases of the software development lifecycle.
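
For readers unfamiliar with this kind of setup, a minimal sketch of the general idea follows: a regression of commit counts on dispersion and control variables. The variable names, the synthetic data, and the choice of OLS are illustrative assumptions, not the authors' exact model.

```python
# Illustrative sketch (not the authors' model): regress a community's commit count on
# hypothetical dispersion and control variables using OLS.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 25  # e.g., 25 open-source communities
cultural_dispersion = rng.uniform(0, 1, n)     # e.g., spread of cultural attitude scores
geo_dispersion = rng.uniform(0, 1, n)          # e.g., spread of member time zones
project_size_kloc = rng.uniform(10, 500, n)    # control: product factor
team_size = rng.integers(5, 80, n)             # control: socio-technical factor
commits = rng.poisson(200 + 100 * team_size / 80, n)  # productivity proxy

X = sm.add_constant(np.column_stack(
    [cultural_dispersion, geo_dispersion, project_size_kloc, team_size]))
model = sm.OLS(commits, X).fit()
print(model.summary())  # the coefficients on the dispersion columns indicate their effect
```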

10:55
STORM: A Model for Sustainably Onboarding Software Testers

ABSTRACT. Recruiting and onboarding software testing professionals are complex and cost-intensive activities. Whether onboarding is successful and sustainable depends on both the employee and the organization and is influenced by a number of often highly individual factors. We therefore propose the Software Testing Onboarding Model (STORM) for sustainably onboarding software testing professionals, based on existing frameworks and models and taking into account onboarding processes, sustainability, and test processes. We provide detailed instructions on how to use the model and apply it to real-world onboarding processes in two industrial case studies.

11:10
On the Role of Personality Traits in Implementation Tasks: A Preliminary Investigation with Students

ABSTRACT. The Software Engineering (SE) research community has shown an increasing interest in peopleware, which refers to anything that has to do with the role of human factors in software development. Individuals' personality is one of the human factors that can affect software development. In this paper, we present the results of a preliminary empirical study to understand whether there is a relationship between personality traits (i.e., openness, conscientiousness, extraversion, agreeableness, and neuroticism), the productivity of undergraduate students in Computer Science (CS), and the internal quality of the programs they developed in an implementation task. In our study, we involved 30 last-year undergraduate students in CS, who had to implement a series of features. Our results suggest that there are correlations between some personality traits (i.e., conscientiousness, extraversion, and neuroticism) and software quality. As for productivity, we could not find any correlation.

11:35
An 80-20 Analysis of Buggy and Non-buggy Refactorings in Open-Source Commits

ABSTRACT. In this short paper, we explore the Pareto principle, sometimes known as the "80-20" rule, as part of the refactoring process. We explore five frequently applied refactorings, namely extract method, extract variable, rename variable, rename method and change variable type, from a data set of forty open-source systems and nearly two hundred thousand refactorings. We address two key research questions. Firstly, do 80% of "buggy" refactorings (where a refactoring has induced a bug fix) arise from just 20% of commits? Secondly, does the same rule apply to "non-buggy" refactorings when applied to the same systems? To facilitate our analysis, we used refactoring and bug data from a study by Di Penta et al. Results showed that refactorings inducing bugs were clustered around a more concentrated set of commits than refactorings that did not induce bugs. One refactoring, change variable type, stood out: it almost conformed to an 80-20 rule. The take-away message is, as the saying goes, that too much of a "good" thing [refactoring] could actually be a "bad" thing.
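
As an illustration of the underlying check (not the authors' analysis script), the sketch below computes the share of buggy refactorings accounted for by the top 20% of commits, using made-up data.

```python
# Illustrative sketch: does the top 20% of commits hold ~80% of buggy refactorings?
from collections import Counter

# Hypothetical input: one commit id per buggy refactoring occurrence.
buggy_refactoring_commits = ["c1", "c1", "c1", "c2", "c2", "c3", "c4", "c4",
                             "c5", "c6", "c6", "c6", "c6", "c7", "c8", "c9"]

counts = Counter(buggy_refactoring_commits)
total = sum(counts.values())
# Rank commits by how many buggy refactorings they contain, most first.
ranked = sorted(counts.values(), reverse=True)
top_20pct = max(1, round(0.2 * len(ranked)))
share = sum(ranked[:top_20pct]) / total
print(f"top 20% of commits account for {share:.0%} of buggy refactorings")
```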

10:30-12:00 Session 3C: Technical Debt 1 (SEaDeM)
10:30
Quantifying TD Interest: Are we Getting Closer, or Not Even That?

ABSTRACT. Despite the attention that Technical Debt has attracted over the last years, the quantification of TD Interest remains rather vague and abstract. TD Interest quantification is hindered by various factors that introduce a lot of uncertainty, such as identifying the parts of the system that will be maintained, quantifying the load of maintenance, and quantifying the size of the maintenance penalty due to the existence of TD. In this study, we aim to shed light on the current approaches for quantifying TD Interest by exploring the existing literature within the TD and Maintenance communities. To achieve this goal, we performed a systematic mapping study on Scopus and explored: (a) the existing approaches for quantifying TD Interest; (b) the existing approaches for estimating Maintenance Cost; and (c) the factors that must be taken into account for their quantification. The broad search process returned more than 1,000 articles, of which only 25 provide well-defined mathematical formulas or equations for the quantification of TD Interest or Maintenance Cost (only 6 of them explicitly for TD Interest). The results suggest that, despite their similarities, the quantification of TD Interest presents additional challenges compared to Maintenance Cost Estimation, making the accurate quantification of TD Interest (at least for the time being) an open and distant research problem. Regarding the factors that need to be considered in such an endeavor, the literature indicates that size, complexity, and business parameters are those most actively associated with TD Interest quantification.

10:55
Exploiting dynamic analysis for architectural smell detection: a preliminary study

ABSTRACT. Architectural anomalies, also known as architectural smells, represent violations of design principles or decisions that impact internal software qualities, with significant negative effects on maintenance and evolution costs and on technical debt. If removed early, architectural smells help reduce progressive architectural erosion and architectural debt. Some tools have been proposed for their detection, exploiting different techniques, usually based only on static analysis. This work analyzes how dynamic analysis can be exploited to detect architectural smells. In particular, we focus on two smells, Hub-Like Dependency and Cyclic Dependency, and we extend an existing tool by integrating dynamic analysis. We conduct an empirical study on ten projects, comparing the results obtained by a method featuring dynamic analysis with those of the original version of Arcan, based only on static analysis, to understand whether dynamic analysis can be successfully used. The results show that dynamic analysis helps identify otherwise missed smell instances, although its usage is hindered by the lack of test suites suitable for this scope.
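
A minimal sketch of how such smells can be flagged once a dependency graph has been reconstructed from execution traces follows; it is not Arcan's implementation, and the trace format and thresholds are assumptions.

```python
# Illustrative sketch (not Arcan): build a dependency graph from execution traces and
# flag Cyclic Dependency and Hub-Like Dependency candidates.
import networkx as nx

# Hypothetical runtime traces: (caller component, callee component) pairs.
traces = [("A", "B"), ("B", "C"), ("C", "A"),   # A -> B -> C -> A is a cycle
          ("D", "B"), ("E", "B"), ("B", "F"), ("B", "G"), ("B", "H")]

g = nx.DiGraph()
g.add_edges_from(traces)

# Cyclic Dependency: any strongly connected component with more than one node.
cycles = [scc for scc in nx.strongly_connected_components(g) if len(scc) > 1]
print("cyclic dependencies:", cycles)

# Hub-Like Dependency: a node with unusually many incoming and outgoing edges
# (the threshold of 3 is arbitrary here).
hubs = [n for n in g.nodes if g.in_degree(n) >= 3 and g.out_degree(n) >= 3]
print("hub-like candidates:", hubs)
```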

11:20
Microservices smell detection through dynamic analysis

ABSTRACT. The past few years saw the rise of microservices studies and best practices, along with wide industrial adoption of this architectural style. We now witness the birth of another challenging topic: microservices quality. Like other kinds of architectures, microservices also suffer from erosion and technical debt, whose symptoms can be the appearance of microservices smells, which negatively impact the system's quality by hindering, for example, its maintainability. In this paper, we propose a tool called Aroma to reconstruct microservices architectures and detect microservices smells, based on the dynamic analysis of microservices execution traces. We describe the main features of the tool, the strategies adopted for microservices smell detection, and a first preliminary experimentation.

11:35
ScrumBut as an Indicator of Process Debt

ABSTRACT. Technical debt analysis is used to detect problems in a codebase. Most technical debt indicators rely on measuring or analysing the code itself. However, developers often incur recurring technical debt as a consequence of bad practices that emerge along evolution cycles. This can happen when project pressure leads to process deviations. In agile practices like Scrum, such deviations are commonly known as ScrumButs, which can be considered a form of process debt. In this paper, we investigate the role of code smells and anti-patterns, two concepts closely related to technical debt, and their impact on process debt and ScrumButs. As a concrete contribution, we report typical ScrumBut practices found in agile projects at a company. Our initial results show a relationship between problems in code and ScrumBut issues as a form of process debt.

12:00-13:00 Session 4: [Keynote] Dr. Heiko Koziolek

Title: Software Architecture Challenges in Industrial Process Automation: from Code Generation to Cloud-native Service Orchestration

Abstract: Large, distributed software systems with integrated embedded systems support production plant operators in controlling and supervising complex industrial processes, such as power generation, chemical refinement, or paper production. With several million lines of code, these Operational Technology (OT) systems grow continuously more complex, while customers increasingly expect a higher degree of automation, easier customization, and faster time-to-market for new features. This has led to an ongoing adoption of modern Information Technology (IT) reference software architectures and approaches, e.g., middlewares, model-based development, and microservices. This talk presents illustrative examples of this trend from technology transfer projects at ABB Research, highlighting open issues and research challenges. These include information modeling in M2M middlewares for plug-and-play functionality, code generation from engineering requirements to speed up customization, and online updates of containerized control software on virtualized infrastructures.

13:00-14:30 Lunch Break

Buffet lunch at Lopesan Baobab Resort.

14:30-16:00 Session 5A: Experimentation and Model Performance (DAIDE)
14:30
Evaluating Simple and Complex Models’ Performance When Predicting Accepted Answers on Stack Overflow

ABSTRACT. Stack Overflow is used to solve programming issues during software development. Research efforts have sought to identify relevant content on this platform. In particular, researchers have proposed various modelling techniques to predict acceptable Stack Overflow answers. Less attention, however, has been dedicated to examining the performance and quality of typically used modelling methods with respect to model and feature complexity. Such insights could be of practical significance to the many practitioners who develop models for Stack Overflow. This study examines the performance and quality of two modelling methods, of varying degrees of complexity, used for predicting acceptable Java and JavaScript answers on Stack Overflow. Our dataset comprised 249,588 posts drawn from the years 2014–2016. Outcomes reveal significant differences in the models' performance and quality given the type of features and the complexity of the models used. Researchers examining model performance, quality and feature complexity may leverage these findings in selecting suitable modelling approaches for Q&A prediction.

14:55
STUN: an Embedding-Based Corpus Comparison Technique for Qualitative User Feedback in A/B Tests

ABSTRACT. Qualitative user feedback can provide valuable insights for A/B tests. However, today we lack a technique for extracting insights and then statistically testing for differences between conditions. In this paper, we present STUN (Statistical Testing on Unstructured Natural language) for the analysis of qualitative user feedback in A/B tests. We use data from real-world, large-scale digital A/B tests to demonstrate its efficacy and utility.
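
As an illustration of the general idea of statistically testing unstructured feedback (not the STUN technique itself), the sketch below runs a permutation test on precomputed feedback embeddings from the two conditions; the embeddings here are random stand-ins.

```python
# Illustrative sketch (not STUN): permutation test on sentence embeddings of user
# feedback from conditions A and B. Embeddings are assumed to be precomputed.
import numpy as np

def statistic(a: np.ndarray, b: np.ndarray) -> float:
    """Distance between the mean embeddings of the two conditions."""
    return float(np.linalg.norm(a.mean(axis=0) - b.mean(axis=0)))

rng = np.random.default_rng(7)
emb_a = rng.normal(0.0, 1.0, size=(120, 32))   # feedback embeddings, condition A
emb_b = rng.normal(0.2, 1.0, size=(110, 32))   # feedback embeddings, condition B

observed = statistic(emb_a, emb_b)
pooled = np.vstack([emb_a, emb_b])
n_a = len(emb_a)
perm_stats = []
for _ in range(1000):                           # shuffle condition labels repeatedly
    idx = rng.permutation(len(pooled))
    perm_stats.append(statistic(pooled[idx[:n_a]], pooled[idx[n_a:]]))
p_value = float(np.mean(np.array(perm_stats) >= observed))
print(f"observed difference = {observed:.3f}, p = {p_value:.3f}")
```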

15:20
EMMM: A Unified Meta-Model for Tracking Machine Learning Experiments

ABSTRACT. Traditional software engineering tools for managing assets, specifically version control systems, are inadequate for managing the variety of asset types used in machine-learning model development experiments. Two possible paths to improving the management of machine learning assets are: 1) adopting dedicated machine-learning experiment management tools, which are gaining popularity for supporting concerns such as versioning, traceability, auditability, collaboration, and reproducibility; 2) developing new and improved version control tools with support for domain-specific operations tailored to machine learning assets. As a contribution to improving asset management on both paths, this work presents the Experiment Management Meta-Model (EMMM), a meta-model that unifies the conceptual structures and relationships extracted from systematically selected machine-learning experiment management tools. We explain the meta-model's concepts and relationships and evaluate it using real experiment data. The proposed meta-model is based on the Eclipse Modeling Framework (EMF), with its meta-modeling language, Ecore, used to encode model structures. Our meta-model can be used as a concrete blueprint for practitioners and researchers to improve existing tools and to develop new tools with native support for machine-learning-specific assets and operations.

15:45
Reducing Experiment Costs in Automated Software Performance Regression Detection

ABSTRACT. In this position paper, we formulate performance regression testing as an automated experimentation problem and focus on the problem of controlling the experiment so as to provide more computation time to experiments that are more likely to detect performance changes. Conversely, this requires detecting and stopping experiments early if they are unlikely to detect any performance changes. To this end, we present a method that uses results from previous performance testing experiments to predict the outcome of new experiments in early stages of their execution.

14:30-16:00 Session 5B: Effort Estimation 1 (SM)
14:30
A Preliminary Conceptualization and Analysis on Automated Static Analysis Tools for Vulnerability Detection in Android Apps

ABSTRACT. The availability of dependable mobile apps is a crucial need for the over three billion people who use apps daily for social and emergency connectivity. A key challenge for mobile developers concerns the detection of security-related issues. While a number of tools have been proposed over the years, especially for the ANDROID operating system, we point out a lack of empirical investigations on the actual support provided by these tools; such investigations might guide developers in selecting the most appropriate instruments to improve their apps. In this paper, we propose a preliminary conceptualization of the vulnerabilities detected by three automated static analysis tools, namely ANDROBUGS2, TRUESEEING, and INSIDER. We first derive a taxonomy of the issues detectable by the tools. Then, we run the tools against a dataset of 6,500 ANDROID apps to investigate their detection capabilities in terms of the frequency with which vulnerabilities are detected and the complementarity among tools. Key findings of the study show that the current tools identify similar concerns but use different naming conventions. Perhaps more importantly, the tools only partially cover the most common vulnerabilities classified by the Open Web Application Security Project (OWASP) Foundation.
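
As a small illustration of what "complementarity among tools" means in practice (not the study's analysis), the sketch below compares the finding sets of two tools with simple set operations; the issue names are hypothetical.

```python
# Illustrative sketch: overlap and complementarity of the vulnerabilities reported by
# two tools on the same app, as set operations. Issue identifiers are hypothetical.
androbugs2 = {"WebView RCE", "World-readable file", "SSL verification disabled"}
trueseeing = {"SSL verification disabled", "Hardcoded key", "Debuggable build"}

both = androbugs2 & trueseeing          # issues both tools agree on
only_first = androbugs2 - trueseeing    # complementary findings of the first tool
only_second = trueseeing - androbugs2   # complementary findings of the second tool
jaccard = len(both) / len(androbugs2 | trueseeing)

print(f"shared: {both}")
print(f"only ANDROBUGS2: {only_first}")
print(f"only TRUESEEING: {only_second}")
print(f"Jaccard overlap: {jaccard:.2f}")
```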

14:55
An Evaluation of Effort-Aware Fine-Grained Just-in-Time Defect Prediction Methods

ABSTRACT. CONTEXT: Software defect prediction (SDP) is an active research topic supporting software quality assurance (SQA) activities. It has been observed that unsupervised prediction models are often competitive with supervised ones in release-level and change-level defect prediction. Fine-grained just-in-time defect prediction focuses on the defective files in a change rather than on the whole change. A recent study showed that fine-grained just-in-time defect prediction is cost-effective in terms of effort-aware performance measures. Those studies did not explore the effectiveness of supervised and unsupervised models at that finer level in terms of effort-aware performance measures. OBJECTIVE: To examine the performance of supervised and unsupervised prediction models in the context of fine-grained defect prediction in terms of effort-aware performance measures. METHOD: Experiments with a time-sensitive approach were conducted to evaluate the predictive performance of supervised and unsupervised methods proposed in past studies. Datasets from OSS projects with manually validated defect links were used. RESULTS: The use of manually validated links led to low performance results. No clear difference between supervised and unsupervised methods was found, although CBS+, a supervised method, was the best method in terms of F-measure. Even CBS+ did not achieve reasonable performance. A non-linear learning algorithm did not help improve performance. CONCLUSION: There is no clear preference between unsupervised and supervised methods. CBS+ was the best method on average. The predictive performance remains a challenge.
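
For context, a minimal sketch of one commonly used effort-aware measure, recall at 20% of inspection effort, follows; the data, the LOC-based effort proxy, and the ranking rule are illustrative assumptions, not the paper's evaluation code.

```python
# Illustrative sketch: recall at 20% of inspection effort, with lines of code (LOC)
# as the effort proxy. All inputs are hypothetical.
def recall_at_effort(scores, loc, defective, effort_ratio=0.2) -> float:
    ranked = sorted(range(len(scores)), key=lambda i: scores[i] / max(loc[i], 1),
                    reverse=True)              # inspect "cheap and risky" files first
    budget = effort_ratio * sum(loc)
    found, spent = 0, 0
    for i in ranked:
        if spent + loc[i] > budget:
            break
        spent += loc[i]
        found += defective[i]
    total_defective = sum(defective)
    return found / total_defective if total_defective else 0.0

# Hypothetical per-file predictions for one change.
scores = [0.9, 0.2, 0.7, 0.1, 0.6]   # predicted defect probability
loc = [120, 300, 80, 500, 60]        # file size as an effort proxy
defective = [1, 0, 1, 0, 0]          # ground truth
print(f"recall@20%effort = {recall_at_effort(scores, loc, defective):.2f}")
```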

15:20
The Impact of Parameters Optimization in Software Prediction Models

ABSTRACT. Several studies have raised concerns about the performance of estimation techniques when employed with the default parameters provided by specific development toolkits, e.g., Weka. In this paper, we evaluate the impact of parameter optimization on nine different estimation techniques in the Software Development Effort Estimation (SDEE) and Software Fault Prediction (SFP) domains to provide more generic findings on the impact of parameter optimization. To this aim, we employ three datasets from the SDEE domain (China, Maxwell, Nasa) and three regression-based datasets from the SFP domain (Ant, Xalan, Xerces). Regarding parameter optimization, we consider four optimization algorithms from different families: Grid Search, Random Search, Simulated Annealing, and Bayesian Optimization. The estimation techniques are: Support Vector Machine, Random Forest, Classification and Regression Tree, Neural Networks, Averaged Neural Networks, k-Nearest Neighbor, Partial Least Squares, MultiLayer Perceptron, and Gradient Boosting Machine. Results reveal that, with both the SDEE and SFP datasets, seven out of nine estimation techniques require optimization/configuration of at least one parameter. In the majority of cases, the parameters of the employed estimation techniques are sensitive to optimization on specific types of data. Moreover, not all parameters need to be optimized, as some of them are not sensitive to optimization.
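
A minimal sketch of what such an optimization looks like for one of the listed techniques (Support Vector Machine, tuned with Random Search) follows; the synthetic data, parameter ranges, and scoring choice are assumptions, not the study's setup.

```python
# Illustrative sketch: compare an SVM with default parameters against one tuned via
# random search on synthetic effort-estimation-like data.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import RandomizedSearchCV, cross_val_score
from sklearn.svm import SVR

X, y = make_regression(n_samples=200, n_features=8, noise=10.0, random_state=0)

default_mae = -cross_val_score(SVR(), X, y, cv=5,
                               scoring="neg_mean_absolute_error").mean()

param_distributions = {
    "C": np.logspace(-2, 3, 50),
    "epsilon": np.linspace(0.01, 1.0, 20),
    "gamma": ["scale", "auto"],
}
search = RandomizedSearchCV(SVR(), param_distributions, n_iter=30, cv=5,
                            scoring="neg_mean_absolute_error", random_state=0)
search.fit(X, y)

print(f"MAE with defaults : {default_mae:.2f}")
print(f"MAE after tuning  : {-search.best_score_:.2f}")
print(f"best parameters   : {search.best_params_}")
```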

15:45
Using COSMIC to measure functional size of software: a Systematic Literature Review

ABSTRACT. COSMIC is a second-generation Functional Size Measurement (FSM) method, widely applied for estimating software development effort. However, it is still used less frequently than first-generation methods (e.g., Function Points Analysis) and is thus less consolidated in the literature. In order to highlight its usefulness, it is essential to summarize the existing evidence on how COSMIC has been employed and how it has performed over the years since its creation, both in academia and in industry.

In this paper, we present a systematic literature review we performed to analyze the studies employing COSMIC to measure the functional size of software. In particular, the aim of our review is to understand and summarize the application of COSMIC as well as to focus on the most frequent techniques used in combination with COSMIC to build software prediction models.

The results reveal that COSMIC is widely used for software development effort estimation, a crucial management task that critically depends on the adopted size measure. The analysis reveals that COSMIC is considered suitable for a broader range of application domains, e.g., Web applications and mobile apps, than first-generation FSM methods like Function Points Analysis and its adaptations/extensions. Furthermore, the review shows that much has also been done to automate the calculation of functional size in terms of COSMIC, starting from the software documentation available in the early phases of the development process. In the direction of simplifying the application of COSMIC, studies have also evaluated the effectiveness of its approximations, e.g., for estimating software development effort.
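
For readers unfamiliar with the method, a minimal sketch based on the publicly documented COSMIC idea follows: the size of a functional process is the count of its data movements (Entry, Exit, Read, Write), one CFP each. The example process is made up and not drawn from any study in the review.

```python
# Illustrative sketch of the COSMIC counting rule: 1 CFP per data movement.
from collections import Counter

# Data movements identified for a hypothetical "Register customer" process.
data_movements = [
    ("Entry", "customer data"),     # E: data crosses the boundary into the software
    ("Read",  "customer registry"), # R: data read from persistent storage
    ("Write", "customer registry"), # W: data written to persistent storage
    ("Exit",  "confirmation"),      # X: data sent back across the boundary
]

cfp = len(data_movements)  # functional size in COSMIC Function Points
print(f"functional size = {cfp} CFP ({dict(Counter(t for t, _ in data_movements))})")
```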

14:30-16:00 Session 5C: Technical Debt 2 (SEaDeM)
14:30
The Impact of Forced Working-From-Home on Code Technical Debt: An Industrial Case Study

ABSTRACT. Background: The COVID-19 outbreak interrupted regular activities for over a year in many countries and resulted in a radical change in ways of working for software development companies, i.e., most software development companies switched to a forced Working-From-Home (WFH) mode. Aim: Although several studies have analysed different aspects of forced WFH mode, it is unknown whether and to what extent WFH impacted the accumulation of technical debt (TD) when developers had different ways to coordinate and communicate with peers. Method: Using the year 2019 as a baseline, we carried out an industrial case study to analyse the evolution of TD in five components of a large project during WFH. As part of the data collection, we carried out a focus group with developers to explain the different patterns observed in the quantitative data analysis. Results: TD accumulated at a slower pace during WFH than during the working-from-office period in four components out of five. These differences were found to be statistically significant. Through the focus group, we identified different factors that might explain the changes in TD accumulation. One of these factors is responsibility diffusion, which seems to explain why TD grew faster during the WFH period in one of the components. Conclusion: The results suggest that the change between working from the office and working from home does not in itself result in an increased accumulation of TD.

14:55
Adopting DevOps Paradigm in Technical Debt Prioritization and Mitigation

ABSTRACT. The constantly growing amount of software in use, accompanied by a huge amount of technical debt, is gradually raising concern in the industry. New technologies and software development processes become yet another degree of freedom boosting complexity. As software development and delivery techniques evolve, the technical debt perspective should follow. Taking into account all software artefacts enabling value delivery to customers, and embracing the DevOps paradigm and its holistic focus on the software development lifecycle, the strategy presented in this paper enabled the stabilization of a large telecommunication software system after a set of consecutive complex merges. The research question of this paper looks for evidence on whether prioritization of technical debt mitigation efforts brings a faster return on investment. A two-year-long case study focused on technical debt prioritization and mitigation, conducted on this software system, resulted in improved quality and stabilization of feature development efforts (cost- and time-based). The tangible gains from applying this approach comprise an over 50% decrease in stability issues, screening improved by over 30%, and six times better predictability of delivery time (reducing the allocation of stabilization effort and time).

15:20
Timing is Everything! A Test and Production Class View of Self-Admitted Technical Debt

ABSTRACT. In this short paper, we investigate whether the "time of day" when recognised changes are made to code influences the self-admission of technical debt (SATD). We look at this question from a test and production class perspective. We examine whether there is a specific time of day when technical debt is "self-admitted" more frequently and whether there are any similarities in this sense between test and production classes. We also analyse whether class complexity makes a difference to SATD occurrence. To facilitate our analysis, we used as a basis a data set of over 300k changes developed by Riquet et al. Results suggest that a lower proportion of SATD occurs in afternoons as opposed to mornings and that class complexity has a significant say in the role and application of SATD.
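
As an illustration of how such a morning/afternoon comparison can be tested (not the authors' analysis), the sketch below applies a two-proportion z-test to hypothetical SATD counts.

```python
# Illustrative sketch: compare the proportion of changes with self-admitted technical
# debt in morning vs. afternoon commits. The counts below are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

satd_counts = [420, 310]        # changes with SATD: [morning, afternoon]
total_changes = [5000, 4800]    # all changes:       [morning, afternoon]

stat, p_value = proportions_ztest(count=satd_counts, nobs=total_changes)
rates = [c / n for c, n in zip(satd_counts, total_changes)]
print(f"morning rate = {rates[0]:.3f}, afternoon rate = {rates[1]:.3f}, p = {p_value:.4f}")
```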

15:35
Technical Debt Management in Automotive Software Industry

ABSTRACT. The suppliers of software-intensive electronic automotive components are facing technical challenges due to the innovation rush and growing time pressure from customers. As the quality of on-board automotive electronic systems strongly depends on the quality of their development practices, car manufacturers and suppliers proactively focus on improving technical and organizational processes. For more than a decade, Automotive SPICE (ASPICE) has been the reference standard for assessing and improving automotive electronics processes and projects in this setting. As car manufacturers use ASPICE to qualify their suppliers of software-intensive systems, the standard has become a market demand. ASPICE is so widespread today that it has shaped how many automotive suppliers conduct software development projects. This paper identifies and discusses the benefits and impact of integrating and harmonizing Technical Debt Management in an ASPICE-compliant software development project. In addition, with the support of BPMN-based graphs, this paper provides a conceptual framework and a reference process description for the integration of ASPICE and Technical Debt Management practices in a sample of Software Engineering processes.

16:00-16:30 Coffee Break

Expomeloneras's Hall