SEAA 2022: EUROMICRO CONFERENCE ON SOFTWARE ENGINEERING AND ADVANCED APPLICATIONS 2022
PROGRAM FOR FRIDAY, SEPTEMBER 2ND

10:00-11:00 Session 13: [Keynote] Prof. Marisa Lopez-Vallejo

Title: Looking for the limits of electronics for autonomous microsystems

Abstract: Autonomous microsystems are microscale systems that do not need external power to operate and communicate for a given period of time. If we can build autonomous microsystems even with dimensions as small as the diameter of a human hair (< 100 μm), new use cases for sensing applications could be addressed. For example, microsensors could be embedded into fibers to produce smart clothing, new approaches to in-vitro and in-body sensing could be performed, etc. This keynote will address the challenges that electronic circuits must meet to be part of and support the design and integration of autonomous microsystems.

11:00-11:30 Coffee Break

Expomeloneras's Hall

11:30-13:00 Session 14A: Testing (SPPI)
11:30
Have Java Production Methods Co-Evolved With Test Methods Properly?: A Fine-Grained Repository-Based Co-Evolution Analysis

ABSTRACT. Any source code of a software product (production code) is expected to be tested to ensure its correct behavior. Whenever a developer updates production code, the developer should also update or create the corresponding test code to check that the updated parts still work correctly. Such a desirable co-evolution relationship between production and test code forms a logical coupling throughout the code change history. Although this logical coupling is detectable through an association analysis on a code repository such as Git, the detection granularity is coarse because the conventional repository is at the file level. To observe those logical couplings as precisely as possible, this paper utilizes the finer-grained, Java method-level repository (FinerGit) rather than the conventional file-level one (Git). The paper then proposes a metric measuring the extent to which a production method has co-evolved with test methods and conducts a case study using ten open-source projects to examine how the proposed metric works. The results show that most Java methods (98% on average) have properly co-evolved with test methods, but some have not. The proposed metric helps detect those Java methods at risk of not being tested adequately by the developers.

11:45
Change-Aware Regression Test Prioritization using Genetic Algorithms

ABSTRACT. Regression testing is a practice aimed at providing confidence that, within software maintenance, the changes in the code base have introduced no faults in previously validated functionalities. With the software industry shifting towards iterative and incremental development with shorter release cycles, the straightforward approach of re-executing the entire test suite on each new version of the software is often unfeasible due to time and resource constraints. In such scenarios, Test Case Prioritization (TCP) strategies aim at providing an effective ordering of the test suite, so that the tests that are more likely to expose faults are executed earlier and fault detection is maximised even when test execution needs to be abruptly terminated due to external constraints.

In this work, we propose Genetic-Diff, a TCP strategy based on a genetic algorithm featuring a specifically designed crossover operator and a novel objective function that combines code coverage metrics with an analysis of changes in the code base. We empirically evaluate the proposed algorithm on several releases of three heterogeneous, real-world, open-source Java projects, into which we artificially injected faults, and compare the results with other state-of-the-art TCP techniques using fault-detection rate metrics. Findings show that the proposed technique generally performs better than the baselines, especially when there is a limited amount of code changes, which is a common scenario in modern development practices.
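The abstract does not detail Genetic-Diff's operators; as a purely illustrative sketch of the general idea behind GA-based, change-aware test case prioritization (all names and the fitness function below are hypothetical, not the authors' design), a minimal permutation genetic algorithm might look like:

```python
import random

def fitness(order, coverage, changed_lines):
    """Toy objective: reward orderings that cover changed lines early.
    coverage maps test name -> set of covered line ids."""
    remaining = set(changed_lines)
    score, n = 0.0, len(order)
    for pos, test in enumerate(order):
        newly_covered = coverage[test] & remaining
        score += len(newly_covered) * (n - pos)  # earlier positions weigh more
        remaining -= newly_covered
    return score

def prioritize(tests, coverage, changed_lines,
               generations=50, pop_size=20, seed=0):
    """Evolve permutations of the test suite with a cut-point crossover
    that preserves each test exactly once, plus a swap mutation."""
    rng = random.Random(seed)
    pop = [rng.sample(tests, len(tests)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: fitness(o, coverage, changed_lines),
                 reverse=True)
        survivors = pop[: pop_size // 2]  # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(tests))
            prefix = a[:cut]
            # crossover: keep a's prefix, fill the rest in b's order
            child = prefix + [t for t in b if t not in prefix]
            if rng.random() < 0.2:  # occasional swap mutation
                i, j = rng.sample(range(len(tests)), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda o: fitness(o, coverage, changed_lines))
```

For example, with `coverage = {"t1": {1, 2}, "t2": {3, 4}, "t3": {5}}` and changed lines `{3, 4}`, only `t2` touches the changed code, so the evolved ordering should schedule it first.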

12:10
An Evaluation of General-Purpose Static Analysis Tools on C/C++ Test Code

ABSTRACT. In recent years, maintaining test code quality has gained more attention due to increased test automation and the growing focus on the issues this process can introduce.

Test code may become long and complex, but maintaining its quality is mostly a manual process that may not scale in large software projects. Moreover, bugs in test code may give a false impression about the correctness or performance of the production code. Static program analysis (SPA) tools are nowadays used to maintain the quality of software projects. However, these tools are either not used to analyse test code, or any analysis results on the test code are suppressed. This is especially true since SPA tools are not tailored to generate precise warnings on test code.

This paper investigates the use of SPA on test code by employing three state-of-the-art static analysers on a curated set of projects used in the industry and a random sample of relatively popular and large open-source C/C++ projects. We have found a number of built-in code checking modules, commonly called checkers, that can detect quality issues in the test code. However, these checkers need some tailoring to obtain relevant results. We observed design choices in test frameworks that raise noisy warnings in analysers and propose a set of augmentations to the checkers or the analysis framework to obtain precise warnings from static analysers.

12:35
Investigating the Adoption of History-based Prioritization in the Context of Manual Testing in a Real Industrial Setting

ABSTRACT. Many test case prioritization techniques have been proposed with the ultimate goal of speeding up fault detection. History-based prioritization, in particular, has been shown to be an effective strategy. Most of the empirical evaluation conducted on this topic, however, has focused on the context of automated testing. Investigating the effectiveness of history-based prioritization in the context of manual testing is important because, despite the popularity of automated approaches, manual testing is still widely adopted in industry. In this work, we propose two history-based prioritization heuristics and evaluate them in the context of manual testing in a real industrial setting. The results of our experiments, using historical test execution data from real subjects and with real faults, show that the effectiveness of the proposed heuristics is not far from that of a theoretically optimal prioritization, and that they are significantly better than alternative orderings of the test suite, including the order suggested by the test management tool and the execution order followed by the testers during the real execution of the test suites evaluated as part of our study.
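As a hedged illustration of the general idea (not the two specific heuristics proposed in the paper), a history-based prioritization heuristic can be as simple as ordering tests by exponentially decayed past failures:

```python
def prioritize_by_history(executions, decay=0.8):
    """executions: (test_name, failed) tuples from oldest to newest.
    Each failure contributes decay**age, so recent failures dominate;
    tests with the highest score are scheduled first."""
    scores = {}
    n = len(executions)
    for i, (test, failed) in enumerate(executions):
        scores.setdefault(test, 0.0)
        if failed:
            scores[test] += decay ** (n - 1 - i)  # age 0 = most recent run
    return sorted(scores, key=scores.get, reverse=True)
```

With the history `[("login", False), ("search", True), ("login", True), ("export", False)]`, `login` (a recent failure) ranks ahead of `search` (an older failure), with never-failing `export` last.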

11:30-13:00 Session 14B: CPS 1 (CPS)
11:30
Exploring the impact of scenario and distance information on the reliability assessment of multi-sensor systems

ABSTRACT. With the phenomenal growth of self-driving technologies, the reliability analysis of automated driving systems has received considerable attention from both academia and industry. Safety of the intended functionality (SOTIF) serves as one of the primary standards to assure the reliability and safety of the automated driving system. One of its key issues is the performance limitations of perception sensor systems. Generally, the reliability of the perception sensor system depends on the different scenarios of the driving environment. In this work, we investigate the sensor features and dependencies of the front camera and the top LiDAR of the nuTonomy scenes (nuScenes) dataset with respect to scenarios (e.g., rain and night) and distance information (e.g., two distance-based regions of interest). In addition, we apply the obtained parameters to a proven analytical reliability model to examine the impact of scenario and distance information on the reliability assessment.

11:55
An Industrial Experience Report about Challenges from Continuous Monitoring, Improvement, and Deployment for Autonomous Driving Features

ABSTRACT. Using continuous development, deployment, and monitoring (CDDM) to understand and improve applications in a customer's context is widespread for non-safety applications such as smartphone apps or web applications, where it enables rapid and innovative feature improvements. Having demonstrated its potential in such domains, CDDM may also improve software development for automotive functions, as some OEMs have described at a high level in their financial communiqués. However, applying a CDDM strategy also faces challenges regarding process adherence and documentation, as required for safety-related products such as autonomous driving systems (ADS) and as guided by industry standards such as ISO 26262 [1] and ISO 21448 [2]. Existing publications on CDDM in safety-relevant contexts either address safety-critical functions on a rather generic level, and thus not specifically ADS or automotive, or concentrate only on software, thereby missing the particular context of an automotive OEM: well-established legacy processes and the need for their adaptation, and aspects originating from the role of system integrator for software/software, hardware/hardware, and hardware/software. In this paper, particular challenges from the automotive domain in adopting CDDM are identified and discussed to shed light on research gaps and to enhance CDDM, especially for the software development of safe ADS. The challenges are identified from today's well-established industrial ways of working through interviews with domain experts, complemented by a literature study.

12:20
Risk and Engineering Knowledge Integration in Cyber-physical Production Systems Engineering

ABSTRACT. In agile Cyber-physical Production Systems (CPPS) engineering, multidisciplinary teams work concurrently and iteratively on various CPPS engineering artifacts, based on engineering models and Product-Process-Resource (PPR) knowledge to design and build a production system. However, in such settings it is difficult to keep track of (i) the effects of changes across engineering disciplines, and (ii) their implications on risks to engineering quality, represented in Failure Mode and Effects Analysis (FMEA). To tackle these challenges and systematically co-evolve FMEA and PPR models, it is necessary to propagate and validate changes across engineering and FMEA artifacts. To this end, we design and evaluate a Multi-view FMEA+PPR (MvFMEA+PPR) meta-model to represent relationships between FMEA elements and CPPS engineering assets and trace their change states and dependencies in the design and validation lifecycle. We evaluate the MvFMEA+PPR meta-model in a feasibility study on the quality of a screwing process from automotive production. The study results indicate the MvFMEA+PPR meta-model to be more effective than alternative traditional approaches.

12:45
Bad Smells in Industrial Automation: Sniffing out Feature Envy

ABSTRACT. Bad Smells are sub-optimal software structures or patterns. They can hinder the understandability of a software system and cause maintenance issues. As a result, it is critical to avoid Bad Smells. While the subject is well researched in software engineering, it remains an unresolved issue in industrial automation, e.g., when developing control software in the context of a cyber-physical production system (CPPS). In this short paper, we present possible detection methods for Feature Envy, a smell that indicates poor modularization of a software system. We explain how these methods can be applied to analyze control software developed in IEC 61499. We present first results as well as next steps.

11:30-13:00 Session 14C: SLRs 1 (SMSE)
11:30
Aligning Platform Ecosystems Through Product Roadmapping: Systematic Mapping Study and Research Agenda

ABSTRACT. By providing a digital infrastructure, platform technologies foster interfirm collaboration between loosely coupled companies, enabling the formation of ecosystems and building the organizational structure for value co-creation. Despite the known potential, the development of platform ecosystems creates new sources of complexity and uncertainty due to the involvement of various independent actors. For a platform ecosystem to be successful, it is essential that the ecosystem participants are aligned, coordinated, and given a common direction. Traditionally, product roadmaps have served these purposes during product development. To gain a better understanding of how product roadmapping could be used in the dynamic environment of platform ecosystems, a systematic mapping study was conducted. One result of the study is that there are hardly any concrete approaches for product roadmapping in platform ecosystems so far. However, many challenges on the topic are described in the literature from different perspectives. Based on the results of the systematic mapping study, a research agenda for product roadmapping in platform ecosystems is derived and presented.

11:55
How are software datasets constructed in Empirical Software Engineering studies? A systematic mapping study

ABSTRACT. Context: Software projects are common inputs in Empirical Software Engineering (ESE) studies, although they are often selected with ad-hoc strategies that reduce the generalizability of the results. An alternative is the usage of available datasets of software projects, which should be current and follow explicit rules for ensuring their validity over time. Goal: In this context, it is important to assess the general state of software datasets in terms of purpose, validity, project characterization, source code metrics, and tools to extract source-code-related artifacts, among others. Method: We conducted a systematic mapping study retrieving software datasets used in ESE studies published from January 2013 to December 2021. Results: We selected 74 datasets created mainly for software defects, software estimation, and software maintainability studies. The majority of these datasets (64%) explicitly stated the characteristics to select the projects, and the most common programming languages were Java and C. Conclusions: Our study identified scarce efforts to keep datasets updated over time and provides a set of recommendations to support their construction and consumption for ESE studies.

12:20
API Deprecation: A Systematic Mapping Study

ABSTRACT. Application Programming Interfaces (APIs) are the prevalent interaction method for software modules, components, and systems. As systems and APIs evolve, an API element may be marked as deprecated, indicating that its use is disapproved or that the feature will be removed in an upcoming version. Consequently, deprecation is a means of communication between developers and, ideally, complemented by further documentation, including suggestions for the developers of the API's clients. API deprecation is a relatively young research area that recently gained traction among researchers. To identify the current state of research as well as to identify open research areas, a meta-study that assesses scientific studies is necessary. Therefore, this paper presents a systematic mapping study on API deprecation to classify the state of the art and identify gaps in the research field. We identified and mapped 36 primary studies into a classification scheme comprising general and API-specific categories. Furthermore, we identified five major gaps in previous research on API deprecation.

12:45
Automotive Service-oriented Architectures: a Systematic Mapping Study

ABSTRACT. Service-oriented architectures are emerging as a promising solution to deal with the increasing complexity of automotive software systems. In this paper, we conduct a systematic mapping study to investigate the use of service-oriented architecture for the development of automotive software systems. This study aims at providing publication trends, available architectural solutions, core benefits, and open challenges of automotive service-oriented architectures. From an initial set of 341 peer-reviewed publications, we select 28 primary studies, which are classified and analysed using a systematic and comprehensive protocol. Using the extracted data, we provide both quantitative and qualitative analyses, applying vertical and orthogonal analysis. The results indicate that there has been a significant increase in the number of publications recently, and that the studies focused on defining functionalities and data flows among them. Functional suitability is found to be the most recognised benefit, while security, safety, and reliability are the most addressed challenges when utilising service-oriented architectures in the automotive domain.

13:00-14:30 Lunch Break

Buffet lunch at Lopesan Baobab Resort.

14:30-16:00 Session 15A: Agile and Embedded (SPPI)
14:30
Towards Secure Agile Software Development Process: A Practice-Based Model

ABSTRACT. Agile methods are a well-established paradigm in the software development field. Agile adoption has contributed to improving software quality. However, software products are vulnerable to security challenges and susceptible to cyberattacks. This study aims to improve the security of software products when using an agile software development process. A multi-methods qualitative research approach was adopted in this study. First, we conducted semi-structured interviews with 23 agile practitioners having varied years of cybersecurity experience. An approach informed by grounded theory methodology was adopted for data analysis. Second, we developed a novel practice-based agile software development process model derived from the results of the data analysis. Third, we validated the model through a focus group comprising five senior agile cybersecurity professionals to evaluate its relevancy and novelty. The study identified 26 security practices, organized into six software development life-cycle phases: planning, requirements, design, implementation, testing, and deployment. We mapped the practices onto four swim lanes, each representing an agile role. The self-organizing team is exclusively involved in three security practices, the security specialist in nine, and the penetration tester in one, while the DevOps team collaborates with the security specialist on one. A further seven practices are performed collaboratively by the self-organizing team and the security specialist. Each of the practices in the model was examined during the validation phase of the study. This study makes two contributions. First, the paper proposes a novel practice-based model comprising 26 security practices mapped to agile roles. Second, we propose a new practice, in response to an observed lack of collaborative ceremonies, to disseminate awareness of, and hence compliance with, security standards.

14:55
Agile Enterprise Transformations: Surveying the Many Facets of Agility for the Hybrid Era

ABSTRACT. Agile companies are not uniform. Consequently, agile transformations are conceived broadly, ranging from adopting agile methods and practices in software development teams or functions to building all-encompassing enterprise agility. Moreover, the targeted effects of agility may vary, and the success of transformations and the attainment of agility are measured in various ways. In this paper, based on a recent industrial survey study, we scrutinize holistically why companies want to transform, what types of agility they are aiming at, and how they gauge transformations. The survey data was collected during the COVID-19 pandemic in 2020. Most of the respondents were in large or very large companies in Finland and Sweden in diverse industry domains. The main findings indicate that there are many reasons for companies to transform, both to improve external outcomes (foremost responsiveness) and to develop internal capabilities (adaptability, organizational learning). Companies seemed to have aims and goals with respect to all types of agility, including business agility. As the nature of transformations and the companies' aims and goals vary, the transformations follow various means and measures. In conclusion, for the hybrid era, we advise companies to consider how agility has benefited them during the pandemic, how hybrid work may affect the goals for agile transformations and the different facets of agility, and how to sustain agility in hybrid work.

15:10
Living in a Pink Cloud or Fighting a Whack-a-Mole? On the Creation of Recurring Revenue Streams in the Embedded Systems Domain

ABSTRACT. For companies in the embedded systems domain, digitalization and digital technologies allow endless opportunities for new business models and continuous value delivery. While physical products still provide the core revenue, these are rapidly being complemented with offerings that allow for recurring revenue and that are based on software, data, and artificial intelligence (AI). However, while new digital offerings allow for fundamentally new and recurring revenue streams and continuous value delivery to customers, creating these proves to be a challenging endeavour. In this paper, we study how companies explore ways to create new or additional value with the intention to complement their product portfolio with offerings that allow for recurring revenue. First, based on multi-case study research, we identify the key challenges that companies in the embedded systems domain experience and derive four organizational patterns that we see slowing down innovation. Second, we present a framework outlining alternative types of offerings to customers. Third, we provide a value taxonomy in which we detail the different types of offerings and the value these provide to customers. For each value offering, we indicate whether this offering is (1) static or evolving, (2) bundled or unbundled, (3) free or monetized, and we provide examples from the case companies we studied.

15:35
The Role Of Post-Release Software Traceability in Release Engineering: A Software-Intensive Embedded Systems Case Study From The Telecommunications Domain

ABSTRACT. Modern release engineering practices such as continuous integration and delivery have allowed software development companies to transition from a long release cycle to a shorter one. The shorter release cycle has led to more software releases being available to customers. At the same time, companies developing high-volume software-intensive embedded systems often deliver patch releases and maintenance releases on top of major and minor releases to customers, who pick and choose what releases apply to them and decide when to upgrade the system, if to upgrade at all. While release engineering has been studied before in web-based, desktop-based, and embedded software, the focus has been on pre-release activities. Few studies have investigated what happens after the release, particularly the role of tracing software from release to deployment in high-volume software-intensive embedded systems. We conducted a qualitative case study at a multi-national telecommunications systems provider focusing on Radio Access Network (RAN) software to address this gap. RAN software is complex, large-scale embedded software used in mobile network base stations (BS), providing software functionality for RAN mobile technologies ranging from 2G to 5G. Our study sheds light on post-release software traceability and how it is used in the release engineering process.

14:30-16:00 Session 15B: CPS 2 (CPS)
14:30
RIPOSTE: A Collaborative Cyber Attack Response Framework for Automotive Systems

ABSTRACT. The automotive domain has seen its share of advancements in information and communication technology, providing more services and leading to more connectivity. However, more connectivity and openness raise cyber security and safety concerns. Indeed, services that depend on online connectivity can serve as entry points for attacks on different assets of the vehicle. This study explores collaborative ways of selecting response techniques to counter real-time cyber attacks on automotive systems. The aim is to mitigate attacks more quickly than a single vehicle could on its own, increasing the survivability chances of the collaborating vehicles. To achieve this, the design science research methodology is employed. As a result, we present RIPOSTE, a framework for collaborative real-time evaluation and selection of suitable response techniques when an attack is in progress. We evaluate the framework from a safety perspective by conducting a qualitative study involving domain experts. The proposed framework is deemed slightly unsafe, and insights into how to improve its overall safety are provided.

14:55
Metamorphic Testing in Autonomous System Simulations

ABSTRACT. Metamorphic testing has proven to be effective for test case generation and fault detection in many domains. It is a software testing strategy that uses certain relations between input-output pairs of a program, referred to as metamorphic relations. In this paper, we provide an overview of metamorphic testing as well as an implementation in the autonomous systems domain. We implement an obstacle detection and avoidance task in autonomous drones utilising the GNC API alongside our Gazebo-based simulation. In particular, we describe a general approach to developing metamorphic relations, and we apply four metamorphic relations to the metamorphic testing of single and multiple drones simultaneously. Our relations reveal several properties, and some weak spots, of both the implementation and the avoidance algorithm. The results indicate that this testing strategy shows great potential in the autonomous systems domain and should be recommended to developers in this field.
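To illustrate the strategy for readers new to it (a generic textbook-style example, unrelated to the paper's drone implementation), a metamorphic relation checks a property linking outputs across related inputs instead of requiring an exact expected output for any single input:

```python
import math
import random

def check_metamorphic_relations(f, trials=100, seed=42, tol=1e-9):
    """Check two classic relations for a sine implementation:
    f(pi - x) == f(x) and f(-x) == -f(x).  A violation signals a
    fault without needing an oracle for any individual input."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.uniform(-10.0, 10.0)
        if abs(f(math.pi - x) - f(x)) > tol:   # supplementary-angle relation
            return False
        if abs(f(-x) + f(x)) > tol:            # oddness relation
            return False
    return True
```

`check_metamorphic_relations(math.sin)` passes, while a faulty variant such as `lambda x: math.sin(x) + 1e-3` violates the oddness relation and is rejected, even though no test case ever states what `sin(x)` "should" be.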

15:20
End-to-end Timing Model Extraction from TSN-Aware Distributed Vehicle Software

ABSTRACT. Timing requirements in predictable distributed embedded software systems can be verified by performing end-to-end timing analysis. To perform such an analysis, end-to-end timing information should be extracted from the software architectures of these systems. In this paper, we first present a comprehensive end-to-end timing model of distributed embedded software systems that use Time Sensitive Networking (TSN) for network communication. This is the first timing model that captures comprehensive timing information of TSN as part of the end-to-end timing model. We then present a systematic automated method to extract the end-to-end timing model from the software architectures of distributed embedded systems. As a proof of concept, we implement the timing model and extraction method in an industrial component model, namely the Rubus Component Model (RCM), and in its tool chain. We evaluate the proposed method and demonstrate its usability on an industrial use case from the vehicular domain.

15:35
Mitigating Risk in Neural Network Classifiers

ABSTRACT. Deep Neural Network (DNN) classifiers have been shown to perform remarkably well on a set of problems which seem to require skills that are natural and intuitive to humans. Recently these classifiers have been deployed in safety-critical applications including autonomous driving. For such systems to be trusted it is necessary for us to demonstrate that the risk factors associated with neural network classification have been appropriately considered and sufficient risk mitigation has been employed. Traditional DNNs fail to explicitly consider risk during their training and verification stages, meaning that unsafe failure modes are permitted and under-reported. To address this limitation, our short paper introduces a work-in-progress approach that (i) allows the risk of misclassification between classes to be quantified, (ii) guides the training of DNN classifiers towards mitigating the risks that require treatment, and (iii) synthesises risk-aware ensembles with the aid of multi-objective genetic algorithms that seek to optimise DNN performance metrics while also mitigating risks. To show the effectiveness of our approach, we synthesise risk-aware neural network ensembles for the CIFAR-10 dataset. We present our results as Pareto-optimal fronts which demonstrate significantly improved performance with respect to risk mitigation and F1 score.

14:30-16:00 Session 15C: SLRs 2 (SMSE)
14:30
Towards Continuous Systematic Literature Review in Software Engineering

ABSTRACT. Context: New scientific evidence continuously arises with advances in Software Engineering (SE) research. Conventionally, Systematic Literature Reviews (SLRs) are not updated or updated intermittently, leaving gaps between updates, during which time the SLR may be missing crucial new evidence. Goal: We propose and evaluate a concept and process called Continuous Systematic Literature Review (CSLR) in SE. Method: To elaborate on the CSLR concept and process, we performed a synthesis of evidence by conducting a meta-ethnography, addressing knowledge from varied research areas. Furthermore, we conducted a case study to evaluate the CSLR process. Results: We describe the resulting CSLR process in BPMN format. The case study results provide indications on the importance and feasibility of applying CSLR in practice to continuously update SLR evidence in SE. Conclusion: The CSLR concept and process provide a systematic way to continuously incorporate new evidence into SLRs, supporting trustworthy and up-to-date evidence for SLRs in SE.

14:55
A Systematic Mapping Review on Robotic Testing of Mobile Devices

ABSTRACT. Context: Test automation is often seen as a possible solution to overcome the challenges of testing mobile devices. However, most of the automation techniques adopted for mobile testing are intrusive and, sometimes, unrealistic. One possible solution for coping with intrusive and unrealistic testing is the use of robots. Despite the growing interest in the intersection between robotics and software testing, the motivations, the usefulness, and the return on investment of adopting robots to support testing activities are not clear. Objective: We aim at surveying the literature on the use of robotics for supporting mobile testing, with a focus on the motivations, the types of tests that are automated, and the reported effectiveness/efficiency. Method: We conduct a systematic literature review on robotic testing of mobile devices (hereafter referred to as robotic mobile testing). We searched primary studies published since 2000 by querying four digital libraries and by performing a snowballing cycle. Results: We started with a set of 1353 papers and, after applying our inclusion/exclusion and quality evaluation criteria, we selected a final set of 17 primary studies. We provide both a quantitative analysis and a qualitative evaluation of the motivations, the types of tests automated, and the effectiveness/efficiency reported by the selected studies. Conclusions: Based on the selected studies, allowing more realistic interactions is among the main motivations for adopting robotic mobile testing. The tests automated with the support of robots are usually system-level tests targeting stress, interface, and performance testing. More empirical evidence is needed to support the claimed benefits: most of the surveyed works do not compare the effectiveness and efficiency of the proposed robotics-based approaches against traditional automation techniques. We discuss the implications of our findings for researchers and practitioners, and outline a research agenda.

15:20
SCAS-AI: A Strategy to Semi-Automate the Initial Selection Task in Systematic Literature Reviews

ABSTRACT. Context: There are several initiatives to semi-automate the initial study selection task for Systematic Literature Reviews (SLRs) to reduce effort and potential bias. Objective: We propose a strategy called SCAS-AI to semi-automate the initial selection task. This strategy improves the original SCAS strategy with Artificial Intelligence (AI) resources (fuzzy logic and genetic algorithms) for study selection. Method: We evaluated the SCAS-AI strategy through a case study with SLRs in Software Engineering (SE). Results: In general, the SCAS-AI strategy improved the results achieved by the original SCAS strategy in reducing the effort of the initial selection task. The effort reduction applying SCAS-AI was 39.1%. In addition, the error percentage was 0.3% for studies automatically excluded (false negatives – loss of evidence) and 3.3% for studies automatically included (false positives – evidence later excluded during full-text reading). Conclusion: The results show the potential of the investigated AI techniques to support the initial selection task for SLRs in SE.

15:45
A Mapping Study of Security Vulnerability Detection Approaches for Web Applications

ABSTRACT. Over the last few decades, the number of security vulnerabilities has been increasing with the development of web applications, and the web application domain is evolving day by day. As a result, many empirical studies have been carried out to address different security vulnerabilities in this domain. However, an analysis of existing studies is needed before developing new security vulnerability testing techniques. We perform a systematic mapping study documenting the state-of-the-art empirical research on security vulnerability detection in web applications. The aim is to provide a roadmap synthesizing this documented empirical research. Existing literature was reviewed systematically, guided by research questions constructed for this study. This mapping study covers studies dating from 2001 to 2021.

16:00-16:30 Coffee Break

Expomeloneras's Hall