CCGRID 2017: 17TH IEEE/ACM INTERNATIONAL SYMPOSIUM ON CLUSTER, CLOUD AND GRID COMPUTING
PROGRAM FOR SUNDAY, MAY 14TH

08:30-10:30 Session 1B: CCGRID-Life 2017 (I)
Chair:
Dagmar Krefting (HTW - Berlin, Germany)
Location: Goya
08:30
Dagmar Krefting (HTW - Berlin, Germany)
CCGRID-LIFE Welcome
09:00
Estefanía Serrano (Universidad Carlos III de Madrid, Spain)
Javier Garcia Blas (Universidad Carlos III de Madrid, Spain)
Jesus Carretero (Universidad Carlos III de Madrid, Spain)
Monica Abella (Universidad Carlos III de Madrid, Spain)
Medical Imaging Processing on a Big Data platform using Python: Experiences with Heterogeneous and Homogeneous Architectures

ABSTRACT. The emergence of new paradigms, programming models, and languages that offer easier programmability and better performance makes the implementation of current scientific applications less time-consuming than it was years ago. One significant example of this trend is the MapReduce programming model and its implementation in Spark. Nowadays, this programming model is mainly used for data analysis and machine learning applications, although its use has also expanded into the HPC community. On the programming languages side, Python has positioned itself as an alternative to other scientific programming languages, such as Matlab or Julia. In this work we explore the capabilities of Python and Apache Spark as partners in the implementation of the backprojection operator of a CT reconstruction application. We present two approaches on two different types of architectures: a heterogeneous architecture including NVidia GPUs, and a full-performance CPU mode compatible with native C/C++ source code. We experimentally demonstrate that current CPU-based implementations scale with the number of computational units.
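
For illustration only, a minimal PySpark sketch of the kind of angle-parallel backprojection the abstract describes; backproject_one(), the grid size, and the angle set are hypothetical placeholders, not the authors' code:

    import numpy as np
    from pyspark import SparkContext

    VOLUME_SHAPE = (64, 64, 64)  # assumed reconstruction grid

    def backproject_one(angle):
        """Backproject a single projection angle into an empty volume."""
        volume = np.zeros(VOLUME_SHAPE, dtype=np.float32)
        # ... smear the projection acquired at `angle` through `volume` ...
        return volume

    sc = SparkContext(appName="ct-backprojection-sketch")
    angles = np.linspace(0.0, 180.0, num=360, endpoint=False).tolist()

    # Each worker backprojects a subset of angles; partial volumes are
    # summed pairwise by reduce() to yield the final reconstruction.
    reconstruction = sc.parallelize(angles).map(backproject_one).reduce(np.add)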

09:30
Kirill Lysov (St. Petersburg State University, Russia)
Alexander Bogdanov (St. Petersburg State University, Russia)
Alexander Degtyarev (St. Petersburg State University, Russia)
Dmitriy Guschanskiy (St. Petersburg State University, Russia)
Nataliya Ananieva (Bekhterev Psychoneurological Research Institute, Russia)
Nataliya Zalutskaya (Bekhterev Psychoneurological Research Institute, Russia)
Nikolay Neznanov (Bekhterev Psychoneurological Research Institute, Russia)
Analog-Digital Approach in Human Brain Modeling

ABSTRACT. Many companies and institutions, in their attempts to construct decision-making systems, face a performance bottleneck in their systems: training neural networks can take from several days to several weeks. The traditional approach suggests modifying modern systems and microcircuits until their performance reaches a permissible limit. A different, unconventional approach looks for opportunities in computing inspired by the human brain: neuromorphic computing. The idea was proposed by the engineer Carver Mead in the 1980s and suggests combining artificial neural networks with specialized microcircuits. The architecture of the microchip needs to reproduce the mechanisms of the human brain and to serve as a kind of hardware support for neural networks. The last decade has been characterized by a sharp growth of interest in neuromorphic computing, human brain modeling, and the peculiarities of how the brain works during decision making. This is evidenced by the launch of large-scale research programs like DARPA SyNAPSE (USA) and the Human Brain Project (EU), whose purpose is to build a microprocessor system that resembles the human brain in functionality, size and energy consumption. Existing models of the brain require significant computation time even on powerful supercomputers and are not yet able to solve problems in real time. Since the human brain consists of two parts with different functions and different data-processing principles, a very promising approach suggests combining digital and analog systems into a single one. In the current collaboration we incorporate results from studies of human brain activity as the basis for building a hybrid computational system and as a foundation for the approach to running it.

10:00
Peter van 't Hof (Leiden University Medical Center, Netherlands)
Hailiang Mei (Leiden University Medical Center, Netherlands)
Sander Bollen (Leiden University Medical Center, Netherlands)
Jeroen Laros (Leiden University Medical Center, Netherlands)
Wibowo Arindrarto (Leiden University Medical Center, Netherlands)
Szymon Kielbasa (Leiden University Medical Center, Netherlands)
Biopet: Towards Scalable, Maintainable, User-Friendly, Robust and Flexible NGS Data Analysis Pipelines

ABSTRACT. Because of rapidly decreasing sequencing costs, more research and clinical institutes are generating Next Generation Sequencing data at an increasing and impressive scale. University Medical Centers in the Netherlands are each sequencing thousands of patients a year as part of their routine diagnostics. On the research front, the GoNL and BIOS projects coordinated by the BBMRI-NL consortium have sequenced 770 whole-genome DNA samples and over 4000 RNA samples collected from a number of Dutch biobanks. In 2016, the deployment of the Illumina X Ten sequencer at the Hartwig Medical Foundation provided a sequencing capacity of 18,000 whole-genome DNA samples per year. Processing these petabyte-scale datasets requires revolutionary thinking and solutions in the computing and storage infrastructure and the data analysis pipelines.

At Leiden University Medical Center, we have developed a GATK-Queue based open source pipeline framework, BIOPET (Bioinformatics Pipeline Execution Toolkit). We implemented all our commonly used NGS tools as Queue modules in the form of Scala classes. Together with those already supported in GATK-Queue, such as GATK variant calling and the Picard tools, we thus have a full set of NGS tools at our disposal as Scala classes that are further combined into pipeline functions. Besides meeting the various standard requirements for NGS pipelines, such as reentrancy, the BIOPET framework also offers a list of advanced features, such as live debugging, test and meta-analysis frameworks, and easy deployment. The BIOPET framework can run on various types of HPC infrastructure (e.g., SGE, SLURM, PBS) through its DRMAA support.

08:30-10:30 Session 1C: DBDM 2017 (I)
Chair:
Alfredo Cuzzocrea (ICAR-CNR and University of Calabria, Italy)
Location: Serrano
08:30
Tzuhsien Wu (Lawrence Berkeley National Lab, USA)
Jerry Chou (Tsing Hua University, Taiwan)
Norbert Podhorszki (TBA, Hungary)
Junmin Gu (Lawrence Berkeley National Lab, USA)
Yuan Tian (TBA, USA)
Scott Klasky (The University of Tennessee, USA)
Kesheng Wu (Lawrence Berkeley National Lab, USA)
Apply Block Index Technique to Scientific Data Analysis and I/O Systems

ABSTRACT. Scientific discoveries increasingly rely on the analysis of massive amounts of data. The ability to directly access the most relevant data records through queries, without sifting through all of them, becomes essential. However, scientific datasets are commonly stored on parallel file systems and I/O systems that are optimized for reading and writing large chunks of data, and many scientific datasets exhibit spatial-temporal data similarity, such that records with similar values are often located in close proximity to each other. Therefore, our previous work started to investigate the benefit of a block range index technique for scientific datasets, which records only the value range of the records in each data block. In this paper, we extend our work in several aspects. First, we implement and integrate our block index technique with the ADIOS I/O system. Second, we show that our proposed method can be significantly better than the existing min-max and bitmap indexing methods supported in ADIOS, and has comparable performance in the worst case. Third, we propose several techniques that take advantage of the block index information to greatly reduce data retrieval time for query results. Fourth, we evaluate our approach using several real scientific datasets, and analyze their spatial-temporal data similarity characteristics. Through our study, we believe the block index can be an effective indexing technique for scientific datasets with little implementation and operating overhead. Its size is small enough to build the indexes on the fly, and yet its query information is sufficient for efficient data access.
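
As a hedged illustration of the block range index idea described above (not the ADIOS implementation itself), the following Python sketch stores only a (min, max) pair per block and skips blocks whose range cannot overlap a query:

    import numpy as np

    def build_block_index(data, block_size):
        """Per-block (min, max) pairs for a 1-D NumPy array."""
        return [(data[s:s + block_size].min(), data[s:s + block_size].max())
                for s in range(0, len(data), block_size)]

    def range_query(data, index, block_size, lo, hi):
        """Return values in [lo, hi], touching only blocks that may match."""
        hits = []
        for i, (bmin, bmax) in enumerate(index):
            if bmax < lo or bmin > hi:
                continue  # whole block lies outside the query range: skip it
            block = data[i * block_size:(i + 1) * block_size]
            hits.extend(block[(block >= lo) & (block <= hi)])
        return hits

    data = np.array([1.0, 2.5, 9.0, 9.2, 0.3, 0.4, 5.0, 5.1])
    idx = build_block_index(data, block_size=2)
    print(range_query(data, idx, 2, lo=5.0, hi=9.1))  # reads only 2 of 4 blocks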

09:00
Roberto De Virgilio (Università Roma Tre, Italy)
Smart RDF Data Storage in Graph Databases

ABSTRACT. Graph Database Management Systems (GDBMSs) provide an effective and efficient solution to data storage in current scenarios where data are more and more connected, graph models are widely used, and systems need to scale to large data sets. In particular, converting the persistent layer of an application from an RDF data store to a graph data store can be convenient, but it is usually a hard task for database administrators. In this paper we propose a methodology for converting an RDF data store to a graph database by exploiting the ontology and the constraints of the source. We provide experimental results that show the feasibility of our solution and the efficiency of query answering over the target database.

09:30
Phyo Thandar Thant (Hokkaido University, Japan)
Courtney Powell (Hokkaido University, Japan)
Martin Schlueter (Hokkaido University, Japan)
Masaharu Munetomo (Hokkaido University, Japan)
A Level-Wise Load Balanced Scientific Workflow Execution Optimization using NSGA-II

ABSTRACT. Over the past decade, cloud computing has grown in popularity for the processing of scientific applications as a result of the scalability of the cloud and the ready availability of on-demand computing and storage resources. It is also a cost-effective alternative for scientific workflow executions with a pay-per-use paradigm. However, providing services with optimal performance at the lowest financial resource deployment cost is still challenging. Several fine-grained tasks are included in scientific workflow applications, and efficient execution of these tasks according to their processing dependency to minimize the overall makespan during workflow execution is an important research area. In this paper, a system for level-wise workflow makespan optimization and virtual machine deployment cost minimization for overall workflow optimization in cloud infrastructure is proposed. Further, balanced task clustering, to ensure load balancing in different virtual machine instances at each workflow level during workflow execution, is also considered. The system retrieves the necessary workflow information from a directed acyclic graph and uses the non-dominated sorting genetic algorithm II (NSGA-II) to carry out multiobjective optimization. Pareto front solutions obtained for makespan time and instance resource deployment cost for several scientific workflow applications verify the efficacy of our system.
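
For readers unfamiliar with the bi-objective optimization involved, here is a minimal Python sketch of the Pareto-dominance test underlying NSGA-II selection, with (makespan, cost) tuples standing in for candidate schedules; this is an illustration, not the paper's implementation:

    # A schedule dominates another if it is no worse in both makespan and
    # deployment cost and strictly better in at least one of them.
    def dominates(a, b):
        """a, b: (makespan, cost) tuples; both objectives are minimized."""
        return (a[0] <= b[0] and a[1] <= b[1]) and (a[0] < b[0] or a[1] < b[1])

    def pareto_front(schedules):
        """Keep only the non-dominated (makespan, cost) points."""
        return [s for s in schedules
                if not any(dominates(t, s) for t in schedules if t != s)]

    # Example: the (12, 30) point is dominated by (10, 25) and drops out.
    print(pareto_front([(10, 25), (12, 30), (9, 40), (15, 20)]))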

10:00
Akash G. J (National Institute of Technology Calicut, India)
Ojus Thomas Lee (National Institute of Technology Calicut, India)
S. D. Madhu Kumar (National Institute of Technology Calicut, India)
Priya Chandran (National Institute of Technology Calicut, India)
Alfredo Cuzzocrea (University of Trieste, Italy)
RAPID: A Fast Data Update Protocol in Erasure Coded Storage Systems for Big Data

ABSTRACT. Erasure codes are nowadays used extensively in distributed storage systems that handle big data, since they offer significant fault tolerance with low storage overhead. Even though erasure-coded systems are space efficient, their operations involve higher network bandwidth and computational complexity. In this paper, we present RAPID, a protocol for fast data updates, which works by choosing a subset of code blocks for updates and adapts the strength of the subset based on the predicted number of failures. The proposal uses a prediction-based heuristic in which the set of failures that may happen in the near future is represented as a function of past failures. A hybrid protocol that uses both locking and buffering mechanisms is adopted in the solution to maintain consistency of the data and code block updates. Our experimental results demonstrate a 30% improvement in data update performance, and the proposed failure prediction mechanism shows an accuracy of 70%.

08:30-10:30 Session 1D: SCRAMBL 2017
Chair:
Marc Frincu (West University of Timisoara, Romania)
Location: Velazquez
08:30
Francisco J. Clemente-Castelló (Universidad Jaume I, Spain)
Rafael Mayo Gual (Universidad Jaume I, Spain)
Juan Carlos Fernandez (Universidad Jaume I, Spain)
Cost Model And Analysis of Iterative MapReduce Applications for Hybrid Cloud Bursting

ABSTRACT. A popular and cost-effective way to deal with the increasing complexity of big data analytics is hybrid cloud bursting, which leases temporary off-premise cloud resources to boost the overall capacity during peak utilization. The main challenge of hybrid cloud bursting is that the network link between the on-premise and the off-premise computational resources often exhibits high latency and low throughput (a "weak link") compared to the links within the same data center. This paper presents a cost analysis of the impact of the inter-cloud data transfers that need to pass over the weak link, using novel data locality strategies to minimize the negative consequences. We focus our study on iterative MapReduce applications, a class of large-scale data-intensive applications particularly popular on hybrid clouds. We run extensive experiments in a distributed, multi-VM setup and report multi-fold improvements over traditional approaches in terms of cost-effectiveness.

09:00
Olubisi Runsewe (University of Ottawa, Canada)
Nancy Samaan (University of Ottawa, Canada)
Cloud Resource Scaling for Big Data Streaming Applications Using A Layered Multi-dimensional Hidden Markov Model

ABSTRACT. Recent advancements in technology have led to a deluge of data that requires real-time analysis with strict latency constraints. A major challenge, however, is determining the amount of resources required by big data stream processing applications in response to heterogeneous data sources, streaming events, and unpredictable changes in data volume and velocity. Over-provisioning of resources for peak loads can be wasteful, while under-provisioning can have a huge impact on the performance of the streaming applications. The majority of research efforts on resource scaling in the cloud are investigated from the cloud provider's perspective; they focus on web applications and do not consider multiple resource bottlenecks. We analyze the resource scaling problem from a big data streaming application provider's point of view, such that efficient scaling decisions can be made for future resource utilization. This paper proposes a Layered Multi-dimensional Hidden Markov Model (LMD-HMM) for facilitating the management of resource auto-scaling for big data streaming applications in the cloud. Our detailed experimental evaluation shows that LMD-HMM performs best with an accuracy of 98%, outperforming the single-layer hidden Markov model.

09:30
Anca Vulpe (West University of Timisoara, Romania)
Marc Frincu (West University of Timisoara, Romania)
Scheduling Data Stream Jobs on Distributed Systems with Background Load

ABSTRACT. Cloud computing is used by numerous applications tailored for on-demand execution on elastic resources. While most cloud-based applications rely on virtualization, an emerging technology based on lightweight containers is starting to gain traction. And while most research on job scheduling on clouds has focused on dedicated machines, the emergence and applicability of containers on a wider range of platforms, including IoT, reopens the issue of scheduling on non-dedicated machines with high-priority background load.

In this paper we address this problem by proposing a model and several heuristics for scheduling data stream jobs on containers running on machines with background load. We also address the issue of estimating the container parameters. The heuristics are tested and analyzed based on real-life traces.

10:30-11:00 Coffee Break
11:00-12:30 Session 2B: CCGRID-Life 2017 (II)
Chair:
Dagmar Krefting (HTW - Berlin, Germany)
Location: Goya
11:00
Patricia Gonzalez (Universidade da Coruña, Spain)
Xoán C. Pardo (Universidade da Coruña, Spain)
David Rodríguez Penas (IIM-CSIC, Spain)
Diego Teijeiro (Universidade da Coruña, Spain)
Ramón Doallo (Universidade da Coruña, Spain)
Julio Banga (IIM-CSIC, Spain)
Using the Cloud for parameter estimation problems: comparing Spark vs MPI with a case study

ABSTRACT. Systems biology is an emerging approach focused on generating new knowledge about complex biological systems by combining experimental data with mathematical modeling and advanced computational techniques. Many problems in this field are extremely challenging and require substantial supercomputing resources to be solved. This is the case for parameter estimation in large-scale nonlinear dynamic systems biology models. Recently, Cloud Computing has emerged as a new paradigm for on-demand delivery of computing resources. However, the scientific computing community has been quite hesitant to use the Cloud, simply because traditional programming models do not fit well with the new paradigm, and the earliest cloud programming models do not allow most scientific computations to run efficiently in the Cloud. In this paper we explore and compare two distributed computing models: the MPI (message-passing interface) model, which is high-performance oriented, and the Spark model, which is throughput oriented but outperforms other cloud programming solutions by adding improved support for iterative algorithms through in-memory computing. The performance of a well-known metaheuristic, the Differential Evolution algorithm, has been thoroughly assessed using a challenging parameter estimation problem from the domain of computational systems biology. The experiments have been carried out both on a local cluster and in the Microsoft Azure public cloud, allowing for performance evaluation on both infrastructures.
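
As background, a minimal Python sketch of one Differential Evolution generation (the common DE/rand/1/bin variant); the toy sphere objective is an assumption for illustration, not the paper's parameter estimation problem:

    import numpy as np

    def de_generation(population, objective, F=0.8, CR=0.9,
                      rng=np.random.default_rng()):
        """One DE/rand/1/bin generation for a minimization problem."""
        n, dim = population.shape
        new_pop = population.copy()
        for i in range(n):
            candidates = [j for j in range(n) if j != i]
            r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
            mutant = population[r1] + F * (population[r2] - population[r3])
            cross = rng.random(dim) < CR            # binomial crossover mask
            cross[rng.integers(dim)] = True         # ensure one mutated gene
            trial = np.where(cross, mutant, population[i])
            if objective(trial) < objective(population[i]):
                new_pop[i] = trial                  # greedy selection
        return new_pop

    # Toy usage: minimize the sphere function over a 5-D search space.
    pop = np.random.default_rng(0).uniform(-5, 5, size=(20, 5))
    for _ in range(50):
        pop = de_generation(pop, objective=lambda x: float(np.sum(x ** 2)))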

11:30
Michael Witt (Hochschule für Technik und Wirtschaft Berlin, Germany)
Christoph Jansen (Hochschule für Technik und Wirtschaft Berlin, Germany)
Dagmar Krefting (HTW - Berlin, Germany)
Achim Streit (Karlsruhe Institute of Technology, Germany)
Fine-grained Supervision and Restriction of Biomedical Applications in Linux Containers

ABSTRACT. Applications for the analysis of biomedical data are complex programs and often consist of multiple components. Re-use of existing solutions from external code repositories or program libraries (e.g. MATLAB Central, EEGLAB or PhysioNet) is common in algorithm development. To ease reproducibility and the transfer of algorithms and required components into distributed infrastructures, Linux containers can be used. Infrastructures can use Linux container execution to provide a generic processing pipeline for user-submitted algorithms.

A thorough review of the applications and components provided in containers is typically not available, due to their complexity or restricted source code access. This results in uncertainty about the actions performed by diverse parts of the application at runtime.

In this paper we describe measures and a solution to secure the execution of a MATLAB-based application for normalization of multidimensional biosignal recordings. The application and the required runtime environment are installed in a Docker-based container. This container is distributed alongside the required data inside an OpenStack infrastructure. To secure the infrastructure, a fine-grained restricted environment (sandbox) for the execution of the untrusted program, built on standard Linux kernel interfaces, is used. The rule set in our sandbox is defined at the system call level. Filtering based on system calls is well suited to preventing malicious actions, as they typically require interaction with the operating system (e.g. accessing the filesystem or network resources). Because our solution is restricted to standard Linux kernel interfaces, it is suited to the given container-based environment, where applications are limited to the shared kernel's capabilities.

Due to the low-level character of system call interaction with the operating system and the large number of system calls issued by a complex framework such as the MATLAB runtime, the creation of an adequate rule set for the sandbox can become challenging. Therefore, the presented solution includes a component that provides application monitoring based on issued system calls. This enables the user to collect data about the application's system call interaction with the operating system, which can afterwards be used to define the required rules for the application sandbox. Performance evaluation of the application execution time shows no significant impact from the resulting sandbox, while detailed monitoring may increase runtime by over 420%.
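
For orientation, a minimal Python illustration of syscall-level restriction through a standard Linux kernel interface (seccomp via prctl); note that the paper's sandbox uses a fine-grained, application-specific rule set rather than the coarse strict mode shown here:

    import ctypes

    PR_SET_SECCOMP = 22      # prctl option (linux/prctl.h)
    SECCOMP_MODE_STRICT = 1  # only read/write/_exit/sigreturn allowed

    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    if libc.prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0) != 0:
        raise OSError(ctypes.get_errno(), "failed to enable seccomp")

    # From here on, any other system call (e.g. open()) kills the process
    # with SIGKILL; a seccomp BPF filter would allow a per-syscall policy
    # of the kind the monitoring component above helps derive.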

12:00
Dagmar Krefting (HTW - Berlin, Germany)
CCGRID-LIFE Panel discussion
11:00-12:30 Session 2C: DBDM 2017 (II)
Chair:
Alfredo Cuzzocrea (ICAR-CNR and University of Calabria, Italy)
Location: Serrano
11:00
Alfredo Cuzzocrea (University of Trieste, Italy)
Rajkumar Buyya (The University of Melbourne, Australia)
Vincenzo Passanisi (TBA, Italy)
Giovanni Pilato (National Research Council of Italy, Italy)
MapReduce-based Algorithms for Managing Big RDF Graphs: State-of-the-Art Analysis, Paradigms, and Future Directions

ABSTRACT. Big RDF (Resource Description Framework) graphs, which populate the emerging Semantic Web, are the core data structure of so-called Big Web Data, the "natural" transposition of Big Data onto the Web. Managing big RDF graphs is gaining momentum, essentially because this task represents the "baseline operation" of successful Web big data analytics. Here, it is required to access, manage and process large-scale, million-node (big) RDF graphs, thus dealing with severe spatio-temporal complexity challenges. A possible solution to this problem is represented by MapReduce-model-based algorithms for managing big RDF graphs, which try to exploit the computational power offered by the MapReduce processing model in order to tame this complexity. In this context, this paper provides a critical survey of MapReduce-based algorithms for managing big RDF graphs, with an analysis of state-of-the-art proposals, paradigms and trends, along with a comprehensive overview of future research directions in the investigated area.

11:30
Divyashikha Sethia (Delhi Technological University, India)
Shalini Sheoran (Delhi Technological University, India)
Huzur Saran (Delhi Technological University, India)
Optimized MapFile based Storage of Small Files in Hadoop

ABSTRACT. Hadoop is open source software based on the MapReduce framework. The Hadoop Distributed File System (HDFS) performs well when storing and managing data sets of very large size. However, the performance of HDFS suffers when handling a large number of small files, since they put a heavy burden on the NameNode of HDFS both in terms of memory and access time. To overcome these drawbacks, we merge small files into a large file and store the merged file on HDFS. Generally, when small files are merged, variation in the size distribution of the files is not taken into consideration. We propose a new algorithm, OMSS (Optimized MapFile based Storage of Small files), which merges small files into a large file based on the worst-fit strategy. The strategy reduces internal fragmentation in data blocks, which in turn leads to fewer data blocks consumed for the same number of small files. Fewer data blocks mean lower memory overhead at the major nodes of the Hadoop cluster and hence more efficient data processing. Our experimental results indicate that the time to process data on HDFS containing unprocessed small files reduces significantly, to 590 s when MapFile is used, and further to 440 s when OMSS is used. Compared to the MapFile merging algorithm, OMSS reduces memory requirements by 34.7%.
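
As a hypothetical sketch of the worst-fit merging idea behind OMSS (not the authors' implementation), each small file is placed into the merge bucket with the most remaining space, so blocks fill evenly and internal fragmentation shrinks:

    def worst_fit_merge(file_sizes, block_size):
        """Group small-file sizes into merge buckets of <= block_size bytes."""
        buckets = []  # each bucket: [remaining_space, [file sizes]]
        for size in sorted(file_sizes, reverse=True):
            # worst fit: the candidate bucket with the most remaining space
            best = max((b for b in buckets if b[0] >= size),
                       key=lambda b: b[0], default=None)
            if best is None:
                buckets.append([block_size - size, [size]])
            else:
                best[0] -= size
                best[1].append(size)
        return [b[1] for b in buckets]

    # 128 MB HDFS blocks; sizes in MB for readability.
    # Yields two evenly filled buckets: [[70, 30, 20], [60, 50, 10]].
    print(worst_fit_merge([70, 50, 60, 30, 20, 10], block_size=128))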

12:00
Ivan Mendez-Jimenez (CIEMAT, Spain)
Miguel Cardenas-Montes (CIEMAT, Spain)
Juan Jose Rodríguez-Vázquez (CIEMAT, Spain)
Ignacio Sevilla Noarbe (CIEMAT, Spain)
Eusebio Sánchez Álvaro (CIEMAT, Spain)
David Alonso (TBA, Spain)
Miguel Angel Vega-Rodríguez (University of Extremadura, Spain)
An Accuracy-Aware Implementation of Two-Point Three-Dimensional Correlation Function using Bin-Recycling Strategy on GPU

ABSTRACT. The analysis of scientific data, especially in different kinds of cosmological studies, has to deal with ever-increasing data volumes. These studies include the calculation of correlation functions such as the Two-Point Three-Dimensional Correlation Function. To obtain the final estimator value for these functions, it is necessary to construct histograms storing large numbers of counts. Histograms are a very common way of representing data and summarizing information in science. However, they have a high computational cost, which is worsened by the increase in standard sample sizes. This increase leads directly to two problems: long processing times and a lack of accuracy in the result. Therefore, implementations of correlation functions need to maintain high accuracy at affordable processing times. In order to reduce the high processing times, GPU computing is widely used. In this work, the bin-recycling strategy is implemented and evaluated in the Two-Point Three-Dimensional Correlation Function. We show that this implementation outperforms others that also correctly process a large number of galaxies. As a result of this work, an accuracy-aware implementation of the Two-Point Three-Dimensional Correlation Function on GPU is described and evaluated to ensure the correctness of the results.

12:30
Amir Haroun (TBA, France)
Ahmed Mostefaoui (UFC - UFR STGI, France)
Francois Dessables (TBA, France)
A Big Data Architecture for Automotive Applications: PSA Group Deployment Experience

ABSTRACT. Vehicles have become moving sensor platforms collecting huge volumes of data from their various embedded sensors. This data has great value for automotive manufacturers and vehicle owners. Indeed, connected-vehicle data can be used in a broad range of automotive services, from safety services to well-being services (e.g. fatigue detection). However, vehicle fleets send volumes of data that traditional computing and storage approaches are not able to manage efficiently. In this paper, we present the experience of the PSA Group in leveraging big data in the automotive context. We describe in depth the big data architecture deployed within the PSA Group and the underlying technologies/products used in each component.

11:00-12:30 Session 2D: Intercloud 2017
Chair:
Yuri Demchenko (University of Amsterdam, Netherlands)
Location: Velazquez
11:00
Yuri Demchenko (University of Amsterdam, Netherlands)
Fatih Turkmen (University of Amsterdam, Netherlands)
Mathias Slawik (TU Berlin, Service-centric Networking, Germany)
Defining Intercloud Security Framework and Architecture Components for Multi-Cloud Data Intensive Applications

ABSTRACT. This paper presents results of the ongoing development of the Intercloud Security Framework (ICSF), which is part of the Intercloud Architecture Framework (ICAF) and provides an architectural basis for building security infrastructure services for multi-cloud applications. The paper refers to the general use case of data-intensive applications, which indicates the need for multi-cloud application platforms and corresponding multi-cloud security services. It presents an analysis of the general multi-cloud use case that helps elicit the general requirements for ICSF and identify the security infrastructure functional components that would allow using distributed cloud-based resources and data sets. The paper defines the main ICSF services and functional components, and explains the importance of consistently implementing Security Services Lifecycle Management in cloud-based applications. It also provides an overview of cloud compliance standards and their role in cloud security. Finally, the paper refers to the security infrastructure development in the CYCLONE project, which implements federated identity management, a secure logging service, multi-domain Attribute Based Access Control, security services lifecycle management, and trust bootstrapping for virtualised cloud environments.

11:30
Uchechukwu Awada (University of St Andrews, UK)
Adam Barker (University of St Andrews, UK)
Improving Resource Efficiency of Container-instance Clusters on Clouds

ABSTRACT. Cloud computing providers such as Amazon and Google have recently begun offering container-instances, which provide an efficient route to application deployment within a lightweight, isolated and well-defined execution environment. Cloud providers currently offer Container Service Platforms (CSPs), which support the flexible orchestration of containerised applications.

Existing CSP frameworks do not offer any form of intelligent resource scheduling: applications are usually scheduled individually, rather than taking a holistic view of all registered applications and available resources in the cloud. This can result in increased execution times for applications, resource wastage through underutilised container-instances, and a reduction in the number of applications that can be deployed, given the available resources.

The research presented in this paper aims to extend the existing system by adding a cloud-based Container Management Service (CMS) framework that offers increased deployment density, scalability and resource efficiency. CMS provides additional functionality for orchestrating containerised applications through the joint optimisation of sets of containerised applications and the resource pool across multiple (geographically distributed) cloud regions.

We evaluate CMS on a cloud-based CSP, Amazon EC2 Container Service (ECS), and conduct extensive experiments using sets of CPU- and memory-intensive containerised applications against the direct deployment strategy of Amazon ECS. The results show that CMS achieves up to 25% higher cluster utilisation and up to 70% reduction in execution times.

12:00
Andreas Reiter (Institute of Applied Information Processing and Communications (IAIK), Graz University of Technology, Austria)
Bernd Prünster (Institute of Applied Information Processing and Communications (IAIK), Graz University of Technology, Austria)
Thomas Zefferer (A-SIT Plus GmbH, Austria)
Hybrid Mobile Edge Computing: Unleashing the Full Potential of Edge Computing in Mobile Device Use Cases

ABSTRACT. Many different technologies fostering and supporting distributed and decentralized computing scenarios have emerged recently. Edge computing provides the necessary on-demand computing power for Internet-of-Things (IoT) devices where it is needed. Computing power is moved closer to the consumer, reducing latency and increasing fail-safety due to the absence of centralized structures. This is an enabler for applications requiring high-bandwidth uplinks and low latencies to computing units. In this paper, a new use case for edge computing is identified: mobile devices can overcome their battery limitations and performance constraints by dynamically using the computational power provided by edge computing. We call this new technology Hybrid Mobile Edge Computing. We present a general architecture and framework targeting the mobile device use case of hybrid mobile edge computing, considering not only the improvement of performance and energy consumption but also providing means to protect user privacy and sensitive data and computations. These claims are backed by the results of our analysis of the energy-saving potential and possible performance improvements.

12:30
Vineeth Ramesh (Freshdesk, India)
Geethapriya Ramakrishnan (Anna University, India)
Saswati Mukherjee (Anna University, India)
Multi-Objective Particle Swarm Optimization for VM Placement (MOPSO-VMP) in Multi-Cloud

ABSTRACT. The advancement of cloud computing and the rise in demand for cloud resources have resulted in an increase in the number of IaaS (Infrastructure as a Service) providers. With this rising demand, an IaaS provider may soon not be able to accommodate all the VM (virtual machine) requests from its users on its own, which necessitates the adoption of multi-cloud. In a multi-cloud environment, a VM request can come either from an individual user or from a Cloud Service Provider (CSP) that suffers from resource limitations. In this setting, an efficient cloud brokering mechanism is required to optimally schedule users' VM requests across multiple clouds. Selecting an optimal mapping of each VM in a request to a CSP in a multi-cloud is difficult, because it needs to be fair both to the cloud user and to all the CSPs participating in the multi-cloud environment. Existing cloud brokering mechanisms in multi-cloud scenarios target optimizing only one specific QoS constraint. In reality, however, different users have varying QoS constraints such as cost, time and distance. We propose a solution in which the broker adapts itself to the QoS preferences of each user while sustaining fairness across CSPs by allocating proportionately. A trade-off must be found between fairness to CSPs and user-preferred QoS requirements, and we achieve this by implementing Multi-Objective Particle Swarm Optimization for VM Placement (MOPSO-VMP) to find the optimal trade-off solution.

12:30-14:00 Lunch Break
14:00-16:00 Session 3B: TAPEMS (I)
Chairs:
Juan C. Díaz-Martín (Universidad de Extremadura, Spain)
Juan Antonio Rico Gallego (University of Extremadura, Spain)
Location: Goya
14:00
Sandra Mendez (Leibniz Supercomputing Centre (LRZ), Germany)
Dolores Rexachs (Computer Architecture and Operating Systems Department, Universitat Autònoma de Barcelona, Spain)
Emilio Luque (Computer Architecture and Operating Systems Department, Universitat Autònoma de Barcelona, Spain)
Analyzing the Parallel I/O Severity of MPI Applications

ABSTRACT. Performance evaluation of parallel applications plays an important role in High Performance Computing (HPC). In HPC, it is essential to have measurements of resource utilization. This also applies to parallel I/O, which requires understanding the application's I/O pattern and knowing the performance capacity of the HPC I/O system.

In this paper, we present a methodology to evaluate the I/O performance of parallel applications based on their degree of I/O severity. We define the I/O severity concept taking into account the I/O requirements of a parallel application, the mapping of I/O processes, and the configuration of the I/O subsystem. Requirements are expressed in units called I/O phases, which are defined using the temporal and spatial patterns of the application's different files. Our approach is applied to the I/O kernels of scientific applications such as S3DIO, FLASH-IO and BT-IO on the SuperMUC supercomputer. Experimental results show that our methodology allows us to identify whether a parallel application is limited by the I/O subsystem and to identify possible root causes of the I/O problems.

14:30
Jesús M. Álvarez-Llorente (Universidad de Extremadura, Spain)
Juan C. Díaz-Martín (Universidad de Extremadura, Spain)
Juan A. Rico-Gallego (Universidad de Extremadura, Spain)
Formal modeling and performance evaluation of a run-time rank remapping technique in Broadcast, Allgather and Allreduce MPI collective operations

ABSTRACT. MPI collective operations are implemented using a variety of algorithms that define different communication patterns between the ranks involved in the operation. The performance of these algorithms in multi-core clusters highly depends on the mapping of ranks to system processors, due to the uneven capabilities of shared memory and network channels. The hierarchical design of these algorithms contributes to using the communication channels optimally. Nevertheless, common hierarchical algorithms have shown themselves, for some collectives such as allgather, to be inefficient and even impracticable. This paper analyzes the reasons for this and works out an alternative approach through performance modeling. The approach departs from a priori knowledge of a regular mapping, such as round-robin or sequential, and, keeping the original algorithm unmodified, switches the rank-to-process mapping at run time to another regular mapping that reduces network traffic. The methodology is evaluated with three collectives and their underlying algorithms, showing speedups of up to 5x for the Binomial Tree and 3x for Ring algorithms compared to unfavorable mappings.
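
As an illustration of the kind of regular remapping described (assuming N nodes, c ranks per node, and an initial round-robin placement; not the paper's exact scheme), the following Python sketch computes a virtual rank order in which consecutive ranks share a node:

    # With round-robin placement, physical rank r sits on node r % N.
    # remap() returns a virtual rank such that virtual ranks 0..c-1 all
    # land on node 0, c..2c-1 on node 1, and so on, so tree- and
    # ring-structured collectives send fewer messages over the network.
    def remap(rank, num_nodes, cores_per_node):
        """Virtual (sequential-style) rank of a round-robin-placed rank."""
        return (rank % num_nodes) * cores_per_node + rank // num_nodes

    N, c = 4, 2  # 4 nodes, 2 ranks per node
    for r in range(N * c):
        print(f"physical rank {r} on node {r % N} "
              f"-> virtual rank {remap(r, N, c)}")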

15:00
Sina Mahmoodi Khorandi (Iran University of Science and Technology, Iran)
Siavash Ghiasvand (TU Dresden, Germany)
Mohsen Sharifi (Iran University of Science and Technology, Iran)
Reducing Load Imbalance of Virtual Clusters via Reconfiguration and Adaptive Job Scheduling

ABSTRACT. The application composition model promises to enable full utilization of extreme-scale high performance computing (HPC) systems (aka exascale), wherein each composite application consists of several communicating components, each of which is tightly coupled with a possibly specialized software stack of its own. The heterogeneity of software stacks has encouraged the use of system virtualization technology to isolate the execution of components on common shared resources. Some components of composite applications, called loosely-coupled components, consist of a set of loosely-coupled CPU-intensive jobs. A given loosely-coupled component runs on a set of virtual machines (VMs), which in turn are distributed over some physical machines. A job scheduler has to assign and reassign jobs to VMs to adaptively cater for resource provisioning for newly arrived jobs and resource freeing for terminated jobs. Since the VMs of several components may share common resources of a certain physical machine, and given that the job scheduler of each component is totally unaware of virtualization, the job scheduler of a loosely-coupled component does not know the status of the physical machines. Reconfiguring the job scheduler's parameters at runtime can therefore expose the true state of the physical machine, based on which the job scheduler can assign or reassign jobs to more suitable VMs. This paper presents a combination of ASSIGN-ROUTE online job scheduling and a reconfiguration technique that allows a given loosely-coupled component to balance its resource usage load, and thus improve the scaled execution of its loosely-coupled jobs. We prove that this technique achieves a load imbalance close to the optimum for online deterministic unrelated-parallel-machine makespan-minimization scheduling. We also show that our experimental results support these theoretical findings.

15:30
Shweta Jha (University Of Houston, USA)
Edgar Gabriel (University of Houston, USA)
Performance Models for Communication in Collective I/O Operations

ABSTRACT. Many large-scale scientific applications spend a significant amount of time in file I/O operations. Collective I/O APIs provide higher-level abstractions of I/O across a group of processes. They often reduce the time spent in file I/O by reorganizing data across processes to match the layout of the data on the file system. In this paper we present performance models for the communication occurring in collective write operations, as a first step towards developing a full and accurate model of collective I/O operations. The models derived in this paper take both the application's data decomposition and the file domain partitioning strategies used by the I/O library into account. We discuss properties of our performance models and demonstrate, using LogGP parameters derived on multiple platforms, their impact on the performance of collective I/O operations. The paper further provides a comparison with actual measurements performed on an InfiniBand cluster. Our results indicate a good overall match between predicted and observed behavior.

14:00-16:00 Session 3C: EBDMA (I)
Chairs:
Manish Parashar (Rutgers, The State University of New Jersey, USA)
Anna Queralt (Barcelona Supercomputing Center, Spain)
Location: Velazquez
14:00
Rosa M. Badia (Barcelona Supercomputing Center, Spain)
Task-based programming model as an alternative for Big Data and Analytics

ABSTRACT. TBA

15:00
Albino Altomare (ICAR-CNR, Italy)
Eugenio Cesario (ICAR-CNR, Italy)
A Data-driven Approach based on Auto-Regressive Models for Energy-Efficient Clouds

ABSTRACT. The steadily increasing success of Cloud Computing is causing a huge rise in its electrical power consumption, contributing to the greenhouse effect and global warming. One of the most common strategies to reduce the power consumption of data centers is the consolidation of virtual machines, whose effectiveness strongly depends on reliable forecasting of future computational resource needs. In fact, servers are typically configured to handle peak workload conditions even though they are often under-utilized, which results in wasted resources and inefficient energy consumption. Motivated by these issues, this paper describes a data-driven approach based on auto-regressive models to dynamically forecast virtual machine workloads, for energy-aware allocation of virtual machines on Cloud physical nodes. Virtual machine migrations across physical servers are performed periodically on the basis of the estimated virtual machine demands, minimizing the number of active servers. Experimental results show encouraging benefits in terms of energy savings, while satisfying the performance constraints and service level agreements established with users.
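
As a hedged sketch of the auto-regressive forecasting idea (the order p and the synthetic load series are assumptions, not the paper's setup), an AR(p) model can be fit to a VM's recent load history by least squares and used to predict the next value:

    import numpy as np

    def ar_fit_predict(history, p=3):
        """Fit x_t = a_1*x_{t-1} + ... + a_p*x_{t-p} + b; forecast x_{t+1}."""
        x = np.asarray(history, dtype=float)
        rows = [x[t - p:t][::-1] for t in range(p, len(x))]  # lag vectors
        X = np.column_stack([np.array(rows), np.ones(len(rows))])
        coeffs, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
        last = np.concatenate([x[-1:-p - 1:-1], [1.0]])  # recent lags + bias
        return float(last @ coeffs)

    cpu_load = [30, 32, 35, 33, 36, 40, 42, 41, 45, 48]  # % utilization
    print(f"forecast for next interval: {ar_fit_predict(cpu_load):.1f}%")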

15:30
Yacine Taleb (INRIA, France)
Shadi Ibrahim (Inria, Rennes Bretagne Atlantique Research Center, France)
Gabriel Antoniu (Inria, France)
Toni Cortes (Barcelona Supercomputing Center, Spain)
An Empirical Evaluation of How The Network Impacts The Performance and Energy Efficiency in RAMCloud

ABSTRACT. In-memory storage systems have emerged as a de facto building block for today's large-scale Web architectures and Big Data processing frameworks. Many research and engineering efforts have been dedicated to improving their performance and memory efficiency. More recently, such systems can leverage high-performance networks, e.g., InfiniBand. To leverage these systems, it is essential to understand the trade-offs induced by the use of high-performance networks. This paper aims to provide empirical evidence of the impact of the client's location on the performance and energy consumption of in-memory storage systems. Through a study carried out on RAMCloud, we focus on two settings: 1) clients are collocated on the same network as the storage servers (with InfiniBand interconnects); 2) clients access the servers from a remote network, through TCP/IP. We compare and discuss aspects related to scalability and power consumption for these two scenarios, which correspond to different deployment models for applications making use of in-memory cloud storage systems.

14:00-16:00 Session 3D: WACC 2017 (I)
Chairs:
Ignacio Blanquer (Universitat Politècnica de València, Spain)
Roy Campbell (University of Illinois at Urbana-Champaign, USA)
Wagner Meira Jr. (Universidade Federal do Minas Gerais, Brazil)
Location: Serrano
14:00
Colonel Ryan Thomas (European Office of Aerospace Research and Development (AFOSR/EOARD), USA)
US Air Force Interests and Directions in Cyber Security

ABSTRACT. KEYNOTE SPEAKER

The basic science arm of the Air Force Research Laboratory, the Air Force Office of Scientific Research (AFOSR), funds basic research into the science of security. The goal of this research portfolio is to enable the development of safe, secure and dependable information systems. With the widespread adoption of cloud services in the commercial and military domains, many of these results have applications to cloud, big data, and distributed computing problems. This talk will discuss the cyber security and international research interests of AFOSR. Highlights of AFOSR-sponsored research will be presented, including new work in areas such as interactive and automated theorem proving, behavior-based access control, big data policy monitoring, and safe machine learning techniques. Additionally, opportunities will be shared for working on research with the US Air Force through collaboration, travel, and funded research.

14:30
Carlo Di Giulio (University of Illinois at Urbana-Champaign, USA)
Read Sprabery (University of Illinois at Urbana-Champaign, USA)
Charles Kamhoua (Air Force Research Laboratory, USA)
Kevin Kwiat (Air Force Research Laboratory, USA)
Roy Campbell (University of Illinois at Urbana-Champaign, USA)
Masooda Bashir (University of Illinois at Urbana-Champaign, USA)
IT Security and Privacy Standards in Comparison: Improving FedRAMP Authorization for Cloud Service Providers

ABSTRACT. To demonstrate compliance with privacy and security principles, information technology (IT) service providers often rely on security standards and certifications. However, the appearance of new service models such as cloud computing has brought new threats to information assurance, weakening the protection that existing standards can provide. In this study, we analyze four highly regarded IT security standards used to assess, improve, and demonstrate information systems assurance and cloud security. ISO/IEC 27001, SOC 2, C5, and FedRAMP are standards adopted worldwide and constantly updated and improved since the first release of ISO/IEC 27001 in 2005. We examine their adequacy in addressing current threats to cloud security, and provide an overview of the evolution of their ability to cope with threats and vulnerabilities over the years. By comparing the standards alongside each other, we investigate their complementarity, their redundancies, and the level of protection they offer to information stored in cloud systems. We unveil vulnerabilities left unaddressed in the four frameworks, thus questioning the necessity of multiple standards for assessing cloud assurance. We suggest the improvements necessary to meet the security requirements made indispensable by the current threat landscape.

14:50
Eugenio Gianniti (Politecnico di Milano, Italy)
Danilo Ardagna (Politecnico di Milano, Italy)
Michele Ciavotta (Politecnico di Milano, Italy)
Mauro Passacantando (Università di Pisa, Italy)
A Game-Theoretic Approach for Runtime Capacity Allocation in MapReduce

ABSTRACT. Nowadays many companies have large amounts of raw, unstructured data available. Among Big Data enabling technologies, a central place is held by the MapReduce framework and, in particular, by its open source implementation, Apache Hadoop. For cost-effectiveness considerations, a common approach entails sharing server clusters among multiple users. The underlying infrastructure should provide every user with a fair share of computational resources, ensuring that service level agreements (SLAs) are met and avoiding waste. In this paper we consider mathematical models for the optimal allocation of computational resources in a Hadoop 2.x cluster, with the aim of developing new capacity allocation techniques that guarantee better performance in shared data centers. Our goal is to obtain a substantial reduction in power consumption while respecting the deadlines stated in the SLAs and avoiding the penalties associated with job rejections. The core of this approach is a distributed algorithm for runtime capacity allocation, based on Game Theory models and techniques, that mimics the MapReduce dynamics by means of interacting players, namely the central Resource Manager and the Class Managers.

15:10
Carlos De Alfonso (Universidad Politecnica de Valencia, Spain)
Ignacio Blanquer (UPV, Spain)
Germán Moltó (Universidad Politécnica de Valencia, Spain)
Miguel Caballer (Universidad Politécnica de Valencia, Spain)
Automatic Consolidation of Virtual Machines in On-Premises Cloud Platforms

ABSTRACT. After a sequence of creations and destructions of virtual machines (VMs) in an on-premises Cloud computing platform, the scheduling decisions that placed the VMs are far from optimal, and the fragmentation of the physical resources may prevent the platform from hosting some VMs despite free virtualization resources being available. This paper describes a Virtual Machine Consolidation Agent that addresses this problem by analyzing the distribution of the VMs in the virtualization platform and migrating some of them among hosts, in order to defragment the physical resources and enhance the efficiency of their usage. The agent has been validated in a production platform, where it is capable of minimizing the number of servers needed to host the VMs. The algorithms achieve near-optimal values at a very reduced computational cost, thus making them suitable for production platforms.

15:30
Tania Basso (UNICAMP, Brazil)
Regina Moraes (UNICAMP, Brazil)
Nuno Antunes (University of Coimbra, Portugal)
Marco Vieira (University of Coimbra, Portugal)
Walter Santos (Federal University of Minas Gerais, Brazil)
Wagner Meira (Federal University of Minas Gerais, Brazil)
PRIVAaaS: a privacy approach for distributed cloud-based data analytics platforms

ABSTRACT. Assuring data privacy is a key challenge that is exacerbated by Big Data storage and analytics processing requirements. Big Data and Cloud Computing are inseparable, allowing users to access data from any device and making data privacy essential as the data sets are exposed through the web. Organizations care about data privacy as it directly affects clients' confidence that their personal data are safe. This paper presents a data privacy approach, PRIVAaaS, which was integrated into the LEMONADE Web-based platform developed to compose ETL and Machine Learning workflows. The 3-level approach of PRIVAaaS, based on data anonymization policies, is implemented in a software toolkit providing a set of libraries and tools that allow controlling and reducing data leakage in the context of big data processing.

16:00-16:30 Coffee Break
16:30-18:00 Session 4B: TAPEMS (II)
Chairs:
Juan C. Díaz-Martín (Universidad de Extremadura, Spain)
Juan Antonio Rico Gallego (University of Extremadura, Spain)
Location: Goya
16:30
Michael Wagner (Barcelona Supercomputing Center (BSC), Spain)
Andreas Knüpfer (ZIH, TU Dresden, Germany)
Automatic Adaption of the Sampling Frequency for Detailed Performance Analysis

ABSTRACT. One of the most urgent challenges in event-based performance analysis is the enormous amount of collected data. Combining event tracing and periodic sampling has been a successful approach that allows a detailed event-based recording of MPI communication and a coarse recording of the remaining application with periodic sampling. In this paper, we present a novel approach to automatically adapt the sampling frequency at runtime to a given amount of buffer space, relieving users from having to find an appropriate sampling frequency themselves. This way, the entire measurement can be kept within a single memory buffer, which avoids disruptive intermediate memory buffer flushes, excessive data volumes, and measurement delays due to slow file system interaction. We describe our approach to sorting and storing samples, based on their order of occurrence, in a hierarchical array based on powers of two. Furthermore, we evaluate the feasibility as well as the overhead of the approach with the prototype implementation OTFX, based on the Open Trace Format 2, a state-of-the-art open source event trace library used by the performance analysis tools Vampir, Scalasca, and Tau.
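
A simplified, hypothetical Python sketch of runtime sampling-rate adaptation (OTFX's hierarchical power-of-two array is more refined than this): when a fixed buffer fills, every other stored sample is discarded and the sampling period doubles, so the whole run always fits in one memory buffer:

    class AdaptiveSampleBuffer:
        def __init__(self, capacity):
            self.capacity = capacity
            self.period = 1        # keep every `period`-th incoming sample
            self.samples = []
            self._tick = 0

        def record(self, sample):
            self._tick += 1
            if self._tick % self.period != 0:
                return             # skipped at the current sampling rate
            if len(self.samples) == self.capacity:
                self.samples = self.samples[::2]  # thin out stored samples
                self.period *= 2                  # halve future frequency
            self.samples.append(sample)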

17:00
Gary Lawson (Old Dominion University, USA)
Masha Sosonkina (Old Dominion University, USA)
Tal Ezer (Old Dominion University, USA)
Yuzhong Shen (Old Dominion University, USA)
Empirical Mode Decomposition for Modeling of Parallel Applications on Intel Xeon Phi Processors

ABSTRACT. For modern parallel applications, modeling general execution characteristics such as power and time is difficult, due to the great many factors affecting software-hardware interactions; this is exacerbated by the dearth of measuring and monitoring tools for novel architectures such as Intel Xeon Phi processors. To address this modeling challenge, the present work proposes to employ the Empirical Mode Decomposition (EMD) method to describe an execution as a series of modes culminating in a single residual trend, for which, in turn, a model equation is obtained as a non-linear fit. As an outcome, the overall energy consumption may be predicted using this model. A real-world quantum-chemistry application, GAMESS, and a molecular-dynamics proxy application, CoMD, were considered in the experiments. The results demonstrate that the modeled energy ranged within 10-30% of the measured energy, depending on the length of the execution.

17:30
Edson Florez (Universidad Industrial de Santander, Colombia)
Carlos Jaime Barrios Hernandez (Universidad Industrial de Santander, Colombia)
Joseph Emeras (University of Luxembourg, Luxembourg)
Johnatan E. Pecero Sanchez (University of Luxembourg, Luxembourg)
Energy model for low-power cluster

ABSTRACT. Energy efficiency in high performance computing (HPC) systems is a relevant issue nowadays, and it is approached from multiple angles and components (network, I/O, resource management, etc.). The HPC industry has turned its focus towards embedded and low-power computational infrastructures (based on RISC-architecture processors) to improve energy efficiency; we therefore use an ARM-based cluster, known as a millicluster, designed to achieve high energy efficiency at low power. We provide a model for energy consumption estimation based on experimental data obtained from measurements performed during a benchmarking process that represents a real-world workload, such as artificial intelligence algorithms from scientific computing. The energy model enables highly accurate power prediction for tasks on low-power nodes, and its implementation in an HPC job scheduling algorithm facilitates the simultaneous optimization of energy consumption and performance metrics.

18:00
Misikir Eyob Gebrehiwot (Aalto University, Finland)
Samuli Aalto (Aalto University, Finland)
Pasi Lassila (Aalto University, Finland)
Near-optimal policies for energy-aware task assignment in server farms

ABSTRACT. Rising energy costs and the push for green computing have inspired a lot of research effort towards energy-efficient computing. Incorporating low-energy sleep states in server farms is one of the proposed solutions. This paper studies the trade-off between energy and performance inherent in such solutions, using the popular cost metric Energy-Response-time-Weighted-Sum (ERWS). We apply Markov Decision Process (MDP) theory to the task assignment problem and derive a near-optimal dynamic task assignment policy for minimizing the ERWS cost metric. Furthermore, we consider a performance-constrained energy minimization problem and provide an algorithm that builds a dynamic task assignment policy by choosing the right energy weight value for the ERWS cost metric. We also show that the resulting task assignment policy behaves like a modified version of Join the Shortest Queue (JSQ), achieving near-optimal performance by minimizing energy consumption while still obeying the response time constraint.
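
For reference, a common formulation of the ERWS cost metric (notation assumed here; the paper's exact weighting may differ) is a weighted sum of mean energy and mean response time:

    \mathrm{ERWS} \;=\; \mathbb{E}[\mathcal{E}] \;+\; w \,\mathbb{E}[T]

where \mathbb{E}[\mathcal{E}] is the mean energy consumption, \mathbb{E}[T] the mean task response time, and w > 0 the weight the proposed algorithm tunes so that the response time constraint is satisfied while energy is minimized.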

16:30-18:00 Session 4C: EBDMA (II)
Chairs:
Manish Parashar (Rutgers, The State University of New Jersey, USA)
Anna Queralt (Barcelona Supercomputing Center, Spain)
Location: Velazquez
16:30
Shweta Salaria (Tokyo Institute of Technology, Japan)
Kevin Brown (Tokyo Institute of Technology, Japan)
Hideyuki Jitsumoto (Tokyo Institute of Technology, Japan)
Satoshi Matsuoka (Tokyo Institute of Technology, Japan)
Evaluation of HPC-Big Data Applications Using Cloud Platforms

ABSTRACT. The path to HPC-Big Data convergence has resulted in numerous studies that demonstrate the performance trade-off between running applications on supercomputers and on cloud platforms. Previous studies typically focus either on scientific HPC benchmarks or on older cloud configurations, failing to consider the new opportunities offered by current cloud offerings. We present a comparative study of the performance of representative big data benchmarks, the "Big Data Ogres", and HPC benchmarks running on a supercomputer and in the cloud. Our work distinguishes itself from previous studies in that we explore the latest generation of compute-optimized Amazon Elastic Compute Cloud instances, C4, for our cloud experiments. Our results reveal that Amazon C4 instances, with increased compute performance and low variability in results, make EC2-based clusters feasible for scientific computing and its applications in simulation, modeling and analysis.

17:00
Ovidiu Cristian Marcu (Inria Rennes Bretagne Atlantique, France)
Radu Tudoran (Huawei Research Germany, Germany)
Bogdan Nicolae (Huawei Research Germany, Germany)
Alexandru Costan (Inria Rennes Bretagne Atlantique, France)
Gabriel Antoniu (Inria Rennes Bretagne Atlantique, France)
Maria Perez (Universidad Politecnica de Madrid, Spain)
Exploring Shared State in Key-Value Store for Window-Based Multi-Pattern Streaming Analytics

ABSTRACT. We are now witnessing an unprecedented growth of data that needs to be processed at ever-increasing rates in order to extract valuable insights. Big Data streaming analytics tools have been developed to cope with the online dimension of data processing: they enable real-time handling of live data sources by means of stateful aggregations (operators). Current state-of-the-art frameworks (e.g. Apache Flink) let each operator work in isolation by creating data copies, at the expense of increased memory utilization. In this paper, we explore the feasibility of deduplication techniques to address the challenge of reducing the memory footprint of window-based stream processing without significant impact on performance. We design a deduplication method specifically for window-based operators that rely on key-value stores to hold a shared state. We experiment with a synthetically generated workload while considering several deduplication scenarios and, based on the results, identify several potential areas of improvement. Our key finding is that more fine-grained interactions between streaming engines and (key-value) stores need to be designed in order to better respond to scenarios that have to overcome memory scarcity.
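
A minimal sketch of the general idea (an assumed design, not the authors' implementation): each record is stored once in a shared key-value store under a content-derived key, and overlapping windows keep only lists of keys plus reference counts instead of private copies:

import hashlib, json

kv_store = {}   # shared record store: key -> record payload
windows = {}    # window_id -> list of record keys
refcount = {}   # key -> number of windows referencing the record

def put(window_id, record):
    key = hashlib.sha1(json.dumps(record, sort_keys=True).encode()).hexdigest()
    kv_store.setdefault(key, record)               # store payload once
    windows.setdefault(window_id, []).append(key)  # window holds a reference
    refcount[key] = refcount.get(key, 0) + 1

def evict(window_id):
    for key in windows.pop(window_id, []):
        refcount[key] -= 1
        if refcount[key] == 0:                     # last reference gone
            del kv_store[key], refcount[key]

put("w1", {"user": 7, "clicks": 3})
put("w2", {"user": 7, "clicks": 3})  # same record, no second copy
print(len(kv_store))                  # -> 1

The reference-counted eviction is what makes sharing safe across sliding windows: a record's payload survives exactly as long as at least one window still needs it.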

17:30
Alessandro D'Anca (CMCC Foundation, Italy)
Cosimo Palazzo (CMCC Foundation, Italy)
Donatello Elia (CMCC Foundation, Italy)
Sandro Fiore (CMCC Foundation, Italy)
Ioannis Bistinas (University of Reading, UK)
Kristin Böttcher (Finnish Environment Institute Remote Sensing, Finland)
Victoria Bennett (TBA, UK)
Giovanni Aloisio (University of Salento, Italy)
On the Use of In-Memory Analytics Workflows to Compute eScience Indicators from Large Climate Datasets

ABSTRACT. The need to apply complex algorithms to large volumes of data is boosting the development of technological solutions able to satisfy big data analytics needs in Cloud and HPC environments. In this context, Ophidia is a big data analytics framework for eScience offering a cross-domain solution for managing scientific, multi-dimensional data. It exploits in-memory distributed data storage and supports the submission of complex workflows through various interfaces compliant with well-known standards. This paper presents some applications of Ophidia to the computation of climate indicators defined in the CLIPC project, the WPS interface used for submission, and the workflow-based approach employed.
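
For illustration only, here is a toy computation of one climate indicator of the kind such workflows target ("tropical nights": days with minimum temperature above 20 degC), written in plain NumPy on a made-up dataset rather than through Ophidia's in-memory operators:

import numpy as np

# Assumed toy dataset: daily minimum temperature [day, lat, lon] in degC.
tmin = np.random.default_rng(0).normal(18, 5, size=(365, 4, 4))

# Per-grid-cell count of tropical nights over one year.
tropical_nights = (tmin > 20.0).sum(axis=0)
print(tropical_nights)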

16:30-18:00 Session 4D: WACC 2017 (II)
Chairs:
Ignacio Blanquer (Universitat Politècnica de València, Spain)
Roy Campbell (University of Illinois at Urbana-Champaign, USA)
Wagner Meira Jr. (Universidade Federal de Minas Gerais, Brazil)
Location: Serrano
16:30
Paulo Esteves-Veríssimo (University of Luxembourg, Luxembourg)
Assured Cloud Computing: are we there yet?

ABSTRACT. KEYNOTE SPEAKER

17:00
Rafael Pires (University of Neuchatel, Switzerland)
Daniel Gavril (Alexandru Ioan Cuza University of Iasi, Romania)
Pascal Felber (University of Neuchatel, Switzerland)
Emanuel Onica (Alexandru Ioan Cuza University of Iasi, Romania)
Marcelo Pasin (Université de Neuchâtel, Switzerland)
A lightweight MapReduce framework for secure processing with SGX

ABSTRACT. MapReduce is a programming model used extensively for parallel data processing in distributed environments. A wide range of algorithms have been implemented using MapReduce, from simple tasks like sorting and searching up to complex clustering and machine learning operations. Many of these implementations are part of services externalized to cloud infrastructures. Over the past years, however, many concerns have been raised regarding the security guarantees offered in such environments. Solutions relying on cryptography have been proposed to counter these threats, but they typically imply a high computational overhead. Intel, the largest manufacturer of commodity CPUs, recently introduced SGX (Software Guard Extensions), a set of hardware instructions that support execution of code in an isolated, secure environment. In this paper, we explore the use of Intel SGX to provide privacy guarantees for MapReduce operations, and based on our evaluation we conclude that it represents a viable alternative to cryptographic mechanisms. We present results based on the widely used k-means clustering algorithm, but our implementation can be generalized to other applications that can be expressed using the MapReduce model.
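
A minimal sketch of one k-means iteration expressed as map and reduce steps, to show the programming model the paper protects; the SGX enclave isolation itself is a hardware/SDK-level mechanism and is not shown here:

from collections import defaultdict
import math

points = [(1.0, 1.0), (1.2, 0.8), (8.0, 8.1), (7.9, 8.3)]
centroids = [(0.0, 0.0), (10.0, 10.0)]

def mapper(p):
    # Emit (nearest-centroid-id, point) pairs.
    cid = min(range(len(centroids)),
              key=lambda i: math.dist(p, centroids[i]))
    return cid, p

def reducer(cid, pts):
    # New centroid = mean of the points assigned to it.
    n = len(pts)
    return cid, tuple(sum(c) / n for c in zip(*pts))

groups = defaultdict(list)
for cid, p in map(mapper, points):
    groups[cid].append(p)

new_centroids = dict(reducer(cid, pts) for cid, pts in groups.items())
print(new_centroids)

In an SGX-backed deployment, the map and reduce functions would run inside enclaves so that the cloud provider never observes the plaintext records; the sketch above only shows the MapReduce decomposition itself.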

17:30
Philipp Stephanow (Fraunhofer AISEC, Germany)
Christian Banse (Fraunhofer AISEC, Germany)
Evaluating the performance of continuous test-based cloud service certification

ABSTRACT. Continuous test-based cloud certification uses tests to automatically and repeatedly evaluate whether a cloud service satisfies customer requirements over time. However, inaccurate tests can decrease customers' trust in test results and can lead to providers disputing the results of test-based certification techniques. In this paper, we propose an approach to evaluating the performance of test-based cloud certification techniques. Our method makes it possible to draw conclusions about the general performance of test-based techniques, compare alternative techniques, and compare alternative configurations of test-based techniques. We present experimental results showing how we used our approach to evaluate and compare exemplary test-based techniques supporting the certification of requirements related to security, reliability and availability.
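
One illustrative way (an assumption for this example, not necessarily the authors' exact method) to quantify the performance of a test-based certification technique is to compare its verdicts against known ground truth, e.g. injected violations, and report standard detection metrics:

def evaluate(verdicts, ground_truth):
    """verdicts/ground_truth: lists of booleans, True = 'requirement violated'."""
    tp = sum(v and g for v, g in zip(verdicts, ground_truth))
    fp = sum(v and not g for v, g in zip(verdicts, ground_truth))
    fn = sum((not v) and g for v, g in zip(verdicts, ground_truth))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {"precision": precision, "recall": recall}

# Toy run: six repeated tests against a service with known injected violations.
print(evaluate([True, False, True, True, False, False],
               [True, False, False, True, True, False]))

Running such an evaluation repeatedly over time, or across competing techniques and configurations, yields the kind of comparison the abstract describes.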

18:00
Amin Nezarat (PNU University, Iran)
A game theoretic method for VM-to-hypervisor attacks detection in cloud environment

ABSTRACT. Cloud computing is a pool of scalable virtual resources serving a large number of users who pay fees depending on the extent of the service they use. From a payment perspective, cloud is like electricity and water: those who use more of the shared pool pay larger fees. Cloud computing involves a diverse set of technologies, including networking, virtualization and transaction scheduling, so it is vulnerable to a wide range of security threats. Some of the most important security issues threatening cloud computing systems originate from virtualization technology, as it constitutes the main body and basis of these systems. The most important virtualization-based security threats include VM side-channel, VM escape and rootkit attacks. Previous work on virtualization security relies on hardware approaches such as firewalls, which are expensive; on schedulers that control side channels along with noise injection, which imposes high overhead; or on agents that collect information and send it back to a central intrusion detection system, which can itself become the target of an attack. In the method presented in this paper, a group of mobile agents act as sensors for invalid actions in the cloud environment. They start a non-cooperative game with the suspected attacker and then calculate the Nash equilibrium value and utility so as to differentiate an attack from legitimate requests and to determine the severity of the attack and its point of origin. The simulation results show that this method can detect attacks with 86% accuracy. The use of mobile agents and their trainability has reduced system overhead and accelerated the detection process.
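
As a toy sketch of the game-theoretic core, here is the mixed-strategy Nash equilibrium of a 2x2 zero-sum detection game between a monitoring agent (rows: inspect / don't inspect) and a suspect VM (columns: attack / behave); the payoff values are hypothetical, and the paper's actual game and utility functions differ:

def nash_2x2_zero_sum(A):
    # Closed-form mixed equilibrium of a 2x2 zero-sum game without a saddle point.
    (a, b), (c, d) = A
    denom = a - b - c + d
    p = (d - c) / denom            # prob. the agent inspects
    q = (d - b) / denom            # prob. the suspect attacks
    value = (a * d - b * c) / denom
    return p, q, value

# Hypothetical payoffs to the defender:
# [inspect vs attack, inspect vs behave], [no-inspect vs attack, no-inspect vs behave]
payoffs = [[4.0, -1.0],
           [-3.0, 1.0]]
p, q, v = nash_2x2_zero_sum(payoffs)
print(f"inspect with prob {p:.2f}; attacker attacks with prob {q:.2f}; game value {v:.2f}")

Comparing the suspect's observed behavior against the equilibrium utilities is the kind of signal the abstract uses to separate attacks from legitimate requests.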