PREVAIL 2021: IBM CONFERENCE ON PERFORMANCE ENGINEERING, RESILIENCE, SECURITY, SITE RELIABILITY ENGINEERING (SRE) AND TESTING
PROGRAM FOR WEDNESDAY, OCTOBER 20TH

00:00-01:00 Session 16A
Location: Channel #A
00:00
Know what ROSA is? OpenShift on AWS – unlock greater possibilities
PRESENTER: Amrita Maitra

ABSTRACT. Talking about cloud, the names that pop into our minds are AWS, Azure, Google, IBM and many more, each with great features. In the cloud, Kubernetes is an orchestration platform for containers, and OpenShift is an advanced, enhanced distribution of it. But have you wondered about combinations such as AWS and OpenShift, or Azure and OpenShift? In today’s session we will talk about ROSA, that is, Red Hat OpenShift on AWS: the underlying infrastructure is from AWS, and the orchestration technology on top of it is OpenShift. This is a great combination for running application workloads in the cloud. We will discuss the architecture and benefits of ROSA, sample source and target patterns for migrating apps to ROSA, and some of the tools that integrate with ROSA.

Expected outcome: The audience will learn what ROSA is, why it is used and how it is positioned in the market today. Session type – Learning module. Delivery method – lecture, storytelling, ppt. Sample customer situation: Our client is a leading financial services group in Asia with a presence in 18 markets. Their business and transformation objectives were:
• Best in Class Customer Experience – Set Bar at New Level, Redefine Customer Journey
• Ecosystem to Acquire Customers at Scale
• Bring Power to Predict through Data and AI
• Efficient Operations
• Re-Architect & Modernize Core: Digital to Core
• Agile & Learning Organization – Start-up Culture
• Hybrid Multi Cloud (Resilient, Modular & Scalable) + SRE (Site Reliability Engineering)
• Multi-location Delivery

All the business and transformation objectives were achieved with a solution built using Redhat Openshift on AWS. We will explain in detail how these goals are achieved.

Biography – Amrita Maitra: Amrita Maitra is a certified Application Architect specializing in the migration of client workloads to AWS. She is an expert in .NET technologies and extensively uses design thinking in her deliverables. She has 12 years of experience and has worked with clients from various sectors, mainly in the finance domain. For any queries, please reach out to https://in.linkedin.com/in/amrita-maitra

Amitabh Mohanty: Amitabh Mohanty is an AWS certified Solution Architect. He specialises in migrating applications from on-premises environments to hybrid cloud infrastructure. He also has expertise in AWS DevOps offerings and related technologies. He is proficient in analysing and translating business requirements into technical requirements and architecture. For queries please reach out to: https://www.linkedin.com/in/amitabh-mohanty-a2195b1a7

00:00-01:00 Session 16B
Location: Channel #B
00:00
Performance Testing of Chatbot Applications

ABSTRACT. Nowadays most applications use transactional chatbots and AI systems for ease of use and quick solutions. But what happens when chatbots fail to meet user expectations? For example, in an e-commerce application, a customer’s billing gets stuck partway through and they turn to the bot for assistance. In this scenario, if the bot does not respond within a fraction of a second, it might trigger their rage. Hence the need for performance testing. It is not just about measuring whether the chatbot can sustain its output under many concurrent users; it is also important to measure the response time for an individual user. A chatbot exists primarily to smooth the user experience, so improving that experience is vital for any organization. This paper elaborates on the performance testing of a chatbot application.
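The measurement the abstract describes — per-user response time under concurrent load — can be sketched in a few lines. This is a minimal illustration, not the paper’s actual harness: `chatbot_reply` is a hypothetical stand-in that you would replace with a real HTTP request to your bot.

```python
import concurrent.futures
import statistics
import time

def chatbot_reply(prompt: str) -> str:
    """Stand-in for a real chatbot call (replace with an HTTP request to your bot)."""
    time.sleep(0.01)  # simulated bot processing latency
    return f"echo: {prompt}"

def measure_latencies(n_users: int = 20) -> dict:
    """Run n_users concurrent sessions and report per-user response times."""
    def one_session(i: int) -> float:
        start = time.perf_counter()
        chatbot_reply(f"where is my order {i}?")
        return time.perf_counter() - start

    with concurrent.futures.ThreadPoolExecutor(max_workers=n_users) as pool:
        latencies = list(pool.map(one_session, range(n_users)))
    return {"p50": statistics.median(latencies),
            "max": max(latencies),
            "count": len(latencies)}
```

The point of reporting both the median and the maximum is exactly the abstract’s distinction: aggregate throughput can look healthy while an individual user still waits too long.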

01:00-02:00 Session 17A
Location: Channel #A
01:00
Secure boot for Zero Trust environment
PRESENTER: Samvedna Jha

ABSTRACT. There are several aspects to a Zero Trust environment; the one discussed in this proposal is secure boot. The concern here is not about applications accessed through a public network: the product is deployed in the customer’s private network. For such applications, we want to ensure that the OS or application that boots is verified to come from an authentic source, in our case IBM. The session will share details on the secure boot features of the management software for IBM Power, Intel x86 and Linux KVM environments: how these environments provide secure boot for kernels, and what technologies are used to implement it.

01:00-02:00 Session 17B
Location: Channel #B
01:00
Chaos for Operational Acceptance at Enterprise Scale Deployments
PRESENTER: Nagaraj Chinni

ABSTRACT. Operational Acceptance (OA) testing is full of challenges, from building the test cases to executing them. In a complex application that spans networks and platforms, it becomes even more tedious to make sure the operational acceptance tests are executed effectively.

In OA, frameworks, tools and methods play a pivotal role in a successful test. When applying Chaos Engineering to an enterprise-scale deployment spanning multiple availability zones and sites, the following components have to be brought under OA for effective resiliency/availability testing:
1. Peripheral services (GLB, firewall, proxy, etc.)
2. Network
3. Platform (VMs, container platforms)
4. Component (app server, database, etc.)
5. External third-party integrations

Chaos Engineering is a powerful practice that is already changing how software is designed and engineered at some of the largest-scale operations in the world. Where other practices address velocity and flexibility, Chaos specifically tackles systemic uncertainty in these distributed systems. The Principles of Chaos provide confidence to innovate quickly at massive scales and give customers the high-quality experiences they deserve. Chaos Engineering can come as a powerful and handy tool for OAT.

Advances in large-scale, distributed software systems are changing the game for software engineering. As an industry, we are quick to adopt practices that increase the flexibility of development and the velocity of deployment. A question follows on the heels of these benefits: how much confidence can we have in the complex systems that we put into production?

In this session we will present a solution to this complex OAT problem through Chaos Engineering at all layers, focusing on the prevalent architectures and practices, such as microservices and cloud computing, that have changed our IT landscape in recent times.
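The core loop of a chaos experiment for OA can be sketched generically: verify the steady state, inject a failure into one of the components listed above, verify the steady state again, and always restore. This is a hypothetical minimal sketch, not a specific chaos tool; the callback names are assumptions.

```python
import random

def chaos_round(components, steady_state_check, inject_failure, restore):
    """One chaos experiment: confirm steady state, fail a random component,
    re-check the steady state, and always restore the component afterwards."""
    target = random.choice(components)
    assert steady_state_check(), "system unhealthy before injection"
    inject_failure(target)
    try:
        survived = steady_state_check()
    finally:
        restore(target)
    return target, survived

# Usage sketch: a service with two redundant replicas should survive losing one.
state = {"replica-a": True, "replica-b": True}
target, survived = chaos_round(
    list(state),
    steady_state_check=lambda: any(state.values()),
    inject_failure=lambda t: state.__setitem__(t, False),
    restore=lambda t: state.__setitem__(t, True),
)
```

The same loop applies at every layer (peripheral services, network, platform, component, integrations); only the `inject_failure`/`restore` implementations change.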

02:00-03:00 Session 18A
Location: Channel #A
02:00
Secure management of Cloud Pak foundational services
PRESENTER: Yanni Zhang

ABSTRACT. The IBM Cloud Pak platform includes a common set of foundational services, called Bedrock services, that provides modularity, ease of deployment, and unified user experiences across Cloud Pak capabilities. As the foundational layer for all Cloud Paks, Bedrock must have a sound security framework to ensure that the services are deployed and managed securely. Bedrock extends the open-source OLM (Operator Lifecycle Manager) framework with enhanced security to deploy and manage the lifecycle of operators and operands (services deployed by operators). This presentation will cover permissions and scopes, workload isolation, secret sharing, certificate management, RBAC, and more within Bedrock. You will also learn best practices, reusable patterns and mechanisms to build security into your operators and services.

02:00-03:00 Session 18B
02:00
Performance Evaluation of zHyperLink Write for Db2 Active Logging for SAP Core Banking Batch Workload

ABSTRACT. As the customer base of a commercial bank grows, so does the number of bank accounts and transactions that need to be processed each day. This increase in volume results in additional work for overnight account settlement batch processing, and a bank needs to be able to manage this growth without exceeding its allotted overnight batch window. One way Db2 for z/OS customers can accommodate this additional load is by utilizing zHyperLink for z/OS, a short-distance, direct I/O connection between the mainframe and physical storage systems. It provides a significant improvement in Db2 logging latency over high-performance FICON, which can reduce both transactional latency and overnight batch processing time, and also improve batch processing throughput. This presentation will share the experiences and results of a performance study evaluating the benefits of zHyperLink Write on SAP Core Banking: Account Settlement, a realistic banking batch workload. The presentation will cover the basics of zHyperLink Write, the prerequisites for using it, and an analysis of the performance measurement results. After this session, attendees should be able to understand and articulate the zHyperLink features and how they can contribute to improving batch workload throughput and batch processing elapsed time.

03:00-04:00 Session 19A
Location: Channel #A
03:00
Chaos Engineering

ABSTRACT. The resiliency of a critical business application was to be tested for the following reasons:
- There had been no outages for the application in recent times
- There was no data on the mean time to failure (MTTF) and mean time to recovery (MTTR) of the application

This paper explains how effective chaos testing can be performed on an application to test its resiliency.

03:00-04:00 Session 19B
Location: Channel #B
03:00
The Performance Characteristics of Almost Everything

ABSTRACT. Starting with hardware and working through the tech stack, layer by layer, until we reach the user, this presentation looks at the important performance characteristics of nearly everything that makes up the IT landscape, drawing out the underlying performance themes in the process.

Why is this important? As the IT world becomes more complex, simple consumption of resources becomes more obscure, and yet the same themes still govern performance, sometimes in new ways and sometimes in exactly the same way as before. It is easy to forget the foundations of performance, or to forget understood performance principles when looking at new technology. We will take one last look at the nuts and bolts and apply their performance principles to cloud delivery and everything-as-a-service.

This presentation is a Learning Module in lecture format.

Andrew McDonald is an IBM Senior Managing Consultant with 32 years experience in IT, working in a range of fields including Data Centre Operations, Systems Management, Capacity Management, and Architecture. Andrew is Affiliate Leader for the IBM Performance and Availability Community and a member of the IBM Academy of Technology Leadership Team.

04:00-05:00 Session 20A
Location: Channel #A
04:00
Load Testing with JMeter on Kubernetes and OpenShift

ABSTRACT. Following this methodology, performance, automation and regression testing can all be run in parallel with each other, and even a deployment can be done at the same time. CI/CD can be implemented here, which helps execute the tests with minimal effort.

04:00-05:00 Session 20B
Location: Channel #B
04:00
OpenShift Container Platform Scalability and Performance
PRESENTER: Dinesh Kumar

ABSTRACT. This session will cover techniques for improving OpenShift infrastructure scalability and performance at various levels:

1. Hardware level
2. Cluster level
3. Node level
4. Pod level
5. Instance level

06:00-06:59 Session 22
Location: Both Channels
06:00
Building reliable systems on Microsoft Azure

ABSTRACT. Building reliable systems on Azure is a shared responsibility between Microsoft and the customer. Microsoft manages the datacenters, while customers need to apply architectural best practices to make sure their solution requirements are met. Azure can fit everything from simple reliability needs to mission-critical, complex requirements.

In this session we will discuss how to use the Azure Well-Architected Framework to build highly reliable and resilient applications.


07:00-08:00 Session 23A
Location: Channel #A
07:00
Reference architectures for OpenShift Container Platform

ABSTRACT. Deploying a container platform offers many options, so selecting the right ones for your needs can be challenging. It can be helpful to rely on reference architectures: these describe good patterns that have been tested and validated on previous projects. Using them can simplify your design activities and provide a good foundation for your solution and its scalability/availability. This session will introduce reference architectures for the OpenShift platform and for IBM Cloud Paks, and give examples of how they have been used on existing projects.

07:00-08:00 Session 23B
Location: Channel #B
07:00
Monitoring and Performance Assessments of Cloud on IBM Z

ABSTRACT. Performance assessments of any cloud deployment are important. These assessments reveal insights about the load-bearing capacity of the cloud deployment, effectively determining the cost-performance ratio and informing planning for future expansion or shrinkage. This talk will focus on gathering such information about the cloud deployment and present various metrics that can be extracted from each subsystem by the end user or the DevOps/CICD teams. For this purpose, we will present a regression patrol framework that can easily identify anomalies in the performance of any cloud system in general, and IBM Z systems in particular. In addition to identifying the impact of workloads deployed on the cloud systems, this framework is also augmented by visualization aids for easy analysis of performance metrics by the end user or the DevOps/CICD teams.

08:00-09:00 Session 24A
Location: Channel #A
08:00
Obviously a Major Malfunction... Lessons 35 years after the Challenger Disaster

ABSTRACT. The Space Shuttle was the most advanced machine ever designed. It was a triumph and a marvel of the modern world.

And in January 1986, shuttle Challenger disintegrated seconds after launch. This session will discuss how and why the disaster occurred and what lessons modern DevOps and Site Reliability Engineers can learn.

The Challenger disaster was not only a failure of the technology, but a failure of the engineering and management culture in NASA. While engineers were aware of problems in the technology stack, there was no conception of the risks they actually posed to the spacecraft. Management had shifted the focus from “prove that it’s safe to launch” to “prove that it’s unsafe to stop the launch”.

This session will present the risk analysis (or lack thereof) of the Shuttle program and draw parallels to modern software development and SRE. In the end, launching a shuttle is implementing an extremely complex deployment to the cloud… and above it.

08:00-09:00 Session 24B
Location: Channel #B
08:00
Increasing OpenShift Availability with VMware and only Two Available Data Centers

ABSTRACT. The recommended high-availability setup for OpenShift-based workloads is to stretch them across three data centers, or even to have three OpenShift clusters, each in its own data center.

But what should you do if you have only two data centers?

This session talks about an approach to increasing the availability of an OpenShift environment deployed on VMware infrastructure and stretched across two data centers.

09:00-10:00 Session 25A
Location: Channel #A
09:00
SRE: The Good, The Bad, and the Ouch.

ABSTRACT. SRE gives organisations more efficiency, more scalability, more collaboration, and more reliability. What's not to love? It turns out realising these objectives is sometimes easier said than done. This presentation shares experiences of real organisations implementing SRE, and some of the missteps and victories on the way. What happens if SRE is ops, as if it was done by ... ops? Can you ever have too many war rooms? Are your microservices actually decoupled, or are they distributed? And finally, how can trust be grown? The audience will learn what anti-patterns to watch out for and how to avoid them.

09:00-10:00 Session 25B
Location: Channel #B
09:00
Zero Trust in Hybrid Cloud: where to start

ABSTRACT. Learning objectives: how to structure zero trust based solutions in a hybrid cloud context using the IBM Zero Trust Governance Model. Expected outcomes: participants should be able to apply the presented approach to identify and classify zero trust based solutions in a hybrid cloud environment as a starting point for a security design. Session type: Lecture. Delivery method: Lecture. Abstract: Zero Trust is a set of principles. Hybrid cloud is today the reality for most organisations and typically encompasses a mix of very different technologies (legacy data centres and multiple public cloud instances). So where do you start with Zero Trust in hybrid cloud, and how do you define and identify these solutions in a scope as broad as the hybrid cloud? In this lecture I will provide a three-layered structure to position zero trust based solutions: • Some of these solutions address security among all environments (inter-environment layer), • Some are specific to one type of environment, or even just one environment (intra-environment layer), • And some are specific to the workloads and the container clusters where these workloads are running (inter-workload layer). Each of these layers has specific risks which must be addressed. The lecture will include, per layer, an overview of possible security controls to address these risks. The IBM Zero Trust Governance Model will be used to structure and group the controls per security domain. The controls will also be related to the zero trust principles from the NIST 800-207 publication.

10:00-11:00 Session 26A
Location: Channel #A
10:00
Applying SRE principles in Microservices Architecture
PRESENTER: Nagaraj Chinni

ABSTRACT. Microservices architecture has become the architecture of choice for applications in the cloud world. Be it a migration or a cloud-native build, microservices are fast gaining popularity for their obvious advantages. The architecture introduces a plethora of complexities, as the number of components to be managed is far higher than in a traditional monolithic architecture. Complexity is introduced in the form of the different technologies used, the number of integrations, different databases, container and container-orchestration platforms, and the other cloud services used in the architecture.

SREs must look into the various complexities of designing, running and managing microservices. In this presentation we will focus on the following to address availability and reliability requirements upfront, right from the architecture phase:

Design for reliability: This includes choosing a microservices architecture suitable for reliability, and being prepared to test the applications for reliability.

Securing microservices (API and image): Open source adds an additional dimension to the security aspects of developing microservices, as developers tend to use libraries and images from various repositories.

Build to manage: While building microservices, focus should be given to how they will be managed. We will propose various techniques to implement build-to-manage aspects such as health check APIs and distributed tracing.

DevSecOps strategy: We will discuss enhanced DevSecOps stages, such as performance testing and vulnerability checks in the pipeline.
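A health check API of the kind mentioned above is small enough to sketch in full. This is a generic illustration using only the Python standard library, not a specific framework; the `/healthz` path and port are conventional assumptions.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Tiny liveness endpoint of the kind a Kubernetes probe would poll."""
    def do_GET(self):
        if self.path == "/healthz":
            body = json.dumps({"status": "UP"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep probe traffic out of the application logs

def start_health_server(port: int = 8099) -> HTTPServer:
    """Serve the health endpoint on a background thread."""
    server = HTTPServer(("127.0.0.1", port), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

In a real service the handler would also check downstream dependencies (database, queue) before reporting "UP", so that the orchestrator restarts genuinely unhealthy pods.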

Observability: A web of services makes it complex to monitor application reliability. Observability of microservices, with a set of dashboards providing a clear view of the SLIs, is the need of the hour.
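An SLI of the kind a dashboard would surface is just a ratio of good events to total events. As a minimal sketch (the thresholds and record shape are illustrative assumptions, not from any specific tool), two common request-based SLIs can be computed like this:

```python
def sli_report(requests, latency_slo_ms=300, availability_slo=0.999):
    """Compute two request-based SLIs from (latency_ms, succeeded) records:
    availability (fraction of successful requests) and the fraction of
    requests that succeeded within the latency threshold."""
    total = len(requests)
    good = sum(1 for _, ok in requests if ok)
    fast = sum(1 for latency, ok in requests if ok and latency <= latency_slo_ms)
    return {
        "availability": good / total,
        "latency_sli": fast / total,
        "availability_slo_met": good / total >= availability_slo,
    }
```

Dashboards then chart these ratios over rolling windows, which is what makes it possible to see error-budget burn before users notice.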

Finally, the session will bring out some of the best practices/case studies implemented in client projects and discuss the value it brings.

10:00-11:00 Session 26B
Location: Channel #B
10:00
Performance Testing of Citrix Applications using HP LoadRunner
PRESENTER: Swetha Dinesh

ABSTRACT. The complexity of the Citrix protocol, with its latest security features, has to be addressed through a detailed understanding of the Citrix ICA protocol: how it works under the hood, the simulation procedure, steps to address replay issues, handling concurrent test execution issues, and random/parameterized data handling mechanisms. This paper addresses such key issues and shares our learnings on how to handle them. HP LoadRunner is our tool of choice for performance testing of Citrix applications, through its sophisticated built-in support and capabilities for the Citrix ICA protocol.

11:00-12:00 Session 27A
Location: Channel #A
11:00
Secure Cloud Engineering for the Compliance with Industry Standards

ABSTRACT. This presentation will introduce an evolving practice in the secure design of hybrid-multicloud solutions. It is meant to provide detailed guidance for cloud solution SMEs on combining best practices in cloud solutioning and security into an effective and cost-efficient technique to enable secure cloud computing. Existing cloud solutioning practices are being extended to address emerging needs for securing complex multi-cloud environments. Since most customers who consider cloud transformation come from a specific industry, nowadays it is not sufficient to just design cloud solutions securely: it is mandatory to demonstrate that the solution design complies with common industry standards, such as PCI DSS, HIPAA and GDPR, to name a few. The secure cloud engineering method introduced in this presentation will explain how to approach this challenge.

Learning objectives:

- Concept of secure cloud solutioning
- Industry Standards and corresponding controls – PCI DSS and HIPAA examples
- Industry Standard focused cloud solutioning – best practices
- Security Controls and means to fulfill them in the course of a design

Expected outcomes:

- Understanding of cloud security
- Understanding of common Industry Standards and corresponding controls
- Understanding of how to approach secure design of a cloud solution for a specific industry – Insurance Industry example

Session type:

Experience sharing / Innovative point of view

Delivery Method:

Lecture / case study / game or other

11:00-12:00 Session 27B
Location: Channel #B
11:00
Using GitOps to Deploy Cloud Paks - GitOps hands on
PRESENTER: Noel Colon

ABSTRACT. The deployment, versioning, and management of IBM Cloud Pak solutions and their OpenShift infrastructure can now be automated using OpenShift GitOps and an end-to-end pipeline. GitOps enables infrastructure as code, where the desired state of a cloud application environment can be defined and maintained from GitHub repos. All the components you need are built into OpenShift and the IBM Cloud Paks, demonstrating the tangible value of the platform. You will see the architecture and the code, and you will be ready to try it out yourself after this session.

This is the technical implementation & demonstration of the "Two sides of the same coin" Keynote.

15:00-15:59 Session 29
Location: Both Channels
15:00
Zero Trust

ABSTRACT. Zero trust is an emerging approach to security which is particularly useful in the current threat environment. This talk will cover the basics of zero trust, and how it is being used in IBM.

16:00-17:00 Session 30A
Location: Channel #A
16:00
Assure SLOs Automatically

ABSTRACT. Has your organization invested in building stateless, horizontally scalable services, or is it architecting towards truly scale-out containerized applications? If so, join us on October 19-21, 2021 at IBM’s annual follow-the-sun virtual event, PREVAIL 2021. This event is devoted to IT resilience, performance, security, quality testing and SRE.

During our session you will learn how to reap the rewards of your SLO investment and let SLOs drive your infrastructure. Use response-time SLOs to drive pod scaling and resourcing at every layer of your application stack. You will see a working example of how Turbonomic can continuously and automatically maintain the response-time SLOs you set using autonomous scaling.
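The idea of letting a response-time SLO drive replica counts can be sketched with a simple proportional rule. To be clear, this is not Turbonomic’s actual engine (which the session demonstrates); it is a generic sketch in the spirit of the Kubernetes HPA’s proportional calculation, and it assumes latency scales roughly linearly with load, which real systems only approximate.

```python
import math

def desired_replicas(current: int, observed_p95_ms: float, slo_ms: float,
                     min_replicas: int = 1, max_replicas: int = 20) -> int:
    """Proportional scaling on a response-time SLO: if observed p95 latency is
    twice the SLO, ask for twice the replicas (clamped to sane bounds)."""
    target = math.ceil(current * observed_p95_ms / slo_ms)
    return max(min_replicas, min(max_replicas, target))
```

For example, a service running 4 replicas with a 600 ms p95 against a 300 ms SLO would be scaled to 8, while one comfortably under the SLO would be scaled down, releasing resources.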

16:00-17:00 Session 30B
Location: Channel #B
16:00
Memory Measurements Complexities and Considerations

ABSTRACT. Performance and scale engineering involves complexities many people are not aware of. Memory usage in particular is not always straightforward, involving unexpected characteristics that need to be accounted for. I doubled my workload, shouldn’t my memory usage double as well? Is total memory usage the right statistic to view? High memory “usage” is bad, right?

Much of the confusion centers around the system file caches which use remaining available memory to optimize disk IO. Knowing which memory metrics count this usage in their stat is therefore crucial to proper memory analysis.
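The file-cache distinction above is visible directly in Linux’s `/proc/meminfo`: `MemFree` excludes the page cache, while `MemAvailable` adds back what the kernel estimates it can reclaim. As a minimal sketch (the sample values below are made up for illustration; on a real system you would read the file itself):

```python
SAMPLE = """MemTotal:       16384000 kB
MemFree:         1024000 kB
MemAvailable:   12288000 kB"""

def memory_view(meminfo_text: str) -> dict:
    """Distinguish 'free' from 'available': the gap is mostly reclaimable page cache."""
    kb = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        kb[key.strip()] = int(rest.split()[0])  # values are in kB
    return {
        "used_naive_kb": kb["MemTotal"] - kb["MemFree"],      # counts cache as "used"
        "used_real_kb": kb["MemTotal"] - kb["MemAvailable"],  # excludes reclaimable cache
        "reclaimable_kb": kb["MemAvailable"] - kb["MemFree"],
    }
```

On a live system, replace `SAMPLE` with `open("/proc/meminfo").read()`. The naive figure can make a healthy box look nearly full, which is exactly the trap the abstract warns about.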

In this session we will dive into memory measurements and statistics at both the Linux system level and the level of Kubernetes pods and containers. Both have multiple memory metrics, some more important than others. Knowing that they exist, and understanding how they relate and interact, is both interesting and important to our role of optimizing and improving resource consumption. To illustrate this, we will use real-world examples from running systems that show the interactions and behavior of the metrics as things change.

In a Kubernetes Cloud deployment, performance and scale engineers are constantly after how best to set memory requests and limits for each container. These requests and limits help Kubernetes schedule the containers and ensure proper sharing of resources happens. Building on the knowledge of the various memory statistics available, we can reach some conclusions on best practices and recommendations around setting the requests and limits.
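One common heuristic for the requests-and-limits question above is to derive both from observed working-set samples. This is an illustrative starting point, not an official Kubernetes formula; the median-for-request, p99-plus-headroom-for-limit split and the 1.2 headroom factor are assumptions.

```python
def recommend_memory_settings(samples_mib, headroom=1.2):
    """Heuristic: request = median observed working set (what the scheduler
    reserves), limit = p99 with safety headroom (where the OOM killer bites)."""
    s = sorted(samples_mib)
    request = s[len(s) // 2]              # median sample
    p99 = s[int(0.99 * (len(s) - 1))]     # near-worst-case sample
    limit = int(p99 * headroom)
    return request, limit
```

Crucially, the samples should be the container’s working set, not a naive "used" figure that counts reclaimable file cache, or the recommendation will be inflated.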

Attendees will ultimately leave the learning module lecture with an awareness and better understanding of how to analyze and interact with memory usage in a Linux and Kubernetes environment.

https://community.ibm.com/community/user/aiops/blogs/riley-zimmerman/2021/07/01/memory-measurements-part-1

17:00-18:00 Session 31A
Location: Channel #A
17:00
The Cloud Engineering Paradigm

ABSTRACT. In large corporations we need to be able to integrate with legacy systems and established technologies to provide continuity to our customers. But one of the challenges we have is explaining how the cloud is a different engineering paradigm. A “paradigm” is the way of thinking that lies behind how we solve problems - the principles, the theory and the method. For those that have not experienced cloud directly, it can be difficult to understand what is so different. This talk will describe the Cloud Engineering Paradigm using 12 principles that I have found to be key differentiators, and helpful for opening minds to our ways of working.

17:00-18:00 Session 31B
Location: Channel #B
17:00
IBM Cloud Observability, key concepts and challenges
PRESENTER: Andrew Low

ABSTRACT. Learning objectives: understand the challenges of gathering logs, metrics and audit data in public cloud environments; introduce considerations for highly regulated cloud environments. Expected outcomes: attendees will have a basic understanding of observability in public cloud. Session type: experience sharing. Delivery method: lecture.

Cloud has changed how software is delivered to production. Virtualization, Kubernetes (containers), microservices, hosted aaS components, SRE, DevOps and IaC have allowed teams to create and deploy new capabilities in record time. These changes come with additional operational complexity that exceeds traditional solutions for logging and monitoring. A key capability of the IBM Cloud platform is the Observability suite, enabling customers to view the logs and metric data of their hosted applications. Domain experts Andrew Low and JP Parkin will discuss common use cases and the considerations needed to be successful. They will also touch on the additional challenges of high-security and regulated environments, with a viewpoint on the key factors to consider.

18:00-18:58 Session 32
Location: Both Channels
18:00
Distinguished Technical Leaders Panel
PRESENTER: Rashik Parmar

ABSTRACT. Open discussion with technical leaders to discuss current and future directions as well as challenges

19:00-20:00 Session 33A
Location: Channel #A
19:00
Iter8: Release Engineering for Cloud Native Applications

ABSTRACT. Learning objectives:

Reliably releasing new versions of software to production is a key concern for DevOps and SRE teams, where the following basic questions need to be tackled. 1) Does the new version satisfy latency and error-related service-level objectives (SLOs)? 2) Will it maximize business value? 3) If it is release-worthy, how can it be safely rolled out to end-users? If not, how can it be safely rolled back? 

We present Iter8 (https://iter8.tools), an open-source AI-driven release engineering platform for Kubernetes-based applications. The core innovation in Iter8 is the notion of an experiment, which can be used for orchestrating a release. We will show how Iter8 experiments break the release process into cleanly decoupled subproblems: evaluating app versions using well-defined metrics-based criteria, determining the best version (the winner) using statistically rigorous algorithms, progressively rolling out the winner to end users, and promoting the winner at the end of the experiment as the (latest) stable version. We will also showcase the unparalleled flexibility Iter8 offers in how these subproblems are addressed, and how their solutions can be mixed and matched.
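To make the "statistically rigorous winner determination" subproblem concrete, here is a deliberately simplified sketch: a two-proportion z-test on error rates between a baseline and a candidate version. This is not Iter8’s actual assessment algorithm; it is a generic frequentist stand-in for the idea that a winner is declared only when the data supports it.

```python
import math

def pick_winner(baseline, candidate, z_threshold=1.96):
    """Each argument is (errors, requests). Returns 'candidate', 'baseline',
    or None when the difference in error rates is not statistically significant."""
    (e1, n1), (e2, n2) = baseline, candidate
    p1, p2 = e1 / n1, e2 / n2
    pooled = (e1 + e2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    if se == 0:
        return None  # no errors anywhere: nothing to distinguish
    z = (p1 - p2) / se
    if z > z_threshold:
        return "candidate"   # significantly fewer errors than baseline
    if z < -z_threshold:
        return "baseline"
    return None
```

The important behavior is the `None` outcome: with noisy or scarce traffic, a principled rollout keeps the experiment running (or rolls back) rather than promoting a version on inconclusive evidence.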

Expected Outcomes:

The audience will walk away with a clear understanding of how to create Iter8-embedded CI/CD/GitOps pipelines, for SLO validation, A/B testing, and progressive rollout of Kubernetes apps, that best fit their organization's needs. They will also learn to integrate Iter8 with K8s eco-system tools such as Helm, ArgoCD (for GitOps), GitHub actions (for CI/CD), Istio, and KFServing (for ML model serving).

Session type: Learning module

Delivery method: Participative lecture with live demos.

Example customer situations: IBM Cloud Code Engine, DevOps ToolChains, API connect, Kiali, KFServing, Istio, KNative, and Seldon; companies like Bloomberg, Trip Advisor, and Seldon adopting Iter8.

Materials for this presentation will be sourced from https://iter8.tools

19:00-20:00 Session 33B
Location: Channel #B
19:00
Performance Engineering and Sustainable Software Development

ABSTRACT. Sustainable software development and operation is an expected response of the ICT (IT and Communications) industry to the modern challenges facing the world economy and the environment. Sustainable development and operation, as the term itself states, addresses the efficiency of both the software production process and the IT product itself. Implications of the former include shorter production cycles and saved resources required to manufacture the IT product. Efficiency of the latter means long-term savings of energy and resources at the operations stage, which affects environmental, economic, and social aspects of society. Quality Engineering, and Performance Engineering in particular, is the right tool to address the challenges of sustainable software development and operation. As an example, exploratory performance testing (XPT) focusses on approaches which directly contribute to a streamlined software development process. Also, the focus of QE/PE on quality attributes makes it an ideal tool to target the product’s sustainability. This presentation provides a brief summary of sustainable software development and illustrates how the methods of Performance Engineering and XPT naturally address the key challenges of sustainable product development and operation. Indeed, sustainability itself should now be considered a key quality attribute of any IT solution.

20:00-21:00 Session 34A
Location: Channel #A
20:00
"Real Attacks, why you should care" and "Ransomware 101"
PRESENTER: Dustin Heywood

ABSTRACT. First: Title: Real Attacks, Why You Should Care. Author: Evil Mog (Dustin Heywood), Hacker Ops Lead, IBM X-Force Red. Bio: Has presented at multiple security conferences including DerbyCon, BSidesLV, PasswordsCon, NolaCon, CypherCon, THINK, and more; member of Team Hashcat. Abstract: This talk will cover multiple methods attackers use to take over an Active Directory domain in the real world. These attacks do not leverage traditional vulnerabilities; instead, they exploit protocol weaknesses that are very hard to detect. The talk will provide an overview of this family of techniques and show how fundamental security design and hygiene prevent these attacks. Attacks discussed include PrivExchange, Print Spooler (auth reflection, remote code execution, and security downgrades), as well as newer attack types such as PetitPotam, Certificate Services abuse, and Zerologon.

Second: Title: Ransomware 101. Author: Troy Fisher, Ethical Hacker, IBM Security. Bio: With over 20 years of experience, Troy is no stranger to security. In addition to his regular duties as an Ethical Hacker at IBM Security, Troy focuses on containerization, threat modeling, and teaching security concepts to developers, businesspeople, and aspiring penetration testers. Abstract: Ransomware is all over the news, but the coverage is often light on details. Who is behind these attacks? Whom are they targeting? How does ransomware work? What can we do to protect ourselves? All of these questions (and more) will be answered.

20:00-21:00 Session 34B
Location: Channel #B
20:00
Architectural patterns for building multi-cloud micro-services

ABSTRACT. Microservices provide the architecture paradigm for developing highly scalable, loosely coupled cloud-native applications that can be quickly deployed on any cloud. The evolution of multi-cloud gives microservice applications further opportunity to increase availability and presence by distributing these services across clouds. This matters because the cloud of the future will be truly distributed, spanning multiple vendors and infrastructure locations (public clouds, on-premises data centers, edge locations, and so forth). Hence, being cloud-native will also require that microservices can run across different clouds and adapt to each.

In this presentation, we will describe new multi-cloud microservice architecture patterns, how to build these services, and how to connect them with a service mesh. We will also discuss trade-offs around security, availability, and performance when deploying multi-cloud microservices.

23:00-23:59 Session 36
Location: Both Channels
23:00
Determinism

ABSTRACT. Uncontrolled and unintended nondeterminism has been a persistent problem for concurrent, parallel, and distributed software. Recent trends have improved the situation by replacing threads and remote procedure calls with publish-and-subscribe buses, actors, and service-oriented architectures, but even these admit nondeterminism and make building deterministic programs difficult. One approach is to abandon determinism, recognizing that software has to handle unpredictable events, communication networks with varying reliability and latencies, unpredictable execution times, and hardware and software failures. In this talk, I will argue, to the contrary, that determinism becomes even more valuable in unpredictable environments. Among its many benefits, determinism enables systematic testing, shifts complexity from application logic to infrastructure, enables fault detection, facilitates composability, and more. The key is to understand that determinism is a property of models, not of physical realizations. In engineering, our primary goal is to coerce the physical world to match our models. In contrast, in science, the primary goal is to coerce the models to match the physical world. In this talk, I will examine what we mean by "determinism" in engineering, science, and a bit in philosophy. Whether a model is deterministic or not depends on how one defines the inputs and behavior of the model. I will conclude by outlining a practical deterministic model well suited for concurrent, parallel, and distributed software. I will describe a realization of this model in a coordination language called Lingua Franca.

Short biography:

Edward A. Lee has been working on embedded software systems for 40 years. After studying and working at Yale, MIT, and Bell Labs, he landed at Berkeley, where he is now Professor of the Graduate School in EECS. His research is focused on cyber-physical systems. He is the lead author of the open-source software system Ptolemy II, author of textbooks on embedded systems and digital communications, and has recently been writing books on philosophical and social implications of technology. His current research is focused on a polyglot coordination language for distributed real-time systems called Lingua Franca that combines features of discrete-event modeling, synchronous languages, and actors.

Pictures:

Pictures may be found here: https://ptolemy.berkeley.edu/~eal/biog.html