PREVAIL 2020: IBM CONFERENCE ON PERFORMANCE ENGINEERING, AVAILABILITY AND SECURITY
PROGRAM FOR WEDNESDAY, SEPTEMBER 16TH

03:00-04:00 Session 14A
03:00
Samvedna Jha (Systems, India)
Threat Modelling Process in Software Development

ABSTRACT. Threat modelling is not a new process to the industry, but like other processes it needs to be revisited and reinforced. With Agile becoming an integral part of software development, the focus has shifted to continuous delivery; the threat modelling process helps teams pause and examine the corner cases of a product. Identifying assets, establishing the trust levels of users, and mapping the communication into and out of the product are the need of the hour. A threat model comes to the rescue in identifying product vulnerabilities, and it helps seed security thinking during product design. Performing vulnerability analysis at such an early stage of development helps us achieve security by design. The proposed session will share sample threat models from legacy software and a cloud application, and discuss how, for different product types, we need to think about creating a threat model and keeping it up to date for future releases.
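As a rough illustration of the asset inventory and trust levels such a process captures, here is a minimal sketch; the asset names, entry points, and STRIDE tags are assumptions for illustration, not the presenter's methodology:

    from dataclasses import dataclass, field

    # STRIDE categories commonly used to classify threats during modelling.
    STRIDE = ["Spoofing", "Tampering", "Repudiation", "Information disclosure",
              "Denial of service", "Elevation of privilege"]

    @dataclass
    class Asset:
        name: str
        trust_level: str                              # e.g. "anonymous", "authenticated", "admin"
        entry_points: list[str] = field(default_factory=list)
        threats: list[str] = field(default_factory=list)

    # Hypothetical inventory for a small web product.
    assets = [
        Asset("user database", "admin", ["ORM layer"], ["Tampering", "Information disclosure"]),
        Asset("login endpoint", "anonymous", ["HTTPS /login"], ["Spoofing", "Denial of service"]),
    ]

    # A simple review rule: look first at assets reachable from the lowest trust level.
    for a in assets:
        if a.trust_level == "anonymous":
            print(f"Review first: {a.name} -> {', '.join(a.threats)}")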

03:00-04:00 Session 14B
03:00
Nithin S N (IBM, India)
Prathyusha Vankayala (IBM, India)
Benefits of Using AI-ML in Performance testing
PRESENTER: Nithin S N

ABSTRACT. Performance testing starts with analysing the application UI and creating test scripts. Users then drive load against the application server, and the load-testing tools generate rich results indicating response time, throughput, CPU utilization, memory utilization, and so on. In the era of software powered by Artificial Intelligence (AI) and Machine Learning (ML), performance engineers should be able to answer questions during the early stages of application design such as: What should we expect once the application is in production? Where are the potential bottlenecks? How do we tune application parameters to maximize performance? Critical applications need a mature approach to performance testing and monitoring. AI is the intelligent part of the performance testing process; it acts as the brain of the process. Routine tasks like test design, scripting, and implementation can be handled using AI, so that test engineers can focus on the creative side of software testing.

Performance test modelling: AI's pattern-recognition strength can extract relevant patterns during load testing, which is very useful for modelling the performance process. The performance test model consists of the algorithms in use, from which AI learns from the given data. AI's ability to anticipate future load problems helps in creating the performance test model efficiently: it deals with large volumes of data and can predict system failures. Once the system data is analysed, a performance test model can be created based on the observed system behaviour.
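A minimal sketch of the kind of predictive model alluded to here, trained on hypothetical load features and latency data (not the presenters' implementation):

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    # Hypothetical history: [concurrent users, requests/s] vs. observed p95 latency (ms).
    X = np.array([[10, 50], [50, 240], [100, 480], [200, 900], [400, 1700]])
    y = np.array([120, 180, 260, 510, 1400])

    model = GradientBoostingRegressor().fit(X, y)

    # Anticipate a future load level before it is ever generated in a test.
    predicted_p95 = model.predict([[300, 1300]])[0]
    print(f"Predicted p95 latency at 300 users: {predicted_p95:.0f} ms")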

SLA design: SLAs should be SMART (Simple, Measurable, Attainable, Realistic, and Time-bound), but most SLAs are not designed this way; that is a basic limitation of human-driven systems. Once AI takes on the role, however, the situation changes: it can track all the affected areas, feed back into the monitoring system with fine granularity, analyse the complexity of the system, and suggest an appropriate SLA. SLA monitoring: tools like Dynatrace and AppDynamics have introduced AI into their systems, helping to identify bottlenecks across multiple application tiers in the early stages of software development. They can analyse the application and predict performance defects at the code level.
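A small sketch of a Measurable, Time-bound SLA check over monitored response times; the target value and the sample data are illustrative assumptions:

    import numpy as np

    def sla_report(samples_ms, target_p95_ms=500, window="last 24h"):
        """Evaluate a Measurable, Time-bound response-time SLA over a sample window."""
        p95 = np.percentile(samples_ms, 95)
        ok = p95 <= target_p95_ms
        return f"{window}: p95={p95:.0f} ms (target {target_p95_ms} ms) -> {'PASS' if ok else 'BREACH'}"

    # Hypothetical latency samples gathered by a monitoring agent.
    print(sla_report(np.random.lognormal(mean=5.5, sigma=0.4, size=10_000)))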

The role of AI in every phase of performance testing and engineering has proved very beneficial and is the future of performance testing. Using AI in performance testing will make tasks like scripting and monitoring highly impactful and help deliver real-time results very quickly. I believe that, in the future, the role of AI in performance testing will be a game changer!

04:00-05:00 Session 15A
04:00
Rima Bose (IBM, India)
Security of the Internet of Things

ABSTRACT. Learning Objectives:

It is predicted that by 2025 there will be an estimated 75 billion connected devices globally. With ultra-fast 5G response times, the Internet of Things (IoT) will grow tremendously over the next five years. But security experts warn that IoT will increase the attack surface, bringing new types of cyber vulnerabilities into our lives.

There is also a rising trend of hackers targeting critical infrastructure such as power grids, chemical plants, and transportation systems. Most Industrial Control Systems (ICS) weren't designed to be connected to the internet, so they lack security-by-design. In one nation state, a power plant was hacked when multiple devices were taken over by cyber criminals, causing a power failure.

The objective of this session is to learn a few mitigation techniques:
i. Data encryption
ii. Network segmentation
iii. Security-by-design
iv. Patch updates / a centralised patching system
v. Changing factory settings / default-password changes / user manuals
vi. Secure networks
vii. AI-based monitoring and analytics tools
viii. DNS security (DNSSEC) to prevent another Mirai-style botnet attack
ix. Privacy policies and ongoing software support
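A minimal sketch of mitigation (i), device-side payload encryption with a symmetric key; the telemetry payload and inline key generation are illustrative simplifications:

    from cryptography.fernet import Fernet

    # In practice the key would be provisioned securely to the device, not generated inline.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    telemetry = b'{"device": "sensor-42", "temp_c": 21.5}'
    token = cipher.encrypt(telemetry)          # ciphertext sent over the network
    print(cipher.decrypt(token))               # gateway-side decryption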

Expected Outcomes:

1. Be able to understand IoT vulnerabilities, create a security strategy and roadmap, standardize policies, and evaluate third-party interactions and exposures.
2. Several industrial bodies, governmental agencies, and regulatory bodies have attempted to standardize IoT security; we will discuss these regulations in brief.
3. An overview of new technologies being developed to immunize devices against malicious behaviour.
4. Strengthening the cybersecurity of IoT with blockchain: blockchain provides a safe infrastructure for transferring data from one device to another without interference from malicious actors. Its decentralized control enables IoT devices to create audit trails and tracking methods for registering and using products (a toy illustration follows below).
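As a toy illustration of the audit-trail idea in item 4, a minimal hash chain over device events (not a real blockchain; the event format is an assumption):

    import hashlib, json, time

    def add_block(chain, event):
        prev_hash = chain[-1]["hash"] if chain else "0" * 64
        body = {"event": event, "ts": time.time(), "prev": prev_hash}
        body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        chain.append(body)

    def verify(chain):
        for i, block in enumerate(chain):
            expected = {k: v for k, v in block.items() if k != "hash"}
            digest = hashlib.sha256(json.dumps(expected, sort_keys=True).encode()).hexdigest()
            if digest != block["hash"] or (i and block["prev"] != chain[i - 1]["hash"]):
                return False
        return True

    chain = []
    add_block(chain, "device sensor-42 registered")
    add_block(chain, "firmware 1.0.3 installed")
    print(verify(chain))  # True; tampering with any past event breaks the chain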

Session type: Learning module
Delivery method: Lecture, case study
Tags: #Security #IoT #CriticalInfrastructure

06:00-07:00 Session 17A
06:00
Julius Wahidin (IBM, Australia)
A demo on building a topology and seeing how it changes over time using Agile Service Manager

ABSTRACT. The session will present the building blocks of topology through a presentation and a live demo using Agile Service Manager. The audience will learn that a topology can be data-driven, built automatically, and will change over time. By showing the difference between topologies, the demo will show how an SRE can use the topology to view and detect the potential cause of an availability issue. The demo will end by showing a sample of more complex topologies, so the audience can see that, using the building blocks they have learned, they can address real-world problems.
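A minimal sketch of the topology-diff idea behind the demo, using plain edge sets; this representation is an illustrative assumption, not Agile Service Manager's data model:

    # Each topology snapshot is a set of directed edges between resources.
    t0 = {("lb-1", "web-1"), ("lb-1", "web-2"), ("web-1", "db-1"), ("web-2", "db-1")}
    t1 = {("lb-1", "web-1"), ("web-1", "db-1")}  # later snapshot

    removed, added = t0 - t1, t1 - t0
    print("edges removed:", removed)   # web-2 dropped out: a candidate cause
    print("edges added:  ", added)

    # Nodes that lost all connectivity between snapshots are prime suspects.
    nodes = lambda edges: {n for e in edges for n in e}
    print("vanished nodes:", nodes(t0) - nodes(t1))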

06:00-07:00 Session 17B: Sabitri Chakraborty: Security planning to transform business operations on Cloud - Post-pandemic
06:00
Sabitri Chakraborty (IBM India Pvt Ltd, India)
Security planning to transform business operations on Cloud - Post-pandemic

ABSTRACT. Learning objectives:

The objective of the session is to give the audience a panoramic view of the risk landscape that businesses need to consider while planning to quickly migrate all or part of their operations from an on-premise / managed-services model to cloud platforms in the post-Covid situation. While businesses strive to achieve resiliency and security in their post-pandemic operations by transforming into cloud-enabled businesses, what level of security planning and risk profiling do they need to conduct before engaging with a cloud service provider (CSP), and what needs to be agreed upon in the contract?

The session will speak in depth about three different areas:

(1) Assessing the risks and putting a pragmatic plan in place: this covers conducting a feasibility and due-diligence study of which parts of the data and operations can be shifted to the cloud, and gauging the risk landscape during and after migration. It includes taking geographic and regulatory requirements into account. An implementable and measurable plan, with identified risks as well as mitigation measures, will be discussed here.

(2) Knowing what you are responsible and accountable for: while the infrastructure, databases, storage, middleware, and even applications can be leveraged or rented from the CSP, accountability still remains with the business. This can be addressed by clearly identifying the security and risk-management roles and responsibilities shared between the CSP and the business (the cloud consumer). We will discuss the "Shared Responsibility Model" here.

(3) Transparent and agreed-upon contractual clauses: finally, this section discusses articulating security SLAs that capture agreed-upon governance and reporting. This includes areas like logging and monitoring, privileged access management, incident reporting, regulatory reporting, right to audit, backup and availability, and so on.

Expected outcomes (what will the student be enabled to do?): This will enable students to understand how to advise clients on conducting a post-pandemic due-diligence assessment and to provide recommendations on forming an effective, risk-centric contract with a CSP.

Session type: Experience sharing

Delivery Method: story-telling (presentation)

07:00-08:00 Session 18A
07:00
Rethinking Enterprise Backup and DR for the Cloud

ABSTRACT. In light of the adoption of digital business and a shift towards cloud-based solutions and services, many organizations are making huge investments in cutting-edge technologies in order to keep up with their competitors and improve business operations. As data and infrastructure move beyond organizational and geographic boundaries, complexity increases, the attack surface grows, and compliance and governance pressures increase too.

With the advent of cloud-native applications built on containers, continuous integration, and multiple availability zones, there is a school of thought that Backup and DR are no longer needed: most Cloud Service Providers (CSPs) offer SLAs, security, and features sufficient to satisfy enterprise business-uptime requirements.

This session sets out to identify and address the business-uptime requirements that remain at risk and that require a rethinking of Backup and DR for applications in the hybrid multi-cloud environment.

Backup efficiency has evolved greatly, leading to series of multiple point-in-time copies and quick restores from incremental snapshots. Distributed cloud workloads tend to have data spanning multiple VMs, with a few dozen or more variations of network and storage configurations. For protection against cyber threats, enterprises require an integrated capability to identify intrusions, quarantine servers and data, and recover from an immutable, validated, known-good point-in-time copy of the data.
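A minimal sketch of the recovery-point selection implied by the last sentence; the snapshot catalog and validation flag are illustrative assumptions:

    from datetime import datetime

    # Hypothetical catalog of immutable point-in-time copies.
    snapshots = [
        {"taken": datetime(2020, 9, 14, 2, 0), "validated": True},
        {"taken": datetime(2020, 9, 15, 2, 0), "validated": True},
        {"taken": datetime(2020, 9, 16, 2, 0), "validated": False},  # failed integrity scan
    ]

    def recovery_point(snapshots, intrusion_detected_at):
        """Latest validated copy taken strictly before the intrusion."""
        good = [s for s in snapshots
                if s["validated"] and s["taken"] < intrusion_detected_at]
        return max(good, key=lambda s: s["taken"], default=None)

    print(recovery_point(snapshots, datetime(2020, 9, 16, 9, 30)))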

In the native cloud environment, virtualization, cloud automation, and container-based modernization have greatly simplified server and infrastructure recovery. Yet, for persistent data, technological limitations on providing data recovery across geographic regions persist, and solutions addressing the complexity of data access, efficiency, consistency, and recovery SLAs are still at an evolutionary stage.

This session details the enterprise need for rethinking availability and continuity. To address these challenges in the digitized cloud era, it discusses the solutions required to handle a spectrum of applications, from partially to fully modernized, across on-premise and cross-cloud distributed services.

07:00-08:00 Session 18B
07:00
Andrew Roden (CSI&A UKI, UK)
Guy Williamson (Global Cloud CoC, UK)
Does Red Hat OpenShift enable a new paradigm in Performance and Availability Engineering?
PRESENTER: Guy Williamson

ABSTRACT. Red Hat OpenShift is a container platform that is supported by all the major cloud providers (AWS, Azure, GCP, IBM) and is also available to install on-premises. Instances can easily be created through common automation tooling such as Ansible or Terraform, which leads to some interesting possibilities that may change the way we consider performance and availability engineering in the future. Areas for discussion include:
• With the possibility of OpenShift clusters stretching across "Availability Zones" from different cloud providers, what are the impacts on performance and availability?
• Is there a need for cloud-specific architectures (including the master configuration) to optimise the performance of the underlying infrastructure on different providers?
• Cloud brokerage to shuffle processing to the provider with the cheapest option for any given time/day (see the sketch below).
• Do disaster recovery concerns trend towards zero, or do we still need to consider DR from a data perspective?
• How do we address operational and management considerations for monitoring performance and triggering the business decision to change hosting or architectures?
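As a minimal sketch of the cloud-brokerage bullet, picking the cheapest provider for a given hour; the prices are invented for illustration:

    # Hypothetical spot prices per vCPU-hour by provider and hour of day.
    prices = {
        "aws":   {9: 0.048, 14: 0.052, 22: 0.031},
        "azure": {9: 0.050, 14: 0.047, 22: 0.033},
        "gcp":   {9: 0.046, 14: 0.049, 22: 0.035},
    }

    def cheapest_provider(hour):
        """Pick the provider with the lowest price for the given hour."""
        return min(prices, key=lambda p: prices[p][hour])

    for hour in (9, 14, 22):
        print(f"{hour:02d}:00 -> schedule burst work on {cheapest_provider(hour)}")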

09:00-10:00 Session 20A

Day 2

09:00
Robert Barron (IBM, Israel)
Failure is not an option!

ABSTRACT. 2020 marks the 50th anniversary of the Apollo 13 mission, in which an explosion nearly destroyed the spacecraft when it was halfway to the Moon. Only incredible efforts by the astronauts in space and the engineers on the ground ensured the crew's safe return.

In this session you will learn about the way NASA prepared for this mission, the way it performed during the emergency and the way it learned and improved for future missions.

Lessons will cover the domains of Change Management, Problem Management, Chaos Engineering and more.

All these lessons are still relevant for the way we approach resiliency and keeping services available 24x7 in 2020.

The story of Apollo 13 is an inspiring one in general and for IBMers in particular, due to the central role of IBM hardware, software and people in the Apollo space program.

Note - this session continues and extends the Prevail 2019 session on lessons from the lunar landing; it is not a repeat.

09:00-10:00 Session 20B
09:00
Stefaan Van Daele (IBM, Belgium)
Applying zero trust in a hybrid cloud environment: one principle at dual speed

ABSTRACT. Most organizations have their workloads (data and applications) distributed between an on-premise infrastructure and different variants of cloud computing (IaaS, PaaS, SaaS). When an organization wants to apply zero trust principles in a holistic way, it is confronted with different capabilities and different constraints in each environment. In this session I would like to highlight both the opportunities and the challenges that organizations face when they start to apply zero trust principles as part of their overall security governance. Cloud providers generally have zero trust principles built in already, but where do you start in the data center, and how do you cope with security at dual speed?

10:00-11:00 Session 21A
10:00
Haytham Elkhoja (IBM, UAE)
Always On and Resilient workloads with Chaos Engineering

ABSTRACT. We discuss and show how, by using Chaos Engineering, we can sustain five-nines (99.999%) Service Level Objectives across multiple regions and multiple clouds.

I present architectural methods, patterns, and practices to be followed by developers, SREs, and software architects when building and maintaining cloud-native applications and services that need to provide the highest levels of availability. The methods describe how to provide practical five nines (99.999%) for end-to-end business services by incorporating Site Reliability Engineering (SRE), DevOps, Microservices, Chaos Engineering, Cloud-native Architectures, Application Modernization, Multi-Availability Regions, Geo-dispersity, Data Consistency, Performance and Scalability, Content Delivery Networks (CDN), and Software-defined Environments (SDE).
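For reference, the downtime budget implied by such availability targets can be computed directly:

    MINUTES_PER_YEAR = 365.25 * 24 * 60

    for nines, availability in [(3, 0.999), (4, 0.9999), (5, 0.99999)]:
        budget = MINUTES_PER_YEAR * (1 - availability)
        print(f"{nines} nines ({availability:.5%}): {budget:.1f} minutes of downtime per year")

Five nines leaves roughly 5.3 minutes of downtime per year, which is why the abstract pairs it with multi-region and multi-cloud designs.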

10:00-11:00 Session 21B
10:00
Jerome Tarte (IBM, France)
Security best practices for a container platform

ABSTRACT. Security is a key requirement in IT. While a cloud container platform changes the way applications are developed, deployed, and operated, the need for security remains. The architectures that are deployed should comply with security rules and compliance requirements. During the session we will visit the best practices for securing a cloud container platform, based on IBM Cloud Paks and OpenShift. The advice and best practices are based on lessons learned from real client projects. By following these best practices, your container platform will become more secure.

11:00-12:00 Session 22: Keynote
11:00
Stacy Joines (IBM, United States)
Keynote: Performance Engineering: The IBM Garage Perspective

ABSTRACT. Keynote

12:00-13:00 Session 23A
12:00
André Fachat (IBM, Germany)
Transactions in the cloud - achieving consistency with unreliable infrastructure

ABSTRACT. Modern programming paradigms aim to reduce dependencies between components as much as possible. This goes as far as only allowing "lightweight" protocols like HTTP(S) between (micro)services in a cloud infrastructure. The assumption is that any service can become unavailable or be replaced at any time, e.g. through network issues or simply the deployment of a new version. Therefore, the shared state between services should be minimized.

In more complex business applications, consistency between components is of high value. You don't want to simply lose your money in a bank transfer between accounts because there was a network glitch. However, as distributed transactions cannot easily be done over HTTP, consistency between services must be managed in the application layer.
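One common application-layer technique is the saga pattern with compensating actions, sketched minimally below; this is a generic illustration, not the "txms" protocol presented in this session:

    def transfer(debit, credit, compensate_debit):
        """Run two local steps; on failure, undo the completed one."""
        debit()                      # local transaction on the source account service
        try:
            credit()                 # local transaction on the target account service
        except Exception:
            compensate_debit()       # compensating action restores consistency
            raise

    # Hypothetical in-memory accounts standing in for two independent services.
    accounts = {"alice": 100, "bob": 20}

    transfer(
        debit=lambda: accounts.__setitem__("alice", accounts["alice"] - 30),
        credit=lambda: accounts.__setitem__("bob", accounts["bob"] + 30),
        compensate_debit=lambda: accounts.__setitem__("alice", accounts["alice"] + 30),
    )
    print(accounts)  # {'alice': 70, 'bob': 50}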

In the presentation I will give an overview of current approaches to handling such consistency requirements, and present a lightweight, cloud-native design and protocol called "txms", developed by the IBM Academy of Technology workgroup on transactions with microservices.

12:00-13:00 Session 23B
12:00
Jonathan Dunne (IBM, Ireland)
Lisa Cassidy (IBM, Ireland)
Sonya Leech (IBM, Ireland)
Usage Data Modelling: A Robust Capacity Planning Framework
PRESENTER: Lisa Cassidy

ABSTRACT. Capacity planning methodologies are a useful way to plan for resource usage. As DevOps and development teams adopt such techniques, there is a need to provision hardware resources in a precise and repeatable way. A lack of precision may lead to an overspend for a client or, worse, an under-provisioned system that performs poorly. In this study, we propose a framework to predict system resource usage (CPU, disk, and memory) with a high degree of precision (between 0.68 and 0.86). Using two enterprise datasets, we demonstrate that an ensemble set of application features can be used to forecast capacity-planning outcomes. Our framework can help DevOps, offering management, and sales teams plan for existing and new customers with a higher degree of precision.
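A minimal sketch of the kind of ensemble regression the abstract describes, on synthetic data, using held-out R^2 as the accuracy measure; the features, data, and metric choice are assumptions, not the authors' framework:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    # Hypothetical application features: active users, requests/s, payload KB, cache hit %.
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(500, 4))
    cpu = 20 + 60 * X[:, 0] + 15 * X[:, 1] - 10 * X[:, 3] + rng.normal(0, 5, 500)

    X_train, X_test, y_train, y_test = train_test_split(X, cpu, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
    print(f"R^2 on held-out data: {r2_score(y_test, model.predict(X_test)):.2f}")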

13:00-14:00 Session 24A
13:00
Raghuram Srinivasan (IBM, United States)
Sampath Swaminathan (IBM, United States)
Crawl-walk-run – Our approach to reliable systems with SRE and Automation

ABSTRACT. In this session we will showcase the IBM Services method for leveraging automation by Site Reliability Engineers to transform IT operations. With our crawl-walk-run approach to implementations, we will demonstrate how to use and leverage the SRE tenets, in a hybrid cloud environment, to build and manage reliable systems.

The learning objectives for this session would be to:

• Understand the differences between a highly available system and a reliable system
• Understand the differences between reliable systems and resilient systems
• Cover the tenets of Site Reliability Engineering at a high level
• Focus on the "toil reduction through automation" tenet

As referenced in the SRE book: "Toil is the kind of work tied to running a production service that tends to be manual, repetitive, automatable, tactical, devoid of enduring value, and that scales linearly as a service grows."

We will share our real-world experiences implementing toil reduction through automation at numerous IBM Services accounts. We will detail how we enabled account operations teams to leverage existing or new deployments of shared Red Hat Ansible Tower with a community development model framework. We will discuss the lessons learned in accelerating the adoption of such automations and, more critically, share how we are transforming the mindset of operational resources to think like SREs.

13:00-14:00 Session 24B
13:00
Anik Manik (IBM India Pvt. Ltd., Indonesia)
Enhancing Performance Predictions using Machine Learning

ABSTRACT. The performance of an application is determined by the response time of each transaction. To measure these transaction response-time percentiles, load tests combined with performance modelling are widely used to support SLA management exercises. Besides load testing, regression analysis is also used to predict response-time percentiles. These existing techniques are very time-consuming: load testing requires scripting, environment setup, and test monitoring, and regression analysis requires the manual construction of complex analytic or simulation models.

As a solution to this problem, regression analysis using machine learning is proposed to make predictions more accurate with minimal manual effort. This regression analysis relies on historical response-time data collected from a repository of previous tests. Using supervised and unsupervised machine-learning algorithms, it first understands the structure of the data and finds relationships within it. The algorithm improves itself with further analysis: the more we train the model, the more accurate the results we obtain.
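A minimal sketch of regression over a historical repository of test results, retrained as new runs arrive; the data points are invented for illustration:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Hypothetical repository of past tests: (concurrent users, p90 response time in ms).
    history = np.array([[20, 140], [40, 190], [80, 310], [160, 560]])

    model = LinearRegression().fit(history[:, :1], history[:, 1])
    print(f"p90 @ 120 users: {model.predict([[120]])[0]:.0f} ms")

    # A new test run lands in the repository; retraining refines the prediction.
    history = np.vstack([history, [120, 455]])
    model.fit(history[:, :1], history[:, 1])
    print(f"p90 @ 120 users after retraining: {model.predict([[120]])[0]:.0f} ms")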

13:30
Vinay Nama (IBM India Pvt LTD, India)
Creating a scalable and reusable JMeter and ELK setup on OpenShift cloud

ABSTRACT. JMeter runs the script in non-GUI mode and writes results into an XML file without the need to configure additional listeners in the test plan or change the property files. Custom parameters are added for the build and test version so they can be passed directly as command-line arguments. Filebeat runs on the systems where the target log files are being written; it monitors the log files and ships their content to Logstash. Filebeat's hardware footprint is minimal, as it is a lightweight service. In the performance-testing scenario, Filebeat is installed in the same environment as JMeter. The resulting performance-test output files are monitored and sent to a Logstash server instance for further processing, which results in near-real-time data visibility in Elasticsearch and Kibana. Kibana is customized to produce different charts and comparisons of results with the previous release, and is also customized to include DCRUM response times.
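A minimal sketch of triggering such a non-GUI JMeter run from a CI/CD step; the test-plan name and property names are illustrative assumptions:

    import subprocess

    build, test_version = "1234", "R2.1"   # passed in from the CI/CD pipeline

    # -n: non-GUI mode, -t: test plan, -l: results file, -J: custom JMeter properties.
    subprocess.run(
        ["jmeter", "-n",
         "-t", "checkout_flow.jmx",
         "-l", f"results_{build}.xml",
         f"-Jbuild={build}", f"-Jtest_version={test_version}"],
        check=True,
    )
    # Filebeat, watching results_*.xml, ships the output to Logstash -> Elasticsearch -> Kibana.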

Outcome of the above implementation: the run is triggered through the command prompt or through CI/CD, the results are loaded into ELK through Filebeat, and the run can be monitored and its results viewed in near real time as bar graphs. The results can be drilled down to each transaction and compared at the transaction level. This can be accessed by anyone from anywhere, providing transparency and insight to the client.

16:00-17:00 Session 25: Keynote: Chris Dotson. Keynote: Resilience through security on the cloud
16:00
Chris Dotson (IBM, United States)
Keynote: Resilience through security on the cloud

ABSTRACT. How does security affect resilience? For example, are your cloud security measures implemented in a way that can respond to changing demand quickly, and do your security measures make your environment more resistant to disruption from different types of attacks? In this session, we'll discuss the essentials of cloud security -- such as threats, vulnerabilities, risks, and preventive and detective controls -- and how different security measures can both positively and negatively impact resilience.


17:00-18:00 Session 26A
17:00
Alex Zverev (IBM, Canada)
Exploratory Performance Testing and Quality by Design

ABSTRACT. Recent progress in IT technologies requires corresponding changes in Quality Engineering. QE must fit into the concept of Quality by Design, enable the efficiency of SRE, and support quality and agility in DevOps implementations to maximise value output while minimising resources and turnaround times. Currently accepted methodologies for performance testing and capacity analysis have a limited benefit/cost ratio and need to be enhanced to meet the requirements of the day. Exploratory performance testing (XPT) provides flexible methodologies to maximise value while controlling resource investment and maintaining shortened development cycles. The approach uses dynamic reproduction of wide-ranging load conditions based on key business flows and production data.

Unlike traditional fixed-load performance simulation targeting system responsiveness at peak-load conditions, XPT explores the full spectrum of load magnitudes, providing deeper insight into the limits of the system's capacity and performance. This allows a more conscientious approach to continuous testing, full-range visibility into quality attributes for DevOps, and Golden Signals for SRE. The enriched information provided by XPT improves feedback into the development stages and facilitates the Shift Left ideology. The full-range load analysis and exploration of extreme conditions, similar to chaos engineering, helps ensure the robustness of the solution in question. XPT delivers richer information about the system with the same, or shorter, preparation time. This makes the approach a good candidate for performance validation in an Agile setup.
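A minimal sketch of the full-spectrum load sweep that distinguishes XPT from a single fixed peak-load test; the latency function is a stand-in for a real system under test:

    def measure_latency(load):
        """Stand-in for driving `load` concurrent users and sampling response time (s)."""
        return 0.05 / (1 - load / 500) if load < 500 else float("inf")

    # Sweep the whole load range instead of testing a single peak value.
    for load in range(50, 551, 100):
        latency = measure_latency(load)
        if latency == float("inf"):
            print(f"{load:4d} users -> saturated; capacity limit lies below this step")
            break
        print(f"{load:4d} users -> {latency * 1000:7.1f} ms")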

A leaner XPT approach with more comprehensive results output, including the non-automated channels of performance-metric acquisition, allows data to be gathered for early performance analysis and tuning, thus facilitating the Quality by Design paradigm.

17:00-18:00 Session 26B: Kubernetes resilience is not enough: the journey to make a simple application resilient
17:00
Eduardo Patrocinio (IBM, United States)
Kubernetes resilience is not enough: the journey to make a simple application resilient

ABSTRACT. So you think that if you containerize your application and run it in a Kubernetes environment, it will provide the resilience you expect. You are wrong...

This session will present the steps required to take a simple Kubernetes application and redesign it to be more resilient to disruptions.

We will talk about:
- how to loosely couple the components;
- the need to understand how open-source components work; and
- how to choose components that will provide the required resilience (a small example of one such step follows below).
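As a small, hypothetical example of one such step (not necessarily one from the session), calling a dependency with retries and backoff rather than assuming it is always available:

    import random
    import time

    def call_with_retries(operation, attempts=5, base_delay=0.2):
        """Retry a flaky dependency with exponential backoff and jitter."""
        for attempt in range(attempts):
            try:
                return operation()
            except ConnectionError:
                if attempt == attempts - 1:
                    raise
                time.sleep(base_delay * 2 ** attempt * random.uniform(0.5, 1.5))

    # Hypothetical dependency that fails transiently while a pod restarts.
    def flaky_lookup():
        if random.random() < 0.6:
            raise ConnectionError("backend temporarily unavailable")
        return {"status": "ok"}

    print(call_with_retries(flaky_lookup))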

The audience will gain many insights into how to make applications resilient.

18:00-19:00 Session 27A
18:00
Wesley Stevens (IBM, United States)
Robert Barron (IBM, Israel)
Rod Anami (IBM, Brazil)
IBM Services ChatOps: Automation, Process, & Collaboration
PRESENTER: Wesley Stevens

ABSTRACT. ChatOps is a collaboration model that connects people, processes, tools, and automation in a seamless and transparent workplace through a chat platform and extensive use of integration bots and chatbots. In late 2019, IBM Services set out to revolutionize the way we work! We leverage Slack and automation to improve how we solve our clients' problems. By using a flexible architecture called ChatOps Knight, we are enabling hundreds of GTS accounts to benefit from a central ChatOps solution and dedicated integrations, while allowing them the versatility to customize and develop their own bots too. In this session, we will describe the problem we are trying to solve and the solutions, both process-oriented and technological.

This session will be done in an interview format among the submitters.

18:00-19:00 Session 27B
18:00
Surya Duggirala (IBM Cloud, United States)
Yanni Zhang (IBM Cloud, United States)
Methods and Techniques for Securing Enterprise Applications deployed on IBM Cloud Paks
PRESENTER: Surya Duggirala

ABSTRACT. Enterprises can seamlessly move their applications between multiple clouds using IBM Cloud Paks. As these applications may have separate security profiles and needs depending on where they are deployed, it is essential to understand how security is handled in IBM Cloud Paks. This session will discuss the security mechanisms supported by IBM Cloud Paks to harden cloud platforms and applications, such as network security, identity and access management, application security, endpoint security, etc. The session also discusses how to characterize security performance and how to reduce overhead while balancing the security and performance needs of applications.

20:00-21:00 Session 29A
20:00
Haytham Elnahas (IBM, Egypt)
Eduardo Patrocinio (IBM, United States)
Amine Anouja (IBM, UAE)
Breaking OpenShift: Best practices for managing users and projects in an OpenShift cluster

ABSTRACT. In this session, we'll discuss possible ways for a malicious user to cause performance or DoS-like issues on an OpenShift cluster. We'll also list best practices for administrators to limit users' privileges to perform such disruptive actions, even unintentionally. These best practices will consider:
- Quotas (see the sketch below)
- Security Context Constraints (SCC)
- RBAC
- HostPath access
- Networking, etc.
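A minimal sketch of the quota item using the official Kubernetes Python client; the namespace name and limits are illustrative assumptions:

    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() when running in the cluster

    quota = client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name="dev-team-quota"),
        spec=client.V1ResourceQuotaSpec(hard={
            "pods": "20",                 # cap runaway pod creation
            "requests.cpu": "8",
            "requests.memory": "16Gi",
            "persistentvolumeclaims": "5",
        }),
    )
    client.CoreV1Api().create_namespaced_resource_quota("dev-team", quota)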

20:00-21:00 Session 29B
20:00
Alan Piciacchio (IBM, United States)
Ingo Averdunk (IBM, Germany)
Larisa Shwartz (IBM, United States)
David Leigh (IBM, United States)
Chris Tobey (IBM, United States)
Post Incident Learning for SREs - Best Practice Principles Including Ecosystem Assessments and Prescriptive Actions (LECTURE)
PRESENTER: Alan Piciacchio

ABSTRACT. In the IT world, system outages are inevitable. For clients, depending upon the impact, there are two "moments of truth": the responsiveness to the outage, and then the post-incident findings. Often the latter (also known as the "post mortem" or RCA, for Root Cause Analysis) is done with low quality and causes additional client frustration and mistrust.

In an initiative backed by IBM's Academy of Technology, a team of IBM experts has been assembled to define best-practice principles in the field of post-incident reviews, with an emphasis on learning, so that systems and business practices emerge more resilient. This innovative body of work will be presented, outlining these key principles, which include the ecosystem culture, skills, techniques, and tooling available to facilitate higher-quality investigations. The presentation will also include specific client experiences and success stories.

The presentation will also cover an assessment methodology that determines where a team stands on the spectrum of post-incident learning, along with prescribed actions to improve its standing. In addition, the team will include testimonials and stories from IBM and client SMEs who have lived through case studies in the field of post-incident analytics.

The student, by virtue of attending this session, will become better prepared to handle post-incident situations and should be able to influence the culture of their organization in a significant way. A hallmark skill of an SRE is the ability to drive post-incident learning into the fabric of an organization; thus the topic of effective post-incident best practices is vital.

(NOTE: this abstract is being submitted as both a LECTURE and a POSTER and the team prefers to have both requests accepted if possible so that this material can be shared very broadly across the enterprise).

21:00-22:00 Session 30A
21:00
Michael Mitchell (IBM Corporation, United States)
Multi-Cloud: Architecture & Design

ABSTRACT. The expectation is to deliver a lecture on the advantages of multi-cloud environments versus hybrid cloud architectures. The lecture will briefly compare the two and then discuss the key advantages of multi-cloud sites. Multi-cloud environments can see a reduction in service breakdowns as well as a reduction in pricing risks. I'd also like to speak to multi-cloud and the advantages of application/workload redundancy. I will follow up the lecture by touching on microservices and cloud arbitrage. I expect the audience to become (if not already) a bit more familiar with the advantages of multi-cloud environments and the impact they can have in your environment. Session type: Innovative point of view. Delivery method: Lecture.

Bio: Michael Mitchell, Cloud Migration & Modernization SME. 15+ years of experience in IT, spanning server builds, networking, steady-state operations, and server migration activities. As a member of the Cloud Migration team, his responsibilities include the execution of migration events and ensuring technical cadence during the migration life cycle.

21:00-22:00 Session 30B
21:00
Hollis Chui (IBM - DevOps Garage Solution Engineering, Canada)
Andrea Crawford (IBM - DevOps Garage Solution Engineering, United States)
Chris Lazzaro (IBM - DevOps Garage Solution Engineering, United States)
DevSecOps - Approaching DevOps through the lens of security

ABSTRACT. A discussion on DevSecOps: what it means and some of the best practices in the modern cloud-native world.