PREVAIL 2020: IBM CONFERENCE ON PERFORMANCE ENGINEERING, AVAILABILITY AND SECURITY
PROGRAM FOR TUESDAY, SEPTEMBER 15TH

08:00-09:00 Session 1: Keynote: Architecting for Reliability

08:00
Ingo Averdunk (IBM, Germany)
Architecting for Reliability

ABSTRACT. In a world where businesses provide services across the globe, the demand for availability is ever increasing. This is even more true for users' expectations of availability and performance - services need to be always-on. Reliability describes the ability of a system or component to function under stated conditions for a specified period of time. Reliability cannot simply be delegated to the infrastructure and/or platform a service runs on; every tier needs to contribute to the reliability of the entire system. Nor can reliability be added after the fact on top of a built system - the system and its components need to be designed and implemented with reliability in mind. It is a shared responsibility of everyone contributing to the software development lifecycle, including the Architect, the Product Owner, the conscientious DevOps Engineer, and the empowered SRE.

This presentation describes key architectural patterns available to implement reliability into a software component or service. With the skill and experience in these techniques, the engineering professional can have a meaningful conversation with product owners on implementing reliability targets for a given service.

09:00-10:00 Session 2A

09:00
Lydia Duijvestijn (IBM, Netherlands)
Andrew Roden (IBM, UK)
Conducting a resilience and scalability assessment I - method, tools and outcomes
PRESENTER: Andrew Roden

ABSTRACT. In this session we will introduce an intuitive, easy-to-learn and easy-to-use method to conduct a resilience and scalability assessment. The method is based upon IBM's Resilience & Performance Engineering and Management Method. The interactive nature of the workshop-based method enables fast information collection and can be extended as desired with deep dives (application profiling; resilience or performance testing; source code analysis). The method is supported by two RADAR tools that visualize the gaps that need to be closed. The NFR RADAR provides an overview of the current and desired capabilities of the solution as a whole to meet the NFRs; the Solution Scalability RADAR highlights the current and desired scalability at each layer of the solution stack. Templates to document observations and recommendations, both in presentation and in document form, are available. Case studies in which this method has been successfully used will be briefly discussed.

Bio Lydia Duijvestijn Ms. Duijvestijn is an executive IT Architect and Performance Engineer within IBM GBS BeNeLux. She is a member of the IBM Academy of Technology leadership team and co-leads the worldwide community of practice for performance and capacity. She has led a large number of customer engagements in the areas of design for performance, performance testing and performance troubleshooting. She has been teaching the IBM Universal Method Framework classes for operational modelling (infrastructure design) and architecting for performance for over a decade, both in the BeNeLux region and abroad, and has been a speaker at several IBM internal and external conferences on subjects related to IT architecture and performance.

Bio Andrew Roden Andrew is an experienced Architect in the Complex Solutions Integration and Architecture practice with GBS UKI who specialises in Resilience and Performance Architecture. He is part of the leadership team for the Worldwide Performance and Availability Community of Practice as well as the Co-Lead for the 2020 STEM Technology Council. Andrew has a broad range of experience across software, infrastructure and cloud, as well as verticals including Communications, Automotive, Retail, Public Sector, Industrial and IoT, and Financial Services.

09:00-10:00 Session 2B
09:00
Ken Ueno (IBM, Japan)
Day 2 Operations for Red Hat OpenShift Container Platform and Kubernetes

ABSTRACT. Since the middle of 2019, Red Hat OpenShift Container Platform has been the "platform" of IBM Cloud Paks. Since then, in all IBM Cloud Pak environments, DevOps Engineers and SREs (Site Reliability Engineers) have had to work with OpenShift either directly or indirectly. The purpose of this paper is to help an organization with the Day 2 operational aspects of OpenShift as well as of the workloads running on top of it. The premise of this paper is to describe the core principles and processes that should be applied on Day 2. We will discuss definitions of Day 0, Day 1 and Day 2 tasks from both the platform perspective and the application perspective. Most of the papers, articles and documentation that discuss Day 2 operations neglect to talk about the personas involved. In our paper, we focus on a modern set of personas, namely the DevOps Engineer and the Site Reliability Engineer. We will introduce the structure of the Day 2 Reference Architecture for OpenShift and discuss it from several different aspects. An important topic around Day 2 operations is how we automate them. One solution is the Operator pattern, introduced in late 2016; Operators have held a prominent position in the "automated Day 2 operations" topic, and we will discuss them. We will also discuss the differences between OpenShift and Kubernetes from a Day 2 operations point of view.
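As a loose illustration of the Operator idea mentioned above (not taken from the paper), the sketch below uses the Python kopf framework to react to a hypothetical BackupPolicy custom resource; the resource group, fields and logic are invented for the example.

```python
# Minimal Operator sketch (hypothetical CRD): automate a Day 2 task so an
# SRE does not have to run it by hand. Run with: kopf run operator.py
import kopf


@kopf.on.create('example.com', 'v1', 'backuppolicies')
def create_fn(spec, name, namespace, logger, **kwargs):
    # Day 2 automation: when a BackupPolicy object is created, the operator
    # would create the CronJob that performs the actual backups.
    schedule = spec.get('schedule', '0 2 * * *')
    logger.info(f"Scheduling backups for {namespace}/{name} at '{schedule}'")
    return {'schedule': schedule}


@kopf.on.update('example.com', 'v1', 'backuppolicies')
def update_fn(spec, name, namespace, logger, **kwargs):
    # Reconcile on change: keep the running backup job in sync with the spec.
    logger.info(f"Reconciling backup schedule for {namespace}/{name}")
```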

10:00-11:00 Session 3A
10:00
Matanya Moses (IBM, Israel)
Changing architecture: changing aircraft wings in flight

ABSTRACT. The Problem

Our architecture was aging, could not sustain the required load and performance, and cost a fortune.

Purpose

This story captures our journey of changing the architecture of one of our products, IBM Trusteer Pinpoint. The architectural change had to be done live in production, for reasons I will cover in the talk, and was needed because we had reached the capacity limits of AWS RDS and the performance of our APIs was degrading over time. Operating the service was also very costly due to those constraints. The talk tells our story from the performance point of view and shows how you can change a mission-critical system while in service without impacting your clients. I will walk through our thinking, prototyping, trial and error, testing and validation of our approach, and finally our road to changing our aircraft's wings in flight while serving live users - i.e., changing a database live in production.

Methods

We migrated from AWS RDS to Cassandra in production, and the talk will cover the technicalities, the processes and some tooling along the way. We serve 1.5 billion requests per day, so the migration had to be done as seamlessly as possible to prevent interruption. We used open source tools such as Cassandra, NGINX and Spark, plus the Parquet format, along with AWS tooling such as Kinesis, S3 and EMR, among others. The talk will cover the architectural decisions as the product evolved.
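As a rough, hypothetical sketch of what one bulk-backfill step in such a migration can look like (table, keyspace and bucket names are invented; it assumes the RDS data has already been exported to Parquet on S3 and that the spark-cassandra-connector is on the Spark classpath):

```python
# Bulk backfill sketch: read an RDS export from S3 (Parquet) and load it
# into Cassandra with Spark. All names are illustrative.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("rds-to-cassandra-backfill")
         .config("spark.cassandra.connection.host", "cassandra.internal.example")
         .getOrCreate())

# Historical snapshot previously exported from RDS to Parquet on S3.
events = spark.read.parquet("s3://example-bucket/rds-export/events/")

# Reshape to the Cassandra data model: partition key first, denormalized.
by_session = events.selectExpr("session_id", "event_time", "event_type", "payload")

# Append into the target Cassandra table.
(by_session.write
    .format("org.apache.spark.sql.cassandra")
    .options(table="events_by_session", keyspace="events_ks")
    .mode("append")
    .save())
```

In a live migration of this kind, a backfill job like this would typically run alongside dual writes from the serving path, so the new store catches up without taking traffic offline.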

The main goal of the talk is to demonstrate two things: sometimes performance limitations force you to make tough decisions, and you should do your engineering well enough that those decisions do not impact your users.

10:00-11:00 Session 3B
10:00
Haytham Elnahas (IBM, Egypt)
Haytham Elkhoja (IBM, UAE)
Addressing the elephant in the cluster: Highlights of the AoT OpenShift Design - Resiliency Squad findings
PRESENTER: Haytham Elnahas

ABSTRACT. In this session, we'll discuss the output of the AoT OpenShift Design - Resiliency Squad and its recommendations. We'll go through the following points:
- Is DR still a valid option for OpenShift, in the traditional sense?
- What to do and not to do regarding OpenShift high availability
- Workloads, including the Active/Active and Active/Passive discussion from a workload perspective
- How to back up the OpenShift elements
- How to back up workloads and data

12:00-13:00 Session 5A
12:00
Benjamin Walterscheid (IBM, Germany)
Digital Twin Operations (DTOps) by edge and cloud computing

ABSTRACT. Driven by recent advancements in data processing, acquisition (IoT) and analysis technologies, the era of data-driven products has arrived and traditional enterprise IT landscapes have been reshaped. Data is generated throughout the product development process in a virtual environment (computer-aided engineering, PLM) and continues to be generated whenever a product is used in the physical world. Despite this continuous generation of data, there is no exchange of information between virtual and physical systems. Both are considered in isolation from each other, and as a result there is no stateful representation of a product. Therefore, this work introduces Digital Twin Operations (DTOps), a perpetually evolving concept which places an emphasis on the interlocking of virtual and physical systems to achieve a complete virtual mapping of a physical object, using continuous and pervasive real-time data exchange combined with the capabilities of cloud and edge computing. For manufacturers, DTOps supports the development of data-driven products. The utilisation of significant amounts of sensor data helps to reduce product failures and mitigate undesirable system behaviour before it occurs. This lecture will explain the technology background for Digital Twin Operations. It will give an insight into the problems of traditional and isolated IT systems, how DTOps helps to overcome these issues in enterprise products by enabling the IoT world, which challenges are currently faced, and which future opportunities are in the pipeline.

12:00-13:00 Session 5B
12:00
Mark Buckwell (IBM United Kingdom Ltd, UK)
Multicloud Encryption built for Resilience

ABSTRACT. As organisations move to embrace cloud services, their reliance on security and resilience is increasing. Security is becoming integrated through tight automation throughout the lifecycle of a system, making security services business critical.

Encryption and key management is one of the most challenging areas to get right without interfering with the resilience and operation of systems. It is also all too easy to define a solution that does not protect the data, or to specify components that do not integrate. In the worst case, the data being protected can be irretrievably lost or cause severe interruption to the operation of a business.

The effective implementation of encryption to protect sensitive data needs to be better understood to meet GDPR and HIPAA requirements. This presentation starts with discussing the risks driving the protection of sensitive data using encryption and a proposed strategy for use of data-at-rest encryption.

The core components and standards of an encryption and key management solution are then discussed with experiences of developing architecture patterns to meet security, resilience and operational requirements. A solution for a global hybrid multicloud environment, based on a client solution, will be discussed with the architecture decisions made supporting ongoing security, resilience and operation of the service. The key constraints that influenced the design will be discussed.
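As a toy illustration of one of those core components, envelope encryption (encrypting data with a per-object data key, then wrapping that key with a key-encryption key held by a key manager or HSM), here is a minimal sketch using the Python cryptography library; it is not the client solution discussed in the talk:

```python
# Toy envelope-encryption sketch. In a real solution the KEK lives in a key
# manager or HSM, never in application memory like this.
from cryptography.fernet import Fernet

kek = Fernet(Fernet.generate_key())   # key-encryption key (stand-in for the key manager)

dek_bytes = Fernet.generate_key()     # per-object data-encryption key
ciphertext = Fernet(dek_bytes).encrypt(b"sensitive record")
wrapped_dek = kek.encrypt(dek_bytes)  # store the wrapped DEK alongside the ciphertext

# Decrypt path: unwrap the DEK with the KEK, then decrypt the data.
plaintext = Fernet(kek.decrypt(wrapped_dek)).decrypt(ciphertext)
assert plaintext == b"sensitive record"
```

Losing the KEK makes every wrapped DEK, and therefore all of the protected data, unrecoverable, which is exactly the resilience risk the abstract highlights.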

The session should leave attendees with an appreciation of the challenges of integrating encryption and key management into an enterprise, and of the architecture decisions needed to deliver a solution that maintains the resilience of an organisation.

13:00-14:00 Session 6A
13:00
Rafal Szypulka (IBM Garage Solution Engineering / IBM Cloud, Poland)
Build to Manage

ABSTRACT. Continuous Deployment is a key theme in the cloud world, which means that Operations has significantly less time to build the required knowledge, and the window to apply this knowledge is much shorter. Therefore, we need a different approach to management. Instead of Operations figuring out their tasks in isolation, Operations works with Development to determine how to manage the application. Build to Manage is a new approach to development and operations that specifies the activities developers can perform to instrument the application and provide manageability aspects as part of an application release.
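As a minimal, hypothetical sketch of what "instrumenting the application as part of the release" can look like (the Flask and prometheus_client libraries are illustrative choices, not part of the Build to Manage definition):

```python
# The application ships its own health endpoint and metrics, so Operations
# gets manageability on day one of the release. Library choices are illustrative.
from flask import Flask, jsonify
from prometheus_client import Counter, Histogram, generate_latest, CONTENT_TYPE_LATEST

app = Flask(__name__)
REQUESTS = Counter("orders_requests_total", "Total order requests", ["outcome"])
LATENCY = Histogram("orders_request_seconds", "Order request latency in seconds")

@app.route("/health")
def health():
    # Liveness/readiness signal for the platform and for Operations.
    return jsonify(status="ok")

@app.route("/metrics")
def metrics():
    # Prometheus scrape endpoint, available from the first deployment.
    return generate_latest(), 200, {"Content-Type": CONTENT_TYPE_LATEST}

@app.route("/order")
@LATENCY.time()
def order():
    REQUESTS.labels(outcome="success").inc()
    return jsonify(result="accepted")

if __name__ == "__main__":
    app.run(port=8080)
```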

13:00-14:00 Session 6B
13:00
Suman Athuluri (IBM, India)
Neshoo Kachroo (IBM, India)
Himika Gupta (IBM, India)
Capacity and Performance Testing on RedHat Openshift Container Platform
PRESENTER: Suman Athuluri

ABSTRACT. Introduction: As part of the move to emerging technology, legacy applications are moving to the cloud, and there are various reasons behind it. With locally hosted applications, a company needs to maintain the infrastructure on its own. To ensure the application can handle an increasing number of concurrent users in subsequent years, enough hardware resources must be available, which leads to high cost. Red Hat OpenShift Container Platform enables efficient container orchestration, allowing rapid container provisioning, deploying, scaling and management at low cost. That is why OpenShift Container Platform is increasingly in demand in the market.

Problem Statement: OpenShift Container Platform has a microservices-based architecture of smaller, decoupled units that work together. It runs on top of a Kubernetes cluster, with data about the objects stored in etcd, a reliable clustered key-value store. A node provides the runtime environments for containers. Each node in a Kubernetes cluster has the required services to be managed by the master. Nodes also have the required services to run pods, including the Docker service, a kubelet, and a service proxy.

There are limits for objects in OCP (OpenShift Container Platform). For example, in large clusters the maximum number of nodes is 2,000. Similarly, there is a limit on the number of pods per node. In most cases, exceeding these thresholds results in lower overall performance.

Solution/Approach: While planning the environment, you need to determine how many pods are expected to fit per node. The number of pods expected to fit on a node depends on the application itself and its memory, CPU and storage requirements.
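A back-of-the-envelope sketch of that planning step is shown below; all numbers are placeholders, and real planning must also account for OpenShift's per-node pod limit and the resources reserved for system components:

```python
# Estimate how many pods of a given size fit on a worker node.
# All figures are placeholders for illustration only.
def pods_per_node(node_cpu_millicores, node_mem_mib,
                  pod_cpu_millicores, pod_mem_mib,
                  reserved_cpu_millicores=500, reserved_mem_mib=1024,
                  max_pods_per_node=250):
    usable_cpu = node_cpu_millicores - reserved_cpu_millicores
    usable_mem = node_mem_mib - reserved_mem_mib
    fit_by_cpu = usable_cpu // pod_cpu_millicores
    fit_by_mem = usable_mem // pod_mem_mib
    # The binding constraint is whichever resource runs out first,
    # capped by the platform's per-node pod limit.
    return int(min(fit_by_cpu, fit_by_mem, max_pods_per_node))

# Example: 16-core / 64 GiB node, pods requesting 500m CPU and 1 GiB memory.
print(pods_per_node(16000, 64 * 1024, 500, 1024))  # -> 31, i.e. CPU-bound
```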

Given the node configuration, there are both open source and licensed tools, such as JMeter and LoadRunner respectively, that can be used for capacity and performance testing to avoid the future risk of low performance.

Conclusion: Red Hat OpenShift Container Platform is in high demand in the market, as it accelerates development and makes it easy to migrate containerized processes to a new operating system at low cost. But as everything has its own advantages and disadvantages, there are limits on objects based on cluster size that impact application performance.

16:00-17:00 Session 7: Keynote: Resiliency and Business Update Panel Discussion
16:00
Andrea Sayles (IBM, United States)
Allen Downs (GTS, United States)
B.J. Klingenberg (GTS, United States)
Keynote: Resiliency and Business Update Panel Discussion
PRESENTER: Andrea Sayles

ABSTRACT. Keynote

17:00-18:00 Session 8A
17:00
Vijaya Bashyam (IBM, United States)
Lokesh Murthy (IBM, India)
Deploy, Optimize, Scale, Perform with Continuous Deliveries - How IBM Sterling Order Management makes it happen
PRESENTER: Vijaya Bashyam

ABSTRACT. How can I run my enterprise order management application in a scalable, performant manner with less overhead, increased portability and an overall reduced TCO in an on-premises or hybrid environment? How do I tailor my application to various cloud providers while still meeting my ever-growing business needs? How do I keep up with ever-evolving cloud applications and open source technologies without breaking my budget? IBM Sterling Order Management provides an answer to all of these with its enterprise production containers, deployable on Red Hat OpenShift. Customers' business needs continue to grow at a much faster rate, with end users expecting an anywhere, anytime experience, which in turn drives innovation in deployment and automation tools. In this session we will learn how a product team handled these challenges and provided a one-stop solution to significantly reduce cost and improve DevOps strategies for enterprise-scale Docker container environments with the use of Red Hat OpenShift Container Platform. Everything from bringing up a quick developer instance on a laptop using open source tools to setting up a fully auto-scalable production environment will be addressed in this session, along with how customers can integrate the container-based architecture in a hybrid model across cloud platforms and integrate with other SaaS applications.

17:00-18:00 Session 8B
17:00
Eduardo Patrocinio (IBM, United States)
Achieving Application Resilience using MCM Capabilities

ABSTRACT. In this session, I will describe how IBM Multicloud Manager can be used to achieve resilience for an application.

I will start with a simple application, deployed manually to multiple clusters. Then I will present the challenge of keeping the application consistent and a solution for achieving it using MCM capabilities.

The audience will learn how to use MCM concepts to have a consistent and resilient application.

18:00-19:00 Session 9A
18:00
Kevin Green (IBM, United States)
CSMO - a modern approach to IT Service Management

ABSTRACT. As enterprises modernize their practices (e.g. DevOps) and platforms (e.g. cloud), they need more than SRE (Site Reliability Engineering) on the service management front to be successful. For that, IBM has established Cloud Service Management and Operations (CSMO). In this session attendees will hear how CSMO links SRE and ITSM to make IBM's clients successful, as CSMO takes into account the hybrid environment that the majority of clients find themselves heading towards. The CSMO team has helped many clients struggling with modernization because Ops had not been a target of their modernization effort. Attendees will learn how IBM can help modernize not only their application environment but also their service management environment. We have several client scenarios that we will walk through to help clients identify where they are on the spectrum of modernizing their service management.

18:00-19:00 Session 9B
18:00
Himika Gupta (IBM, India)
Simran Solanki (IBM, India)
Performance Testing Strategies for BIG DATA Applications
PRESENTER: Himika Gupta

ABSTRACT. Introduction:

Big Data is one of the areas where IT organizations are expanding and diving deep to acquire the technologies to handle large amounts of data with speed and accuracy. The objective is not only to process the voluminous data but to maintain speed and security as well. Once speed comes into the picture, performance testing is what we need to focus on alongside the data processing.

Problem Statement:

Performance testing of Big Data is challenging because it is composed of different technologies (Hadoop, NoSQL, MapReduce), so a single tool is not enough to test all the components. Big Data deals with large amounts of data, which means a large amount of test data is required. The absence of robust test data management strategies and the lack of performance testing tools within many IT organizations make big data testing one of the most perplexing technical propositions that businesses encounter. Also, replicating the production environment is sometimes difficult and requires more cost.

Solution/Approach:

Big data is defined as a collection of very large amounts of data, which can be structured, unstructured or semi-structured, and which cannot be processed using traditional computing techniques. Testing these kinds of datasets therefore requires new technologies, tools and techniques.

Big data can be explained with the help of the four V's (Velocity, Variety, Volume and Veracity): the speed, kinds, amount and accuracy of the data being fetched or uploaded. To make a big data testing strategy effective, all of these components should be tested and monitored properly. This paper will elaborate approaches to testing the four dimensions above with different tools such as YCSB (Yahoo! Cloud Serving Benchmark), LoadRunner (with AMQP and STOMP benchmarks), JMeter, Hadoop benchmarks, etc.

Conclusion:

Big data testing can be done effectively if all the V's of big data are tested. There are a lot of testing techniques which can be applied to obtain results for response time, maximum user data capacity, GUI and customer requirements for data. Since big data is a collection of structured, semi-structured and unstructured data, the testing solution needs to be selected based on the complexity of the data. Performance testing for big data comes with many challenges, such as the diversity of data, the variety of technologies used and the volume of data. Traditional performance benchmarking methods are not enough for NoSQL databases due to the differences in fault tolerance and error recovery methods, load distribution, and many more factors. For Big Data, enterprises need to test all the performance-critical areas, such as data ingestion, throughput, etc. One important focus area is testing the performance of the underlying NoSQL database for scalability and reliability.

In this paper, we will describe all the challenges we can face during Big Data performance testing. Also, we will investigate the existing tools and solutions to do performance testing on Big Data.

18:30
Vijayanand Kuppurao (IBM Corporation, United States)
Satheeshbabu K (IBM India PVT Limited, India)
Blockchain - Hyperledger - Performance Evaluation
PRESENTER: Satheeshbabu K

ABSTRACT. Blockchain Hyperledger Performance Assurance

Blockchain is a distributed database predominantly implemented without a central authority or central repository. A blockchain withstands faults and attacks by using redundant checking at multiple nodes across the network without central coordination. This redundant checking increases the resiliency of blockchain nodes but also increases complexity in terms of performance when the blockchain is large. This paper details performance assurance best practices, metrics and unique performance attributes.

The IBM solution focuses on performance evaluation by measuring the performance of a system under test. This evaluation validates system-wide response time or latency and calculates the time to write a block to persistent storage. The aim of any performance evaluation is to understand and document the performance of the system being tested. This involves measuring the outcomes when dependent variables are changed.

Blockchain performance evaluation and capturing the associated metrics is the necessary first step to define and benchmark blockchain system performance.

The performance evaluation configuration comprises a test harness (the load injector to multiple nodes, in performance terms the controller), clients (load generators that submit automated user transactions to the nodes, and an observing client that captures the responses from the system under test), and the nodes, the system under test, interconnected in a blockchain network. Please refer to the architecture diagram in Slide 5.

Some of the common blockchain terms are consensus (distributed network transactions), commit (a transaction written to a database), finality (a transaction committed or saved in the database), network size, query, reads, state and global state, and transaction.

The blockchain performance metrics are:
- Read Latency = time when response received - submit time
- Read Throughput = total read operations / total time in seconds
- Transaction Latency = (confirmation time @ network threshold) - submit time
- Transaction Throughput = total committed transactions / total time in seconds

Test results can be independently reproducible. All the environment parameters and test software, including any workload, should be identified properly to define and evaluate the blockchain performance.
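As an illustrative sketch (not from the paper), the metrics above can be computed directly from the per-operation timestamps captured by the observing client:

```python
# Compute the four metrics defined above from raw timestamps.
# Data and names are illustrative only.
from dataclasses import dataclass

@dataclass
class Op:
    submit_time: float       # seconds: when the client submitted the operation
    confirm_time: float      # seconds: when the response/confirmation was observed
    committed: bool = True

def read_latencies(reads):
    return [r.confirm_time - r.submit_time for r in reads]

def read_throughput(reads, total_time_s):
    return len(reads) / total_time_s

def transaction_latencies(txs):
    # Confirmation time at the chosen network threshold minus submit time.
    return [t.confirm_time - t.submit_time for t in txs if t.committed]

def transaction_throughput(txs, total_time_s):
    return sum(1 for t in txs if t.committed) / total_time_s

# Example: three committed transactions observed over a 2-second window.
txs = [Op(0.0, 0.8), Op(0.2, 1.1), Op(0.5, 1.9)]
print(transaction_latencies(txs))          # approx. [0.8, 0.9, 1.4] seconds
print(transaction_throughput(txs, 2.0))    # 1.5 transactions per second
```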

Blockchain Hyperledger Performance Evaluation helps in identifying Transaction characteristics that will lead to robust performance prediction with satisfactory results.

Performance Evaluation helps to predict

• Complexity: Compute-intense capability and complex smart contract features are evaluated
• Data access patterns: Production-like emulation of data reads and writes to mimic the production scenario under load
• Dependencies: Evaluate and demonstrate the transaction and data modelling and dependencies
• Size: Evaluate the size of the transactions, thus identifying the data chunks, latency and network bandwidth, with performance predictions for a live environment

20:00-21:00 Session 11A
20:00
Veng Ly (IBM, United States)
Fimy Hu (IBM, United States)
Seewah Chan (IBM, United States)
Paul Lekkas (IBM, Germany)
Performance Evaluation of Server Consolidation on IBM Z
PRESENTER: Fimy Hu

ABSTRACT. The IBM Z platform is designed and built to offer high security, availability, and scalability for enterprise applications. IBM Z can leverage these strengths, along with its virtualization and internal communication technologies, to consolidate complex, distributed, multi-tier workload environments into a single physical system.

Co-location of multiple servers onto a single Z processor can provide many potential benefits such as enhanced security through reducing external communication, improved network latency and throughput, and lower costs and complexity of system management. An increasing number of IBM customers are taking advantage of these benefits and consolidating their systems onto IBM Z.

Understanding basic system performance and proper system configuration are important initial steps to implement server consolidation. It can lead to better response time and customer satisfaction in addition to lower total cost of ownership.

This lecture session will introduce the basics of performance tuning, the common key performance indicators (KPIs), and the performance testing methodologies used by our Z Performance team. The session will also give an in-depth look at various co-located configurations in an IBM Z environment. Common system configuration considerations will be explored through a series of test scenarios and examples from our Linux on IBM Z application server performance study. In this study, various configuration options were evaluated to determine which provide greater performance benefits. For example, comparisons were made between Open Systems Adapter (OSA) Ethernet and HiperSockets, virtual and native Linux systems, SMT enabled or disabled, etc.

Attendees of this session will be able to understand and apply the basics of performance tuning, identify potential workload bottlenecks, and make better decisions on the system configuration.

Author Biographic Information

Paul Lekkas (slekka@de.ibm.com) is an IBM Distinguished Engineer working for the IBM Systems unit. He is interested in processor performance and design to support very large databases and ERP applications, and helps IBM clients introduce new systems and applications. Paul holds a PhD in nuclear physics.

Seewah Chan (seewah@us.ibm.com) is a Senior Software Engineer at IBM in Poughkeepsie, NY. Since 1996, he has been a member of the SAP on Z performance team. Seewah holds a Bachelor of Science degree in Physics, Mathematics, and Computer Science from SUNY Albany.

Veng Ly (vengly@us.ibm.com) is a Software Engineer at IBM in Poughkeepsie, NY. Since 1998, he has been a member of the SAP on Z team. He holds a Bachelor of Electrical Engineering from the City College of New York and a Master of Science from Polytechnic University.

Fimy Hu (fshu@us.ibm.com) is a Software Engineer at IBM in Poughkeepsie, NY. He has been a member of the SAP on Z team since 2015. He holds a Master of Software Engineering from the New Jersey Institute of Technology.

20:00-21:00 Session 11B
20:00
Tbd Tbd (Client?, United States)
SRE Transformation - Real Life experience and results (CEMEX)

ABSTRACT. In this Experience Sharing session, I will present my experiences helping the CEMEX (GTS) Digital Operations team transform to an SRE way of working. Transformation is truly a journey; this project has spanned over two years. I will cover the following topics:
- Discovery Roadmap
- Monitoring Enhancements
- Improvements with existing tools
- Central Logging implementation using the Elastic Stack
- Dashboards with Grafana
- Implementation of a rigid Root Cause Analysis process
- Incident Commander
- PagerDuty to manage on-call rotation
- ChatOps
- Culture Change

This session will be a participative session. Attendees should come away with some ideas and tips on SRE improvements. BIO: Ray Stoner used to work in the IBM Garage as a Solution Engineer with a focus on Cloud Service Management and Operations. Mr. Stoner is certified as an IBM Technical Specialist Level 3 Thought Leader. He has helped many customers transform to cloud approaches to monitoring, application management and SRE practices. Mr. Stoner joined IBM in 2004 as an IT Specialist for Enterprise Systems Management. He has worked across virtually every industry, assisting IBM customers with their Service Management requirements, from strategy to architecture to deployment. He is a leader for Service Management in cloud environments. With over 25 years of enterprise experience, Mr. Stoner is established as a "go-to" resource within IBM for operations and service management. On a personal note, Ray has been happily married to his wife Terri for over 25 years and is a proud father of a son and a daughter. Ray's hobbies are cooking, BBQing, baking, sports and hard rock music. Ray also enjoys photography and pony cars.

21:00-22:00 Session 12A
21:00
Sandeep Mangalath (IBM, India)
Jyoti Chawla (IBM, United States)
Protecting Operator managed stateful apps with Velero

ABSTRACT. The open source project Velero has picked up pace with its integration into VMware's Tanzu portfolio. This session provides an opinionated view of Velero, covering the state of the project, its applicability to protecting stateful applications, and its strengths and limitations as a Kubernetes-aware backup and restore service. This will be followed by a hands-on walkthrough of protecting an Operator-managed stateful application running on a Kubernetes cluster using Velero, giving an experiential demonstration of Velero in the context of a real-world enterprise application. Participants will walk away with an understanding of Velero: what it is, when to use it and how to use it.
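As a small, hypothetical illustration of the kind of step such a walkthrough covers, a Velero Backup object for an application namespace can be created through the Kubernetes API (namespace and names are invented; the equivalent `velero backup create` CLI command does the same thing):

```python
# Create a Velero Backup custom resource for one application namespace.
# Assumes Velero is installed in the "velero" namespace of the cluster.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

backup = {
    "apiVersion": "velero.io/v1",
    "kind": "Backup",
    "metadata": {"name": "orders-db-backup", "namespace": "velero"},
    "spec": {
        # Back up everything the Operator manages for this application.
        "includedNamespaces": ["orders-db"],
        "ttl": "720h0m0s",
    },
}

api.create_namespaced_custom_object(
    group="velero.io", version="v1",
    namespace="velero", plural="backups", body=backup)
```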

 

45 mins talk/demo 15 mins Q&A

21:00-22:00 Session 12B
21:00
Surya Duggirala (IBM Cloud, United States)
Stefan Liesche (IBM Systems, Germany)
Architecture and Performance of Z platform for Cloud Workloads
PRESENTER: Surya Duggirala

ABSTRACT. There are many critical customer workloads deployed on the IBM Z platform. As part of their digital journey, many customers are looking to deploy their cloud native applications on Z and LinuxONE and also to modernize their existing mainframe applications. Z as a platform is also transforming to support cloud native workloads and to enhance the differentiating quality of cloud services in IBM's public cloud. In this session, we will explore the Hyper Protect 1.0 design, give an outlook on the new Hyper Protect 2.0 design, and cover the various features offered by the platform to support cloud workloads, exploiting unique capabilities like confidential computing and data privacy, built-in security, resiliency, I/O performance and more. We will also discuss Hyper Protect Crypto Services, which provide the Keep Your Own Key (KYOK) capability, and their design and performance.