Day 1
09:00 | Conducting a resilience and scalability assessment I - method, tools and outcomes PRESENTER: Andrew Roden ABSTRACT. In this session we will introduce an intuitive, easy-to-learn and easy-to-use method for conducting a resilience and scalability assessment. The method is based upon IBM's Resilience & Performance Engineering and Management Method. The interactive nature of the workshop-based method enables fast information collection and can be extended as desired with deep dives (application profiling; resilience or performance testing; source code analysis). The method is supported by two RADAR tools that visualize the gaps that need to be closed: the NFR RADAR provides an overview of the current and desired capabilities of the solution as a whole to meet the NFRs, while the Solution Scalability RADAR highlights the current and desired scalability at each layer of the solution stack. Templates to document observations and recommendations, in both presentation and document form, are available. Case studies in which this method has been successfully used will be briefly discussed. Bio Lydia Duijvestijn Ms. Duijvestijn is an executive IT Architect and Performance Engineer within IBM GBS BeNeLux. She is a member of the IBM Academy of Technology leadership team and co-leads the worldwide community of practice for performance and capacity. She has led a large number of customer engagements in the areas of design for performance, performance testing and performance troubleshooting. She has been teaching the IBM Universal Method Framework classes for operational modelling (infrastructure design) and architecting for performance for over a decade, both in the BeNeLux region and abroad, and has spoken at several IBM internal and external conferences on subjects related to IT architecture and performance. Bio Andrew Roden Andrew is an experienced Architect in the Complex Solutions Integration and Architecture practice with GBS UKI who specialises in Resilience and Performance Architecture. He is part of the leadership team for the Worldwide Performance and Availability Community of Practice as well as the Co-Lead for the 2020 STEM Technology Council. Andrew has a broad range of experience across Software, Infrastructure and Cloud, as well as verticals including Communications, Automotive, Retail, Public Sector, Industrial and IoT, and Financial Services. |
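Illustrative sketch (not from the session itself): the RADAR visualizations described above map naturally onto a polar chart with one axis per NFR. The following minimal Python/matplotlib example, with hypothetical NFR categories and 1-5 maturity scores, shows how a current-versus-desired NFR RADAR could be rendered; the gap between the two polygons is what the assessment aims to close.

    # Hypothetical NFR RADAR sketch: current vs. desired capability per
    # non-functional requirement. Categories and scores are illustrative,
    # not taken from the IBM method itself.
    import numpy as np
    import matplotlib.pyplot as plt

    nfrs = ["Availability", "Scalability", "Performance", "Recoverability", "Security"]
    current = [2, 3, 3, 1, 4]   # assessed maturity, 1 (ad hoc) .. 5 (optimized)
    desired = [4, 4, 5, 3, 5]   # target maturity agreed in the workshop

    # Close each polygon by repeating its first point.
    angles = np.linspace(0, 2 * np.pi, len(nfrs), endpoint=False).tolist()
    angles += angles[:1]
    current += current[:1]
    desired += desired[:1]

    fig, ax = plt.subplots(subplot_kw={"polar": True})
    ax.plot(angles, current, label="Current")
    ax.plot(angles, desired, label="Desired")
    ax.fill(angles, current, alpha=0.2)
    ax.set_xticks(angles[:-1])
    ax.set_xticklabels(nfrs)
    ax.set_yticks(range(1, 6))
    ax.legend(loc="lower right")
    plt.savefig("nfr_radar.png")  # the gap between the polygons is what must be closed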
10:00 | Addressing the elephant in the cluster: Highlights of the AoT OpenShift Design - Resiliency Squad findings PRESENTER: Haytham Elnahas ABSTRACT. In this session, we'll discuss the output of the AoT OpenShift Design - Resiliency Squad and its recommendations. We'll go through the following points: - Is DR still a valid option for OpenShift, in the traditional sense? - What to do and not to do regarding OpenShift high availability - Workloads, including the Active/Active versus Active/Passive discussion from a workload perspective - How to back up the OpenShift elements - How to back up workloads and data |
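Illustrative sketch (an assumption on our part, not the squad's stated recommendation): one standard high-availability safeguard on OpenShift/Kubernetes is a PodDisruptionBudget, which keeps a minimum number of replicas running during voluntary disruptions such as node drains and upgrades. A minimal example using the Kubernetes Python client follows; the namespace, labels, and replica counts are hypothetical, and a recent client with the policy/v1 API is assumed.

    # Hypothetical sketch: guard a workload against voluntary disruptions
    # with a PodDisruptionBudget. All names and numbers are illustrative.
    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() inside the cluster

    pdb = client.V1PodDisruptionBudget(
        metadata=client.V1ObjectMeta(name="orders-pdb", namespace="shop"),
        spec=client.V1PodDisruptionBudgetSpec(
            min_available=2,  # never drain below two running replicas
            selector=client.V1LabelSelector(match_labels={"app": "orders"}),
        ),
    )
    client.PolicyV1Api().create_namespaced_pod_disruption_budget(
        namespace="shop", body=pdb
    )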
13:00 | Capacity and Performance Testing on Red Hat OpenShift Container Platform PRESENTER: Suman Athuluri ABSTRACT. Introduction: As part of the move to emerging technologies, legacy applications are migrating to the cloud for a variety of reasons. With locally hosted applications, a company must maintain the infrastructure on its own; to ensure the application can handle a growing number of concurrent users in subsequent years, enough hardware resources must be available, which leads to high cost. The Red Hat OpenShift Container Platform enables efficient container orchestration, allowing rapid container provisioning, deployment, scaling, and management at low cost. That is why the OpenShift Container Platform is in such demand in the market. Problem Statement: OpenShift Container Platform has a microservices-based architecture of smaller, decoupled units that work together. It runs on top of a Kubernetes cluster, with data about the objects stored in etcd, a reliable clustered key-value store. A node provides the runtime environment for containers. Each node in a Kubernetes cluster has the services required to be managed by the master, as well as the services required to run pods, including the Docker service, a kubelet, and a service proxy. There are limits on objects in OCP (OpenShift Container Platform). For example, in large clusters the maximum number of nodes is 2,000. Similarly, there is a limit on the number of pods per node. In most cases, exceeding these thresholds results in lower overall performance. Solution/Approach: While planning the environment, one needs to determine how many pods are expected to fit per node. The number of pods expected to fit on a node depends on the application itself - its memory, CPU, and storage requirements. Given the node configuration, open-source and licensed tools such as JMeter and LoadRunner, respectively, can be used for capacity and performance testing to avoid future risks of poor performance. Conclusion: The Red Hat OpenShift Container Platform is in high demand in the market as it accelerates development and makes it easy to migrate containerized processes to a new operating system at low cost. But as everything has its advantages and disadvantages, there are limits on objects based on the cluster size that impact the performance of the application. |
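Illustrative sketch: the pods-per-node planning step described above is, at its core, arithmetic over a node's allocatable resources and each pod's requests, capped by the platform's per-node pod limit. All figures below, including the reserved system overhead, are purely illustrative.

    # Back-of-the-envelope pods-per-node estimate. All numbers are illustrative;
    # real planning must also respect the platform's configured per-node pod limit.
    def max_pods_per_node(node_cpu_m, node_mem_mi, pod_cpu_m, pod_mem_mi,
                          reserved_cpu_m=500, reserved_mem_mi=1024,
                          platform_pod_limit=250):
        """Return how many pods fit on one node, capped by the platform limit."""
        cpu_bound = (node_cpu_m - reserved_cpu_m) // pod_cpu_m
        mem_bound = (node_mem_mi - reserved_mem_mi) // pod_mem_mi
        return min(cpu_bound, mem_bound, platform_pod_limit)

    # Example: 16-core (16000m), 64 GiB node; pods requesting 200m CPU / 512 MiB.
    print(max_pods_per_node(16000, 65536, 200, 512))  # CPU-bound: 77 pods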
16:00 | Keynote: Resiliency and Business Update Panel Discussion PRESENTER: Andrea Sayles ABSTRACT. Keynote |
17:00 | Deploy, Optimize, Scale, Perform with Continuous Deliveries - How IBM Sterling Order Management makes it happen PRESENTER: Vijaya Bashyam ABSTRACT. How can I run my enterprise order management application in a scalable, performant manner with less overhead, increased portability, and reduced overall TCO in an on-premises or hybrid environment? How do I tailor my application to various cloud providers while still meeting my ever-growing business needs? How do I keep up with ever-evolving cloud applications and open-source technologies without breaking my budget? IBM Sterling Order Management provides an answer to all of these with its enterprise production containers deployable on Red Hat OpenShift. Customers' business needs continue to grow at a much faster rate, with end users expecting an anywhere-anytime experience, which in turn drives innovation in deployment and automation tools. In this session we will learn how a product team handled these challenges and provided a one-stop solution to significantly reduce cost and improve DevOps strategies for enterprise-scale Docker container environments using the Red Hat OpenShift Container Platform. Everything from bringing up a quick developer instance on a laptop using open-source tools to setting up a fully auto-scalable production environment will be addressed, along with how customers can integrate the container-based architecture in a hybrid model across cloud platforms and integrate with other SaaS applications. |
18:00 | Performance Testing Strategies for BIG DATA Applications PRESENTER: Himika Gupta ABSTRACT. Introduction: Big Data is an area where all IT organizations are expanding and diving deep to acquire technologies that handle large amounts of data with speed and accuracy. The objective is not only to process voluminous data but also to maintain speed and security. Where speed is concerned, performance testing is what we need to focus on alongside the data processing. Problem Statement: Performance testing of Big Data is challenging because it is composed of different technologies (Hadoop, NoSQL, MapReduce), so no single tool is enough to test all the components. Since Big Data deals with large amounts of data, a large amount of test data is required. The absence of robust test data management strategies and the lack of performance testing tools within many IT organizations make big data testing one of the most perplexing technical propositions a business encounters. Also, replicating the production environment is sometimes difficult and costly. Solution/Approach: Big data is defined as a collection of very large amounts of data, which can be structured, unstructured, or semi-structured, and which cannot be processed using traditional computing techniques. Testing these kinds of datasets therefore requires new technologies, tools, and techniques. Big data can be explained with the help of the four V's - Velocity, Variety, Volume, and Veracity - that is, the speed, kinds, amount, and accuracy of the data being fetched or uploaded; to make a big data testing strategy effective, all of these components should be tested and monitored properly. This paper will elaborate approaches to testing the four dimensions above with different tools such as YCSB (Yahoo Cloud Serving Benchmark), LoadRunner (with AMQP and STOMP benchmarks), JMeter, Hadoop benchmarks, etc. Conclusion: Big data testing can be done effectively if all the V's of big data are tested. There are many testing techniques that can be applied to obtain results for response time, maximum user data capacity, GUI behavior, and customer requirements for data. Since big data is a collection of structured, semi-structured, and unstructured data, the testing solution needs to be selected based on the complexity of the data. Performance testing for big data comes with many challenges, such as the diversity of the data, the variety of technologies used, and the volume of data. Traditional performance benchmarking methods are not enough for NoSQL databases due to differences in fault tolerance and error recovery methods, load distribution, and many other factors. For Big Data, enterprises need to test all performance-critical areas, such as data ingestion, throughput, etc. One important focus area is testing the performance of the underlying NoSQL database for scalability and reliability. In this paper, we will describe the challenges that can arise during Big Data performance testing and investigate the existing tools and solutions for performance testing of Big Data. |
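Illustrative sketch: the response-time and throughput measurements that tools like YCSB, JMeter, and LoadRunner automate reduce to timing individual operations and aggregating the samples. In the minimal Python harness below, do_operation is a hypothetical stand-in for one read or write against the big-data store under test.

    # Minimal latency/throughput harness sketch. `do_operation` is a hypothetical
    # placeholder for one read or write against the data store under test.
    import time
    import statistics

    def do_operation():
        time.sleep(0.002)  # stand-in for a real client call

    def run_test(operations=1000):
        latencies = []
        start = time.perf_counter()
        for _ in range(operations):
            t0 = time.perf_counter()
            do_operation()
            latencies.append(time.perf_counter() - t0)
        elapsed = time.perf_counter() - start
        latencies.sort()
        return {
            "throughput_ops_s": operations / elapsed,
            "p50_ms": 1000 * statistics.median(latencies),
            "p95_ms": 1000 * latencies[int(0.95 * len(latencies))],
        }

    print(run_test())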
18:30 | Blockchain - Hyperledger - Performance Evaluation PRESENTER: Satheeshbabu K ABSTRACT. Blockchain Hyperledger Performance Assurance. A blockchain is a distributed database predominantly implemented without a central authority or central repository. A blockchain withstands faults and attacks by using redundant checking at multiple nodes across the network without any coordination. This redundant checking increases the resiliency of blockchain nodes but increases complexity in terms of performance when the blockchain is large. This paper details performance assurance best practices, metrics, and unique performance attributes. The IBM solution focuses on performance evaluation by measuring the performance of a system under test. This evaluation validates system-wide response time or latency and calculates the time to write a block to persistent storage. The aim of any performance evaluation is to understand and document the performance of the system being tested; this involves measuring the outcomes when dependent variables are changed. Evaluating blockchain performance and capturing the associated metrics is the necessary first step in defining and benchmarking blockchain system performance. The performance evaluation configuration comprises a test harness (a load injector to multiple nodes - in performance terms, a controller), clients (load generators that submit automated user transactions to the nodes, and an observing client that captures responses from the system under test), and the nodes themselves, interconnected in a blockchain network, which form the system under test. Please refer to the architecture diagram in Slide 5. Some common blockchain terms are consensus (distributed network transactions), commit (a transaction written to a database), finality (a transaction committed or saved in the database), network size, query, reads, state and global state, and transaction. The blockchain performance metrics are: Read Latency = time response received - submit time; Read Throughput = total read operations / total time in seconds; Transaction Latency = (confirmation time @ network threshold) - submit time; Transaction Throughput = total committed transactions / total time in seconds. Test results should be independently reproducible; all environment parameters and test software, including any workload, should be identified properly to define and evaluate blockchain performance. Blockchain Hyperledger performance evaluation helps in identifying transaction characteristics that lead to robust performance prediction with satisfactory results. Performance evaluation helps to predict: • Complexity: compute-intensive capabilities and complex smart contract features are evaluated • Data access patterns: production-like emulation of data reads and writes to mimic the production scenario under load • Dependencies: evaluate and demonstrate the transaction and data modelling and their dependencies • Size: evaluate the size of the transactions, thereby identifying the data chunks, latency, and network bandwidth, with performance predictions for a live environment. |
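Illustrative sketch: the four metrics defined above follow directly from per-transaction timestamps. The Python sketch below computes them over hypothetical transaction records; the field names are illustrative and not taken from Hyperledger Caliper or any IBM tool.

    # Compute the four blockchain metrics from per-transaction timestamps.
    # Record fields are hypothetical; all times are in seconds.
    def blockchain_metrics(reads, transactions):
        read_latencies = [r["response_time"] - r["submit_time"] for r in reads]
        txn_latencies = [t["confirm_time"] - t["submit_time"] for t in transactions]

        read_window = (max(r["response_time"] for r in reads)
                       - min(r["submit_time"] for r in reads))
        txn_window = (max(t["confirm_time"] for t in transactions)
                      - min(t["submit_time"] for t in transactions))

        return {
            "avg_read_latency_s": sum(read_latencies) / len(read_latencies),
            "read_throughput_ops_s": len(reads) / read_window,
            "avg_txn_latency_s": sum(txn_latencies) / len(txn_latencies),
            "txn_throughput_tps": len(transactions) / txn_window,
        }

    reads = [{"submit_time": 0.0, "response_time": 0.05},
             {"submit_time": 0.1, "response_time": 0.18}]
    txns = [{"submit_time": 0.0, "confirm_time": 1.2},
            {"submit_time": 0.5, "confirm_time": 2.0}]
    print(blockchain_metrics(reads, txns))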
20:00 | Performance Evaluation of Server Consolidation on IBM Z PRESENTER: Fimy Hu ABSTRACT. The IBM Z platform is designed and built to offer high security, availability, and scalability for enterprise applications. IBM Z can leverage these strengths, along with its virtualization and internal communication technologies, to consolidate complex, distributed, multi-tier workload environments into a single physical system. Co-location of multiple servers onto a single Z processor can provide many potential benefits, such as enhanced security through reduced external communication, improved network latency and throughput, and lower cost and complexity of system management. An increasing number of IBM customers are taking advantage of these benefits and consolidating their systems onto IBM Z. Understanding basic system performance and proper system configuration are important initial steps in implementing server consolidation; they can lead to better response times and customer satisfaction in addition to a lower total cost of ownership. This lecture session will introduce the basics of performance tuning, the common key performance indicators (KPIs), and the performance testing methodologies used by our Z Performance team. The session will also give an in-depth look at various co-located configurations in an IBM Z environment. Common system configuration considerations will be explored through a series of test scenarios and examples from our Linux on IBM Z Application Server performance study, in which various configuration options were evaluated to determine which provide greater performance benefits. For example, comparisons were made between Open Systems Adapter (OSA) Ethernet and HiperSockets, virtual and native Linux systems, SMT enabled or disabled, etc. Attendees of this session will be able to understand and apply the basics of performance tuning, identify potential workload bottlenecks, and make better decisions on system configuration. Author Biographic Information ------------------------------ Paul Lekkas (slekka@de.ibm.com) is an IBM Distinguished Engineer working for the IBM Systems Unit. He is interested in processor performance and design to support very large databases and ERP applications, and helps IBM clients introduce new systems and applications. Paul holds a PhD in nuclear physics. Seewah Chan (seewah@us.ibm.com) is a Senior Software Engineer at IBM in Poughkeepsie, NY. Since 1996, he has been a member of the SAP on Z performance team. Seewah holds a Bachelor of Science degree in Physics, Mathematics, and Computer Science from SUNY Albany. Veng Ly (vengly@us.ibm.com) is a Software Engineer at IBM in Poughkeepsie, NY. Since 1998, he has been a member of the SAP on Z team. He holds a Bachelor of Electrical Engineering from the City College of New York and a Master of Science from Polytechnic University. Fimy Hu (fshu@us.ibm.com) is a Software Engineer at IBM in Poughkeepsie, NY. He has been a member of the SAP on Z team since 2015. He holds a Master of Software Engineering from the New Jersey Institute of Technology. |
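Illustrative sketch (not the Z Performance team's methodology): comparisons such as OSA Ethernet versus HiperSockets ultimately rest on measuring round-trip latency and throughput between consolidated servers. The minimal Python TCP round-trip timer below uses hypothetical host and port values and assumes an echo server is running on the peer over the interface being compared.

    # Minimal TCP round-trip latency probe. Host/port are hypothetical; run an
    # echo server on the peer (e.g., once over an OSA interface, once over
    # HiperSockets) and compare the resulting medians.
    import socket, statistics, time

    def rtt_samples(host="10.0.0.2", port=7, count=100, payload=b"x" * 64):
        with socket.create_connection((host, port)) as s:
            s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
            samples = []
            for _ in range(count):
                t0 = time.perf_counter()
                s.sendall(payload)
                received = 0
                while received < len(payload):  # echo server returns the payload
                    received += len(s.recv(4096))
                samples.append(time.perf_counter() - t0)
        return samples

    samples = rtt_samples()
    print(f"median RTT: {1000 * statistics.median(samples):.3f} ms")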
21:00 | Architecture and Performance of Z platform for Cloud Workloads PRESENTER: Surya Duggirala ABSTRACT. Many critical customer workloads are deployed on the IBM Z platform. As part of their digital journey, many customers are looking to deploy their cloud-native applications on Z and LinuxONE and to modernize their existing mainframe applications. Z as a platform is also transforming to support cloud-native workloads and to enhance the differentiating quality of cloud services in IBM's public cloud. In this session, we will explore the Hyper Protect 1.0 design, give an outlook on the new Hyper Protect 2.0 design, and cover the various features the platform offers to support cloud workloads, exploiting unique capabilities such as confidential computing and data privacy, built-in security, resiliency, I/O performance, and more. We will also discuss the design and performance of Hyper Protect Crypto Services, which provide the Keep Your Own Key (KYOK) capability. |