PREVAIL 2020: IBM CONFERENCE ON PERFORMANCE ENGINEERING, AVAILABILITY AND SECURITY
PROGRAM FOR THURSDAY, SEPTEMBER 17TH

03:00-04:00 Session 32A
03:00
Swetha P J (IBM, India)
Jyothi Peddalingam (IBM, India)
Performance Testing of Home Automated IoT Devices

ABSTRACT. The Internet of Things (IoT) is a network of physical objects such as vehicles, buildings, and devices with embedded electronics, software, sensors, and network connectivity that enables these objects to collect and exchange data across many sectors, such as security and healthcare. Today's home automation systems, built from many IoT devices, play a crucial role in daily life, allowing users to control the home from a computer or mobile phone and assign actions that should happen. The key requirement for businesses is delivering robust, high-quality IoT solutions to market quickly. Testing large numbers of devices that continuously generate data poses significant challenges in scale, velocity, and variety for internal test teams. In this paper we examine the key testing challenges, testing approaches, and testing solutions for achieving better performance of home automation systems.
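One way a test team might exercise the scale dimension is to simulate a fleet of telemetry-generating devices. The sketch below is illustrative and not from the paper: it assumes the devices speak MQTT, a broker at a hypothetical host, and the paho-mqtt 1.x client library.

    # Minimal device-fleet simulation for an IoT load test (illustrative).
    # Assumptions: MQTT transport, a hypothetical broker host, paho-mqtt 1.x.
    import json
    import random
    import threading
    import time

    import paho.mqtt.client as mqtt

    BROKER = "broker.example.com"  # hypothetical broker host
    NUM_DEVICES = 100              # scale up to probe throughput limits

    def run_device(device_id: int) -> None:
        client = mqtt.Client(client_id=f"sim-device-{device_id}")
        client.connect(BROKER, 1883)
        client.loop_start()
        for _ in range(60):  # five minutes of telemetry at 5 s intervals
            payload = json.dumps({
                "device": device_id,
                "temperature": round(random.uniform(18.0, 30.0), 1),
                "ts": time.time(),
            })
            client.publish(f"home/{device_id}/telemetry", payload, qos=1)
            time.sleep(5)
        client.loop_stop()
        client.disconnect()

    threads = [threading.Thread(target=run_device, args=(i,)) for i in range(NUM_DEVICES)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

Varying NUM_DEVICES and the publish interval exercises the scale and velocity dimensions the abstract calls out.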

03:00-04:00 Session 32B
03:00
Vijay Vardhan (IBM Limited, India)
Performance Testing Using Automation Framework

ABSTRACT. The purpose of this white paper is to highlight performance testing using a DevOps setup within an automation framework.

Some of the key topics addressed in this document:

• The importance of shift left in performance testing
• How IBM in-house tools (IBM Cloud, Workload Scheduler, UrbanCode Deploy, and Netcool) are used effectively in DevOps alongside open-source tools such as Jenkins, JMeter, and TestNG, and commercial tools such as New Relic and GitLab
• The level of automation achieved in automating performance test cases (a minimal CI sketch follows this list)
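For illustration, here is a minimal sketch of such a CI gate in Python, as one might wire into a Jenkins stage. The test plan path and SLA threshold are illustrative assumptions, not from the paper; it assumes jmeter is on the PATH and that JMeter writes its default CSV-format JTL results file.

    # Minimal CI gate around a headless JMeter run (illustrative).
    import csv
    import statistics
    import subprocess
    import sys

    # Run the (hypothetical) test plan in non-GUI mode; fail the stage on error.
    subprocess.run(
        ["jmeter", "-n", "-t", "perf/checkout.jmx", "-l", "results.jtl"],
        check=True,
    )

    elapsed = []
    failures = 0
    with open("results.jtl", newline="") as f:
        for row in csv.DictReader(f):            # default JTL output is CSV
            elapsed.append(int(row["elapsed"]))  # response time in ms
            if row["success"] != "true":
                failures += 1

    p95 = statistics.quantiles(elapsed, n=20)[18]  # 95th percentile
    print(f"samples={len(elapsed)} failures={failures} p95={p95:.0f} ms")
    if failures or p95 > 2000:  # illustrative SLA: p95 under 2000 ms, no errors
        sys.exit(1)             # non-zero exit marks the build red in Jenkins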

04:00-05:00 Session 33A
04:00
James Bradley (IBM GBS, Australia)
Mario Gaon (IBM GBS, Australia)
A Framework for Approaching Organisational Resilience

ABSTRACT. Resilience for an organisation involves much more than just maintaining availability of supporting systems. The challenges to an organisation's resilience can take many diverse forms and threaten an organisation's reason for being, let alone its day-to-day operations. COVID-19 is one recent example, but natural disasters, market shifts and technology advancement have all played a role in bringing down high-profile businesses. This paper proposes a structured framework for defining how an organisation seeks to plan for, and handle, resilience challenges. It takes a business-focussed view of an organisation's desired relative operating level post-challenge compared to pre-challenge. Once an organisation is categorised, it is possible to derive appropriate strategies that can be applied across the organisation to prepare for both foreseen and unforeseen threats. As an entry point for resilience engineering at the organisational level, the framework can facilitate business opportunities across the entire business spectrum, with particular opportunities in Cloud Computing and Data & AI.

04:00-05:00 Session 33B
04:00
David Coleman (IBM, United States)
Cloud Availability

ABSTRACT. Is cloud availability something new? Don't the usual rules for availability design apply? Well, yes and no. The usual rules do apply, but the content is different. And interestingly, that means what is likely to fail changes quite a bit.

06:00-07:00 Session 35A
06:00
Jing Yu (CIO, China)
Making a secure transition to IBM public cloud

ABSTRACT. Distributed Software Commerce (DSWC) is the foundation of one of the largest and most essential Quote-to-Cash systems in IBM. As a "Business Critical/Financially Significant" system, DSWC's portfolio drives over $14B in billed revenue annually, with billings exceeding $25M/hr during peak periods. Last year, DSWC began a journey to modernize all 60 applications and 79 services/interfaces and move them to the IBM public cloud. This talk will share experience and practices on how we protect the whole DSWC portfolio on the IBM public cloud from different aspects: architecture design, application, container, data (SQL and NoSQL databases), operations (DevSecOps), cloud platform, network, etc. After this presentation the audience will know how to build a cybersecurity model on public cloud and will take away real, on-the-ground experiences and practices. These experiences will definitely benefit other IBM portfolios as well as our clients.

06:00-07:00 Session 35B
06:00
Sukumar Ganapathy (IBM, India)
Test Data challenges in Performance Testing

ABSTRACT. Introduction: Performance testing is essential for any web application that a large number of end users will use once it goes live. Before production rollout, thorough performance testing must be performed to evaluate system behaviour under the various workload conditions and volumes defined in the test plan, and to verify that the system meets its SLAs (response time, throughput, and system utilization). Test data plays a vital role in effective and realistic performance testing. However, in many situations, obtaining the right test data, or having the performance testing team set it up themselves, involves many challenges. This white paper describes the challenges involved in meeting the test data requirements for effective performance testing.

Summary: Modern applications involve multiple source and target systems, with multiple communication channels between them. Performance testing involves careful identification of scenarios and workload volumetric details. Based on the identified scenarios, the performance test team works on business scenario validation and test data requirements. Where possible, the test team creates the data itself; otherwise, it raises a request with the test data team to provision the data. Provisioning test data, however, involves multiple challenges and complexities. Some of the common challenges are:

• Test data is not readily available to use.
• Test data comes from different source systems, and the performance testing team has limited knowledge of, or access to, those systems.
• The performance testing team relies on the test data team, which may take a long time to provision the data.
• Support resources are unavailable or occupied with higher-priority work and cannot provision the data on time.
• Test data creation is time-constrained, and tests that require large volumes of non-reusable (destructive) data demand significant time and effort.
• Due to system complexity and restricted permissions, the performance testing team cannot mine the data from the source systems themselves.

Conclusion: Test data requirements should be evaluated as soon as the performance testing requirements are finalized. Based on the workload conditions and the types of testing to be performed, test data volumes are identified and an appropriate request is placed with the test data team. All challenges involved in test data provisioning are documented in the risk/mitigation section of the test plan, and higher management is notified of the risks so that they are aware of the issues.
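As a small illustration of the self-service fallback when upstream provisioning is blocked, the Python sketch below generates unique, non-reusable records for a destructive test. The file name and fields are illustrative assumptions, not from the paper; it assumes the load tool consumes a CSV with one row per virtual-user iteration.

    # Generate synthetic, single-use test data for a destructive flow.
    import csv
    import random
    import uuid

    ROWS = 50_000  # size this to the planned workload volume

    with open("customers.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["customer_id", "email", "order_amount"])
        for _ in range(ROWS):
            cid = uuid.uuid4().hex  # uniqueness keeps destructive runs from colliding
            writer.writerow([cid, f"perf+{cid[:8]}@example.com",
                             round(random.uniform(5, 500), 2)])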

06:30
Nithin S N (IBM, India)
Workload Modeling for Performance Testing Using RPA

ABSTRACT. Performance testing is a practice performed to determine how a system performs in terms of responsiveness and stability under various workloads. It plays a critical part in a system going live: customers and stakeholders gain confidence in the software product they will use for their business based on performance testing results. Historically, many live applications have been taken down because the system could not behave as expected under varying high load at different time intervals. Performance testing with proper workload modelling can eliminate such issues in production. A workload model is designed to identify key test scenarios and the load distribution across these scenarios; performance test results can never be reliable unless the workload model is designed accurately. Workload modelling has generally been performed by manually capturing details such as the most frequently performed critical transactions, the number of calls per hour, and the number of concurrent users, from logs or from the business, which is an obviously time-consuming job. With Robotic Process Automation (RPA) we can overcome these manual hurdles with a reusable and reliable workload model, as discussed in the following sections and sketched below.
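A minimal Python sketch of the kind of log mining such automation performs: deriving the transaction mix from an access log instead of capturing it by hand. The log file name, format assumption (a combined-format line whose request path identifies the transaction), and the figures in the closing comment are illustrative, not from the paper.

    # Derive a transaction mix from an access log (illustrative).
    import re
    from collections import Counter

    LOG_LINE = re.compile(r'"(?:GET|POST|PUT|DELETE) (/[^ ?"]*)')

    counts = Counter()
    with open("access.log") as f:
        for line in f:
            m = LOG_LINE.search(line)
            if m:
                counts[m.group(1)] += 1  # tally each transaction path

    total = sum(counts.values())
    print("transaction mix (top 5):")
    for path, n in counts.most_common(5):
        print(f"  {path:30s} {n:8d}  {100 * n / total:5.1f}%")

    # Little's Law gives the concurrency to simulate: N = X * R.
    # For example, 50 req/s at a 2 s average response time needs ~100 virtual users.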

07:00-08:00 Session 36A
07:00
Harnessing Agile Techniques in Performance Engineering

ABSTRACT. Agile performance engineering advocates a proactive approach to planning and executing performance activities rather than a reactive, postmortem-style approach. Organizations and project teams that have adopted this approach have seen the efficiency of performance engineering increase significantly. This paper shares industry-proven methods and knowledge of processes, practices and tools that can help increase the efficiency of project performance engineering efforts. The Agile approach, when used appropriately, is effective in addressing the downsides of the Waterfall model. Performance engineering executed using the agile model becomes an iterative process, carried out in collaboration with the development of the application. Approaching performance engineering with an Agile, iterative approach helps in meeting product performance goals faster and more efficiently, and it offers many more benefits than the traditional Waterfall approach. The most important benefit is that it saves the cost of fixing performance issues later in the development cycle: performance issues are identified early in the agile development cycle and can therefore be fixed in time, before they have a major impact on the application. The final product is a well-performing application.

07:00-08:00 Session 36B
07:00
Jyothi Peddalingam (IBM India pvt ltd, India)
Network Virtualization Concept for Performance Testing

ABSTRACT. Change is hitting Information Technology (IT) solutions exponentially, and the global economic calamity requires IT to do more with less. When deployed live, applications face many challenges, and one of the main challenges is performance. The performance of an application depends on many factors; one that has recently come into focus is network performance, which is a major issue across applications. Network metrics shape the measured performance of an application and must be managed well, and one way to improve them is network virtualization. By contrast, CPU and memory utilization are the metrics most commonly examined when deploying an application.

In this paper, we share how this perfect storm could impact IT practices, how performance testing has embraced today's virtualized application environments, and how the combination of HP LoadRunner for performance testing with NV tools for WAN/LAN emulation can dramatically increase test efficiency.
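As a rough open-source analogue to the commercial NV tooling, the sketch below drives Linux tc/netem from Python to emulate WAN latency on a load injector before a test run. It assumes a Linux host, root privileges, and eth0 as the egress interface; none of this is from the paper.

    # Emulate a WAN link with tc/netem around a performance test (illustrative).
    import subprocess

    def add_wan_delay(interface: str = "eth0", delay_ms: int = 100, jitter_ms: int = 20) -> None:
        """Shape all egress traffic on the interface to emulate WAN latency."""
        subprocess.run(
            ["tc", "qdisc", "add", "dev", interface, "root", "netem",
             "delay", f"{delay_ms}ms", f"{jitter_ms}ms"],
            check=True,
        )

    def clear_wan_delay(interface: str = "eth0") -> None:
        """Remove the emulation after the run so baseline measurements stay clean."""
        subprocess.run(["tc", "qdisc", "del", "dev", interface, "root"], check=True)

Wrapping a load test between add_wan_delay() and clear_wan_delay() lets the same script be measured under LAN and emulated-WAN conditions.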

08:00-09:00 Session 37: Keynote: Double Trouble: how Operations Risk Insights mitigates the pandemic and global cyclone threats
08:00
Thomas Ward (IBM Chief Data Office, United States)
Rahul Nahar (IBM Chief Analytics Office (CAO), United States)
Chester Karwatowski (IBM Chief Information Office (CIO), United States)
Double Trouble: how Operations Risk Insights mitigates the pandemic and global cyclone threats
PRESENTER: Thomas Ward

ABSTRACT. Operations Risk Insights (ORI) identifies severe natural and man-made disasters so that IBMers and disaster relief partners can respond quickly when the most vulnerable locations and populations are impacted, across the US and globally. ORI monitors and mitigates risks from all significant global disasters, including the COVID-19 pandemic, global cyclones (tropical storms, typhoons and hurricanes), flooding, earthquakes, wildfires, and even man-made crises such as social unrest or labor strikes.

ORI now runs on the Red Hat OpenShift platform for hybrid, multi-cloud integration for IBM, our disaster relief partners, and clients. ORI is open to all IBMers and is used regularly by over 1,200 employees for Business Continuity Planning (BCP) and by the IBM Global Crisis Management Team (CMT) for recovery. Furthermore, in partnership with the Corporate Social Responsibility (CSR) team and Call for Code, ORI is used weekly by six disaster relief non-profit groups, including Save the Children. For more information, see the following IBM article: https://newsroom.ibm.com/ORI-nonprofits-disaster.

ORI runs on the open-source IBM Carbon Design UI/UX platform and uses geo-spatial visualization in alignment with The Weather Company (TWC), with Mapbox tiles and the Pangea SDK for the best visualization of risks and user experience globally.

ORI was featured in the first COVID-19 all-hands communication from our IBM CEO on March 9th. It was also called out in the IBM 2019 Corporate Social Responsibility report from the new IBM CEO in July: https://www.ibm.org/responsibility/2019/case-studies/ibm-covid-19-volunteers.

ORI was featured at PREVAIL 2019 as a poster session and was later selected as a keynote presentation, where it was very well received. With COVID-19 and accelerating global crises, ORI usage and its essential benefits to IBM have grown dramatically in the past year. ORI has delivered $15M in business benefit to IBM over the past 3 years and has strong interest from our clients as a hybrid multi-cloud, IoT, AI, big data analysis, and Watson showcase.

IBM Business Continuity professionals sleep better at night with ORI as their trusted SaaS, which provides the heavy lift of real-time social media analysis from thousands of data sources hourly, coupled with geo-spatial monitoring of over 25,000 global points of interest in near real time.
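At its core, the geo-spatial half of such monitoring reduces to proximity checks between events and points of interest. A minimal Python sketch with illustrative coordinates, radii, and site names (not ORI's actual data or method):

    # Flag points of interest inside an event's impact radius (illustrative).
    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two (lat, lon) points in kilometres."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
        return 6371 * 2 * math.asin(math.sqrt(a))

    points_of_interest = {"Raleigh site": (35.78, -78.64), "Austin site": (30.27, -97.74)}
    storm = {"name": "Hurricane X", "lat": 34.9, "lon": -78.9, "radius_km": 250}

    for name, (lat, lon) in points_of_interest.items():
        d = haversine_km(storm["lat"], storm["lon"], lat, lon)
        if d <= storm["radius_km"]:
            print(f"ALERT: {name} is {d:.0f} km from {storm['name']}")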


09:00-10:00 Session 38A
09:00
Haytham Elkhoja (IBM, UAE)
Principles of data integrity and transaction consistency in a multi-active cloud architecture

ABSTRACT. We will discuss and show how we can guarantee strong data integrity and consistency by relying on Event-driven Architectures and new Cloud-native patterns.

I present architectural methods and principles that were recently delivered at a large financial institution in Europe during its journey to an Always On/Multi-Active architecture.

In a microservices and cloud-native world, where applications must be decoupled from the underlying platform and hardware, architects and developers have to rethink how to achieve transactional consistency by modernizing applications to new patterns such as Saga, Outbox, and Event Sourcing, relying on an Always On/Always Consistent Kafka event stream platform; the Outbox pattern is sketched below.
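For readers unfamiliar with the Outbox pattern named above, a minimal sketch: the business write and the event record commit in one local transaction, and a separate relay drains the outbox to Kafka. Here sqlite3 stands in for the service database and the publish call is stubbed with a print; this is an illustration, not the architecture presented in the talk.

    # Outbox pattern: atomic business write + event record, relayed later.
    import json
    import sqlite3

    db = sqlite3.connect("orders.db")
    with db:
        db.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, total REAL)")
        db.execute("CREATE TABLE IF NOT EXISTS outbox (id INTEGER PRIMARY KEY, "
                   "topic TEXT, payload TEXT, published INTEGER DEFAULT 0)")

    def place_order(total: float) -> None:
        # One ACID transaction covers both writes, so an event can never be
        # lost, nor published for a rolled-back order.
        with db:
            cur = db.execute("INSERT INTO orders (total) VALUES (?)", (total,))
            event = json.dumps({"order_id": cur.lastrowid, "total": total})
            db.execute("INSERT INTO outbox (topic, payload) VALUES (?, ?)",
                       ("orders", event))

    def relay_once() -> None:
        # A poller (or a CDC tool) drains unpublished events to the broker.
        for row_id, topic, payload in db.execute(
            "SELECT id, topic, payload FROM outbox WHERE published = 0"
        ):
            print(f"publish to {topic}: {payload}")  # stand-in for a Kafka producer send
            with db:
                db.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))

    place_order(99.50)
    relay_once()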

09:00-10:00 Session 38B
09:00
Aslam Shaik (IBM, UAE)
A Journey from IGNITE Performance Testing/Engineering Frameworks to a Monitoring Console at the Control Tower

ABSTRACT. The client, Etihad Airways, is the second-largest airline in the UAE. Its head office is in Khalifa City, Abu Dhabi, near Abu Dhabi International Airport. Etihad commenced operations in November 2003. More than 200 applications were in place, with multiple vendors involved in development and maintenance. There were multiple instances where end users raised concerns about slowness in critical applications, and support teams had difficulty isolating the bottleneck due to the unavailability of historical end-to-end performance data. The existing tools at Etihad could not proactively monitor and address system stability before an issue was reported by a user. This resulted in:

• Business impact
• Loss of productivity
• Poor end-user experience

IBM Approach: IBM leveraged its IGNITE Performance Testing/Engineering frameworks and proposed a fully customized solution, in line with best practices, to address the ongoing limitations. Key highlights of the solution:

• Provides the ability to replicate end-user transactions and document end-user experience performance metrics
• Automatically captures errors, crashes, page-load details, and other performance KPI metrics for an entire user session
• Proactively manages key business transactions using synthetic monitoring methodologies
• Sends alerts

Client value:

• IBM as a differentiator compared to other competitors
• Addressed the ongoing limitations of the existing monitoring tools
• A standard, streamlined, best-practice IT service monitoring model aligned to the client's core values
• Commitment to defined SLAs through 24x7 monitoring
• Productivity and service improvements
• Problems identified proactively with real-time data
• Awareness of performance issues and potential business impact before a real user raises the issue

In this paper, we discuss how the performance testing frameworks have been used to proactively monitor virtual servers from an end-user perspective, determine slowness, track end-user behavior, and raise alerts.

10:00-11:00 Session 39A
10:00
Haytham Elkhoja (IBM Services, UAE)
Availability in a Cloud-native world. Guidelines for mere mortals.

ABSTRACT. I present architectural methods, patterns and practices that are to be followed by developers, SREs and software architects when building and maintaining cloud-native applications and services that need to provide the highest levels of availability.

The methods describe how to provide *true* five nines (99.999%) availability for end-to-end business services by incorporating Site Reliability Engineering (SRE), DevOps, Microservices, Chaos Engineering, Cloud-native Architectures, Application Modernization, Multi-Availability Regions, Geo-dispersity, Data Consistency, Performance and Scalability, Content Delivery Networks (CDN), and Software-Defined Environments (SDE); a worked availability example follows.
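A short worked example of the availability arithmetic behind that claim, with illustrative figures (not from the talk): component availabilities multiply in series, while n independent replicas give 1 - (1 - a)^n, which is why end-to-end five nines forces redundancy across regions.

    # Series vs. redundant availability (illustrative figures).
    def series(*avail):
        """Every component must be up, so availabilities multiply."""
        out = 1.0
        for a in avail:
            out *= a
        return out

    def redundant(a, n):
        """Service is up if any of n independent replicas is up
        (assumes independent failures and instant failover)."""
        return 1 - (1 - a) ** n

    chain = series(0.9999, 0.9999, 0.9999, 0.9999)  # four 99.99% tiers in series
    print(f"series chain: {chain:.6f}")             # ~0.9996, short of five nines
    two_regions = redundant(0.999, 2)               # two independent 99.9% regions
    print(f"two regions:  {two_regions:.6f}")       # 0.999999, beyond five nines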

10:00-11:00 Session 39B
10:00
Aslam Shaik (IBM, UAE)
IBM IGNITE Cloud Performance Testing/Engineering case study on Private Data Center Migration at Etihad Airways

ABSTRACT. Data center migration is not just migrating infrastructure from the existing landscape to a new one; it is keeping the business intact by ensuring that application performance remains normal, with no degradation in user experience and minimal downtime. An effective performance testing strategy, and the testing itself, is of critical importance to a successful data center migration. In this session, we share our experience of testing and optimizing performance in a private data center migration covering 202 applications, simulating realistic user load from 23 geographical locations across the globe.

12:00-13:00 Session 41A
12:00
Rik Lammers (IBM, Netherlands)
SAFe, Resilience and SRE

ABSTRACT. Objective of the lecture: discussing resilience and SRE in the context of SAFe. Is it covered or not? How would SRE fit? How should one deal with resilience matters in a SAFe context?

SAFe (Scaled Agile Framework) is a broadly used approach in the market, especially with our clients. SAFe in itself is very good: DevOps is one of its foundational elements, so the method supports the shift-left move as well as the continuous need for quality and the avoidance of technical debt. However, the emphasis is on business functionality and less on resilience and operational matters. This implies that some creativity is needed to ensure that infrastructure management and operations are covered as well.

The lecture will start with a short introduction to SAFe, focusing on NFR, infrastructure and management coverage. It is largely the result of large private cloud engagements where the client insisted on both SAFe and SRE.

The author believes that SRE practices require methods and frameworks just like other professions in IT; take, for example, developers (Scrum, RUP) or architects (TOGAF, IBM UMF). SAFe could be a fitting candidate.

12:00-13:00 Session 41B
12:00
Christopher Giblin (IBM Research, Switzerland)
Grant Miller (IBM F&O, United States)
Pascal Vetsch (IBM Research, Switzerland)
Andreas Wespi (IBM Research, Switzerland)
Monica Manni (IBM Research, Switzerland)
Dave Ryan (Global Chief Data Office Finance and Operations, United States)
Our Journey Towards Continuous Compliance

ABSTRACT. Continuous compliance is a popular, widely used term. There is a common perception that implementing continuous compliance is a straightforward task, thanks to the rich set of compliance products on the market. However, practitioners know that maintaining continuous compliance is a genuinely hard problem: each environment has its own specific challenges that require creative approaches for achieving and maintaining compliance. In this talk we present methods, tools, and processes that we have developed for the continuous compliance of the IBM GCDO Cognitive Enterprise Data Platform (CEDP).

An up-to-date asset inventory is at the core of any compliance solution. Many of the existing solutions rely on an inventory that is maintained within the solution itself. However, this hinders the interoperability of different compliance solutions because there is no longer a single master inventory. Our newly developed solution is based on a standalone, automatically maintained master inventory that integrates with IBM compliance solutions such as the IBM Shared Operational Services (SOS) and the IBM Mixed Address Database (MAD).

Compliance is a joint effort and relies on multiple stakeholders. Assessing a system's adherence to IBM's IT Security Standard (ITSS) is the responsibility of the system owners; however, they must be empowered to do the assessments in an efficient and transparent way. Our ITSS compliance framework supports, in an integrated fashion, compliance assessments and the validation and collection of compliance data, and it feeds higher-level applications such as compliance dashboards. While initially developed for CEDP, it is applicable to any other environment striving for continuous compliance.

The presentation covers not only the newly developed tooling but also its management and the related compliance processes.

16:00-17:00 Session 43: Keynote
16:00
Rami Akkiraju (IBM, United States)
AI Ops?? To be added by Rami

ABSTRACT. To be added by Rami

17:00-18:00 Session 44A
17:00
Samir Nasser (IBM, United States)
Application Performance & Resiliency Tuning Approaches

ABSTRACT. A brief summary of performance and resiliency tuning approaches is presented, followed by a product-agnostic performance and resiliency tuning approach that the presenter has used in all of his tuning engagements. The approach does not focus on any particular product; nevertheless, it has been used successfully on distributed solutions involving WebSphere Application Server, ODM Decision Server Events, ODM Decision Server Rules, Oracle, DB2, MQ, and DataPower, on both RHEL Linux and AIX. It is a top-down approach that starts with the performance and resiliency requirements, the solution topology, and the end-to-end request flows. The presenter will showcase a few real-world use cases where this tuning approach has been successfully employed.

17:00-18:00 Session 44B
17:00
Utpal Mangla (IBM, Canada)
Mathews Thomas (IBM, United States)
Sharath Prasad (IBM, United States)
Juel Raju (IBM, United States)
5G and Edge

ABSTRACT. Learning Objectives: Gain a deeper understanding of 5G/edge from a performance, availability and security perspective

Expected Outcomes: Attendees can apply the learnings to 5G and edge implementations they may be working on

Session Type: Innovative Point of View / Experience Sharing

Delivery Method: Lecture

Tag: Performance, Availability, Security

Customer: Multiple Global Telecom customers implementing 5G/edge

Desired amount of time: 45 min

Recent investment and development of 5G has resulted in Edge Computing becoming a key technology to improve performance and availability. This session will discuss the following topics with a focus on improving performance, availability and reliability of solutions using 5G edge computing:

-High-level overview of 5G Edge and key performance, availability and security challenges for implementing an end-to-end edge solution from an application and network perspective

-5G Edge use cases from the Telecommunications industry that should be considered from a performance, scalability and security perspective

-Architecture overview and how key components including IBM Edge Application Manager (IEAM) and Telco Network Cloud Platform with associated tooling such as Agile Lifecycle Manager (ALM), Netcool Operations Insights (NOI) and Agile Service Manager (ASM) can improve the performance and resilience of solutions running AI and analytics workloads.

-Demonstration and details of an actual 5G implementation at the edge, with multiple AI and analytics application workloads running on a Cloud Pak at the MEC (Multi-Access Edge Computing) layer and on multiple devices managed by IEAM and ACM. Integration of the above with the Telco Network Cloud Platform using key 5G xNFs spanning 5G Core, vRAN and IMS, with functions such as 5G slicing with closed-loop automation to improve performance, availability and reliability, will also be demonstrated.

-Customer implementation examples, lessons learned from the above experience and key points to consider to be successful in this journey.

18:00-19:00 Session 45A
18:00
Alan Lee (IBM Canada Ltd., Canada)
DB2's new integrated cluster manager providing High Availability

ABSTRACT. The Db2 High Availability feature, which enables integration between the Db2 server and Tivoli System Automation for Multiplatforms (TSA), has been the best practice for automated failover to an HADR standby for the past 10+ years. It is popular, well adopted, and continues to attract many mission-critical deployments worldwide. To align with the overall IBM cloud strategy, while also addressing a recent surge of requests to deploy this solution in non-containerized cloud environments and resolving existing limitations in TSA, Pacemaker has been anointed the new "cluster manager of choice" going forward, with goals to enhance the existing integration, further lower total cost of ownership, and provide superior service compared to its predecessor. This session takes a bottom-up approach to introduce the new cluster manager infrastructure, followed by the setup, configuration, integration with Db2, the upgrade procedure, operational differences, failure behaviors, troubleshooting/problem determination, and, last but not least, a sneak peek at the future roadmap.

Objective 1: Understand the architectural differences between the Db2 High Availability feature with TSA vs. Pacemaker

Objective 2: Overview of the setup, upgrade, and configuration of the cluster using the new infrastructure

Objective 3: Scenario Walkthroughs

18:00-19:00 Session 45B
18:00
Kevin Yu (IBM Toronto Software Lab: AI Applications, Canada)
Roxsana Vahdati-Moghaddam (IBM Toronto Software Lab: AI Applications, Canada)
Ingo Averdunk (IBM, Germany)
SRE Feature Delivery to Tackle Technical Debt

ABSTRACT. SRE culture has come a long way in most organizations; we no longer have to debate its importance. However, organizations still struggle with the prioritization of SRE tasks. This challenge leads to technical debt and results in a negative impact on user experience.

This session will take participants through how our organization, AI Applications, is tackling the SRE technical debt challenge. The audience will be shown the stepwise progression we took to enable SRE execution. Highlighted steps include how SRE execution is first broken down into features with measurable maturities, how those features are then driven into the engineering lifecycle as Epics and Stories, and, lastly, how we learn from incident postmortems to feed back into the lifecycle.

The expected outcome is for the audience to learn how our organization, which includes some of the largest SaaS revenue offerings in IBM, is tackling SRE technical debt. The benefit is to help more teams and organizations treat SRE work as features, rather than as non-functional work, prioritized alongside all other features to improve user experience and business results.

19:00-20:00 Session 46: Keynote
19:00
Sam Lightstone (Data and AI, Canada)
KEYNOTE: AI for Performance Engineering

ABSTRACT. AI is disrupting system performance. In this session, CTO and IBM Fellow Sam Lightstone will highlight some of the emerging AI technologies that are creating new opportunities for performance engineering. From machine learning to deep learning to neuromorphic computing, learn how performance engineering is entering a new era beyond the fundamentals of throughput and latency of compute, storage and network. Profound changes are occurring as every layer of the stack is increasingly infused with AI.

20:00-21:00 Session 47A
20:00
Dale McInnis (Global Markets, Canada)
Db2 Resiliency Models: what are Db2 customers really doing?

ABSTRACT. In this presentation we will detail several Db2 resiliency patterns that customers have deployed. We will review the customers' requirements, discuss alternatives, and detail the final architecture chosen, outlining both the pros and cons. Monitoring is key to success in building a resilient environment, so we will also detail what needs to be monitored.

20:00-21:00 Session 47B
20:00
Grant Miller (IBM Global Chief Data Office, United States)
Chris Giblin (IBM Research, Switzerland)
Andreas Wespi (IBM Research, Switzerland)
Ilya Hardzeenka (IBM GCDO, Belarus)
Implementing Policy-Based Access Control in a hybrid cloud datalake
PRESENTER: Grant Miller

ABSTRACT. This presentation will cover the evolution of access management for a complex and distributed set of cloud resources while building IBM's internal data platform. Our key challenge was managing access across a large variety of disparate systems. We will discuss the advantages and disadvantages we encountered along the way, the best-practice patterns we have found, and a recommendation on the path to follow.

The IBM Cognitive Enterprise Data Platform (CEDP) provides access to enterprise data from across IBM for use in discovering insights and supporting AI in a hybrid cloud environment. CEDP is a data lake that spans public cloud, private cloud, and on-prem, and is built from various storage solutions along with data movement, transformation, indexing, and compute. Access to these data "systems" is granted to "users", meaning tools, applications, and individuals. The data can include extremely sensitive financial information, personal information, and the like. Access to the data is regulated globally, governing who may access it and how, especially for personal information, and can include geographic restrictions, blackout-period restrictions, and user-nationality restrictions.

Our initial approach to access control was based on defining a set of privileges for users and granting those privileges directly against resources. Once granted, the user always had access, regardless of external constraints such as the user's physical location. The solution included the use of Bluegroups, IAM, AccessHub, and a distributed policy management system unique to each resource. We are moving to a policy-based model of control that takes into account the attributes of the user, the data, and the platform, as sketched below. The new system leverages policy-based access control and can dynamically evaluate the access rules applied against those attributes. It also provides a centralized policy management approach that can be governed in real time.
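A minimal Python sketch of such a per-request policy evaluation over user, data, and platform attributes. The attribute names and the rules themselves are illustrative, not CEDP's actual policy set.

    # Attribute-based access evaluation per request (illustrative).
    from dataclasses import dataclass

    @dataclass
    class Request:
        user_location: str        # where the user is connecting from
        user_nationality: str
        data_classification: str  # e.g. "public", "confidential", "personal"
        data_residency: str       # region the data is bound to
        blackout: bool            # e.g. a financial quiet period is in force

    def evaluate(req: Request) -> bool:
        rules = [
            # Personal information must be accessed from its residency region.
            req.data_classification != "personal"
            or req.user_location == req.data_residency,
            # Confidential financial data is frozen during blackout periods.
            not (req.data_classification == "confidential" and req.blackout),
        ]
        return all(rules)  # every rule must pass for the request to proceed

    print(evaluate(Request("DE", "DE", "personal", "DE", blackout=False)))  # True
    print(evaluate(Request("US", "DE", "personal", "DE", blackout=False)))  # False: wrong region

Because the decision is computed at request time, revoking access requires only a policy change, not a sweep of standing grants across every resource.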

21:00-22:00 Session 48B
21:00
Surya Duggirala (IBM Cloud, United States)
Moss Uchida (IBM Cloud, United States)
Designing and Deploying High Performing Enterprise Applications with IBM Cloud Paks
PRESENTER: Surya Duggirala

ABSTRACT. IBM Cloud Paks are enterprise-ready, containerized software solutions that help move business applications seamlessly, using open technologies and with a focus on enterprise security. This session discusses how IBM Cloud Paks are designed with performance and scale in mind. It also reviews best practices and the performance characteristics of various enterprise applications deployed on public, private, and hybrid clouds through IBM Cloud Paks.