# Infosys Whitepaper
Title: Achieve complete automation with artificial intelligence and machine learning
Author: Infosys Limited
Format: PDF 1.6
---
Page: 1 / 8
---
WHITE PAPER
ACHIEVE COMPLETE AUTOMATION WITH ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING

Abstract

As agile models become more prominent in software development, testing teams must shift from slow manual testing to automated validation models that accelerate time to market. Currently, automation test suites are valid only for some releases, placing greater pressure on testing teams to revamp test suites so they can keep pace with frequent change requests. To address these challenges, artificial intelligence and machine learning (AI/ML) are emerging as viable alternatives to traditional automation test suites. This paper examines the existing challenges of traditional test automation. It also discusses five use cases and solutions to explain how AI/ML can resolve these challenges while providing complete and intelligent automation with little or no human intervention, enabling testing teams to become truly agile.
---
Page: 2 / 8
---
Introduction

Industry reports reveal that many enterprise initiatives aiming to completely automate quality assurance (QA) fail for various reasons, resulting in low motivation to adopt automation. It is, in fact, the challenges involved in automating QA that have prevented its evolution into a complete automation model. Despite these challenges, automation continues to be a popular initiative in today's digital world. Testing communities agree that a majority of validation processes are repetitive. While traditional automation typically checks whether things work as they are supposed to, new technologies like artificial intelligence and machine learning (AI/ML) can support the evolution of QA into a completely automated model that requires minimal or no human intervention.

Pain points of complete automation

Let us look at the most pertinent problems that lead to low automation adoption:

• Frequent requirement changes – While most applications are fluid and revised constantly, the corresponding automation test suite is not. Keeping up with changing requirements manually impedes complete automation. Moreover, maintaining automated test suites becomes increasingly complicated over time, particularly if there are frequent changes in the application under test (AUT).

• Mere scripting is not automation – Testing teams must evolve beyond traditional test automation that involves frequent manual script-writing.

• Inability to utilize reusable assets – It is possible to identify reusable components only after a few iterations of test release cycles. However, modularizing these in a manner that can be reused everywhere is a grueling task.

• Talent scarcity – Finding software development engineers in test (SDETs) with the right mix of technical skills and a QA mindset is a significant challenge.

QA teams today are looking for alternatives to the current slow, manual process of creating test scripts using existing methodologies. It is evident that intelligent automation (automation leveraging AI/ML) is the need of the hour.
---
Page: 3 / 8
---
How can enterprises achieve complete automation in testing?

Use cases and solutions for intelligent test automation

The key to achieving complete automation lies in using AI/ML as an automation lever instead of relegating it to scripting. Optimizing manual test cases using AI/ML is a good start. Helping the application self-learn and identify test suites with reusable assets can be a more advanced utility for automated test suite creation. Leveraging AI in test suite automation falls into two main categories – ‘automation test suite creation using various inputs’ and ‘automation test suite repository maintenance’. The following sections discuss various use cases for intelligent automation solutions under these two categories, which address the challenges of end-to-end automation.

A. Automation test suite creation

Use case 1: Optimizing a manual test suite

Testing teams typically have a large set of manual test cases for regression testing, written by many people over a period of time. Consequently, test cases overlap, which increases the burden on automation experts when creating the automation test suite. Moreover, as the test case suite grows larger, it becomes difficult to find unique test cases, leading to increased execution effort and cost.

Solution 1: Use a clustering approach to reduce effort and duplication

A clustering approach can be used to group similar manual test cases. This helps teams easily recognize identical test cases, thereby reducing the size of the regression suite without the risk of missed coverage. During automation, only the most optimized test cases are considered, significantly reducing effort and eliminating duplicates.

Use case 2: Converting manual test cases into automated test scripts

Test cases are recorded or written manually in different formats based on the software test lifecycle (STLC) model, which can be either agile or waterfall. Sometimes, testers record audio test cases instead of typing them out. They also use browser-based recorders to capture screen actions while testing.

Solution 2: Use natural language processing (NLP)

In the above use case, the execution steps and scenarios are clearly defined, after which the tester interprets the test cases, designs an automation framework and writes automation scripts. This entire process consumes an enormous amount of time and effort. With natural language processing (NLP) and pattern identification, manual test cases can be transformed into ready-to-execute automation scripts and, furthermore, reusable business process components can be easily identified. This occurs in three simple steps:

• Read – Using NLP to convert text into the automation suite.

• Review – Reviewing the automation suite generated.

• Reuse – Using partially supervised ML techniques and pattern discovery algorithms to identify and modularize reusable components that can be plugged in anywhere, anytime and for any relevant scenario.

All these steps can be implemented in a tool-agnostic manner until the penultimate stage. Testers can review the steps and add data and verification points that are learnt by the system. Eventually, these steps and verification points are used to generate automation scripts for the tool chosen by the test engineer (Selenium, UFT or Protractor) only at the final step.
In this use case, AI/ML along with a tool-agnostic framework helps automatically identify tool-agnostic automation steps and reusable business process components (BPCs). The automation test suite thus created is well-structured, with easy maintainability, reusability and traceability for all components. The solution slashes planning and scripting effort when compared to traditional mechanisms.
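To make Solution 1 concrete, the sketch below groups near-duplicate manual test cases by clustering their wording. It is a minimal illustration only, assuming scikit-learn is available and using made-up test case text; a real implementation would tune the vectorization and distance threshold to the organization's test suite.

```python
# Minimal sketch of Solution 1: grouping near-duplicate manual test cases
# with TF-IDF vectors and agglomerative clustering.
# scikit-learn >= 1.2 assumed (older versions use `affinity` instead of `metric`).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

test_cases = [
    "Login with valid credentials and verify the dashboard loads",
    "Verify dashboard loads after logging in with a valid user",
    "Attempt login with an invalid password and expect an error",
    "Export the monthly report to PDF and verify the file is created",
]

# Represent each test case as a TF-IDF vector of its wording.
vectors = TfidfVectorizer(stop_words="english").fit_transform(test_cases)

# Cluster by cosine distance; cases closer than the threshold land in one group.
clusters = AgglomerativeClustering(
    n_clusters=None, distance_threshold=0.8, metric="cosine", linkage="average"
).fit_predict(vectors.toarray())

for cluster_id in sorted(set(clusters)):
    members = [tc for tc, c in zip(test_cases, clusters) if c == cluster_id]
    print(f"Cluster {cluster_id}: keep one representative of {len(members)} case(s)")
    for tc in members:
        print("  -", tc)
```

Each cluster then needs only one representative case in the automated regression suite, which is how the duplication and execution effort are reduced.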
---
Page: 4 / 8
---
Use case 3: Achieving intelligent DevOps

Software projects following the DevOps model usually have a continuous integration (CI) pipeline. This is often enabled with auto-deploy options, test data, and API logs with their respective request-response data or application logs from the DevOps environment. While unit tests are available by default, building integration test cases requires additional effort.

Solution 3: Use AI/ML in DevOps automation

By leveraging AI/ML, DevOps systems gain analytics capabilities (becoming ‘intelligent DevOps’) in addition to transforming manual cases into automated scripts.

Fig 1: Achieving complete automation on a DevOps model with AI/ML – test automation in DevOps brings quality at every step

As shown in Fig 1, the key steps for automating a DevOps model are:

• Create virtual service tests based on request-response data logs that can auto-update/self-heal based on changes in the AUT.

• Continuously use diagnostic analytics to mine the massive data generated, proactively identify failures in infrastructure/code and suggest recovery techniques.

• Leverage the ability to analyze report files and identify failure causes/sequences or reusable business process components through pattern recognition.

• Enable automated script generation in the tool of choice using a tool-agnostic library.

• Analyze past data and patterns to dynamically decide what tests must be run for different teams and products for subsequent application builds (a minimal sketch of this idea follows the list).

• Correlate production log data with past code change data to determine the risk levels of failure in different application modules, thereby optimizing the DevOps process.
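The sketch below illustrates the last two bullets in a simplified form: it combines which modules a build changed with each module's historical failure rate to rank the tests worth running. The module names, history, and test-to-module mapping are invented for the example; in practice they would be mined from CI reports and commit diffs.

```python
# Illustrative sketch: pick regression tests for the next build by combining
# which modules changed with each module's historical failure rate.
# Module names, history, and the mapping below are made up for the example.
from collections import defaultdict

# Historical run data: (module, failed?) pairs mined from past CI reports.
history = [
    ("payments", True), ("payments", False), ("payments", True),
    ("search", False), ("search", False), ("profile", True),
]

# Tests mapped to the modules they exercise.
test_to_modules = {
    "test_checkout_flow": {"payments"},
    "test_refund": {"payments", "profile"},
    "test_search_filters": {"search"},
}

changed_modules = {"payments"}          # e.g. parsed from the commit diff

# Failure rate per module from the mined history.
counts = defaultdict(lambda: [0, 0])    # module -> [failures, runs]
for module, failed in history:
    counts[module][1] += 1
    if failed:
        counts[module][0] += 1
risk = {m: f / runs for m, (f, runs) in counts.items()}

# Score each test: only tests touching changed modules are considered,
# ranked by the riskiest module they cover.
scores = {
    name: max(risk.get(m, 0.0) for m in modules)
    for name, modules in test_to_modules.items()
    if modules & changed_modules
}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"run {name} (risk score {score:.2f})")
```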
---
Page: 5 / 8
---
B. Automation test suite maintenance

Use case 4: Identifying the most critical paths in an AUT

When faced with short regression cycles or ad hoc testing requests, QA teams are expected to cover only the most important scenarios. They rely on system knowledge or instinct to identify critical scenarios, which is neither logical nor prudent from a test coverage perspective. Thus, the inability to logically and accurately identify important test scenarios from the automation test suite is a major challenge, especially when short timelines are involved.

Solution 4: Use reinforcement learning to determine critical application flows

If the AUT is ready for validation (even when no test cases are available in any form), a tester can use a reinforcement learning-based system to identify the critical paths in an AUT.

Fig 2: How reinforcement learning algorithms in testing identify critical flows in an application
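As a toy illustration of Solution 4, the sketch below runs tabular Q-learning over an assumed page-transition model of an application, with higher rewards attached to business-critical actions; the greedy rollout at the end is the learned "critical path". The states, actions, and rewards are entirely illustrative, and a real system would build this model by crawling the AUT.

```python
# Toy sketch of Solution 4: tabular Q-learning over an assumed page-transition
# model of the AUT. States, actions, and rewards below are illustrative only;
# in practice they would come from crawling the application.
import random

# state -> {action: (next_state, reward)}; higher reward marks business-critical steps.
app_model = {
    "home":    {"open_search": ("search", 1), "open_cart": ("cart", 2)},
    "search":  {"add_to_cart": ("cart", 3), "back": ("home", 0)},
    "cart":    {"checkout": ("payment", 5), "back": ("home", 0)},
    "payment": {"confirm": ("done", 10)},
    "done":    {},
}

q = {(s, a): 0.0 for s, acts in app_model.items() for a in acts}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(2000):                       # training episodes
    state = "home"
    while app_model[state]:
        actions = list(app_model[state])
        if random.random() < epsilon:       # explore
            action = random.choice(actions)
        else:                               # exploit current estimates
            action = max(actions, key=lambda a: q[(state, a)])
        nxt, reward = app_model[state][action]
        future = max((q[(nxt, a)] for a in app_model[nxt]), default=0.0)
        q[(state, action)] += alpha * (reward + gamma * future - q[(state, action)])
        state = nxt

# Greedy rollout = the learned "critical path" through the AUT.
state, path = "home", []
while app_model[state]:
    action = max(app_model[state], key=lambda a: q[(state, a)])
    path.append(action)
    state = app_model[state][action][0]
# Prints the highest-value flow, e.g. open_search -> add_to_cart -> checkout -> confirm
print(" -> ".join(path))
```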
---
Page: 6 / 8
---
Use case 5: Reworking the regression test suite due to frequent changes in the AUT

If a testing team conducts only selective updates and does not update the automation test suite completely, the whole regression suite becomes unwieldy. An AI-based solution to maintain the automation and regression test suite is useful in the face of ambiguous requirements, frequent changes in AUTs and short testing cycles, all of which leave little scope for timely test suite updates.

Solution 5: Deploy a self-healing solution for easier test suite maintenance

Maintaining the automation test suite to keep up with changing requirements, releases and application modifications requires substantial effort and time. A self-healing/self-adjusting automation test suite maintenance solution follows a series of steps to address this challenge: identifying changes between releases in an AUT, assessing impact, automatically updating test scripts, and publishing regular reports. As shown in Fig 3, such a solution can identify changes in an AUT for the current release, pinpoint the impacted test scripts and recommend changes to be implemented in the automation test suite.

The five use cases and solutions discussed above can be readily implemented to immediately enhance an enterprise's test suite automation process, no matter its stage of test maturity.
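One narrow ingredient of the self-healing idea in Solution 5 is a locator that repairs itself when the AUT changes. The sketch below, using Selenium's Python bindings, tries a primary locator and falls back to alternative attributes, reporting the substitution so the script can be updated; the URL and locators are placeholders, and a full solution would also learn and rank the fallbacks.

```python
# Narrow sketch of the "self-healing" idea from Solution 5: if a primary
# locator breaks after an AUT change, fall back to alternative attributes
# and report the substitution so the script can be updated.
# Selenium's Python bindings are assumed; the URL and locators are illustrative.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

def find_with_healing(driver, candidates):
    """Try each (by, value) locator in order; log when a fallback is used."""
    for index, (by, value) in enumerate(candidates):
        try:
            element = driver.find_element(by, value)
            if index > 0:
                print(f"healed: primary locator failed, used {by}={value}")
            return element
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"no candidate locator matched: {candidates}")

driver = webdriver.Chrome()
driver.get("https://example.org/login")        # placeholder URL
submit = find_with_healing(driver, [
    (By.ID, "submit-btn"),                     # locator from the last release
    (By.NAME, "submit"),                       # fallbacks learnt/recorded earlier
    (By.XPATH, "//button[normalize-space()='Sign in']"),
])
submit.click()
driver.quit()
```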
---
Page: 7 / 8
---
Conclusion

Complete automation, i.e., an end-to-end test automation solution that requires minimal or no human intervention, is the goal of QA organizations. To achieve this, QA teams should stop viewing automation test suites as static entities and start treating them as dynamic ones with a constant influx of changes, designing solutions accordingly. New technologies like AI/ML can help QA organizations adopt end-to-end automation models. For instance, AI can drive core testing, whether this involves maintaining test scripts, creating automation test suites, optimizing test cases, or converting test cases to automated ones. AI can also help identify components for reusability and self-healing when required, thereby slashing cost, time and effort. As agile and DevOps become a mandate for software development, QA teams must move beyond manual testing and traditional automation strategies towards AI/ML-based testing in order to proactively improve software quality and support self-healing test automation.
---
Page: 8 / 8
---
About the Authors

Suman Boddu, Senior Technical Manager, Infosys
Akanksha Rajendra Singh, Consultant, Infosys

© 2020 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/or any named intellectual property rights holders under this document.

For more information, contact askus@infosys.com | Infosys.com | NYSE: INFY | Stay Connected
***
# Infosys Whitepaper
Title: Enabling QA through Anaplan Model testing
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 8
---
WHITE PAPER
ENABLING QA THROUGH ANAPLAN MODEL TESTING

Abstract

Anaplan is a cloud-based platform that can create various business models to meet different organizational planning needs. However, the test strategy for Anaplan varies depending on the application platform, cross-track dependencies and the type of testing. This white paper examines the key best practices that will help organizations benefit from seamless planning through successful Anaplan testing.

- Mangala Jagadish Rao
- Harshada Nayan Tendulkar
---
Page: 2 / 8
---
What is Anaplan?

Anaplan is a cloud-based operational planning and business performance platform that allows organizations to analyze, model, plan, forecast, and report on business performance. Once an enterprise customer uploads data into the Anaplan cloud, business users can instantly organize and analyze disparate sets of enterprise data across different business areas such as finance, human resources, sales, and forecasting. The Anaplan platform provides users with familiar Excel-style functionality that they can use to make data-driven decisions which would otherwise require a data expert. Anaplan also includes modules for workforce planning, quota planning, commission calculation, project planning, demand planning, budgeting, forecasting, financial consolidation, and profitability modelling.
---
Page: 3 / 8
---
7 best practices for efficient Anaplan testing

1. Understand the domain

As Anaplan is a platform used for enterprise-level sales planning across various business functions, its actual users are organizational-level planners who have an in-depth understanding of their domains. Thus, to certify the quality of the Anaplan model, QA personnel must adopt the perspective of end users, who may be heads of sales compensation or sales operations departments, sales managers, sales agents, etc.

2. Track Anaplan data entry points

One of the features that makes Anaplan a popular choice is its wide range of in-built and third-party data integration points, which can be used to easily load disparate data sources into a single model. For most business users, data resides across many granular levels and cannot be handled reliably with traditional Excel spreadsheets. Anaplan offers a scalable option that replaces Excel spreadsheets with a cloud-based platform to extract, load and transform data at any granular level from different complex systems while ensuring data integrity.

It is essential for QA teams to understand the various upstream systems from which data gets extracted, transformed and loaded into the Anaplan models. Such data also needs to be checked for integrity.

Fig 2: Representation of Anaplan data integration
---
Page: 4 / 8
---
3. Ensure quality test data management

The quality of test data is a deciding factor for testing coverage and completeness. Hence, the right combination of test data for QA will optimize testing effort and cost. Since Anaplan models cater to the financial, sales, marketing, and forecasting domains of an organization, it is essential to verify the accuracy of the underlying data. Failure to ensure this could result in steep losses amounting to millions of dollars for the organization. Thus, it is recommended that QA teams dedicate a complete sprint/cycle to test the accuracy of data being ingested by the models.

The optimal test data management strategy for testing data in an Anaplan model involves two steps (a minimal reconciliation sketch follows this list):

• Reconciling data between the database and the data hub – Data from the source database or data files received from the business teams of upstream production systems should be reconciled with the data hub. This ensures that data from the source is loaded correctly into the hub. For hierarchical data, it is important to verify that data is accurately rolled up and that periodic refresh jobs are validated to ensure that only the latest data is sent to the hub according to the schedule.

• Loading correct data from the hub into the model – Once the correct data is loaded into the hub, testing moves on to validate whether the correct data is loaded from the hub into the actual Anaplan model. This ensures that the right modules are referenced from the hub in order to select the right data set. It also helps validate the formulas used on the hub data to generate derived data. For every model being tested, it is important to first identify the core hierarchy list that forms the crux of the model and ensure that the data is validated across every level of the hierarchy, in addition to validating the roll-up of numbers through the hierarchy or the cascade of numbers down the hierarchy as needed.
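The sketch below shows one simple way the first reconciliation step could be scripted: compare a source-system extract with the data hub extract on row counts, keys, and amounts rolled up by a hierarchy level. It assumes both extracts are available as CSV files and uses pandas; the file and column names are invented for the example.

```python
# Minimal sketch of the source-to-hub reconciliation step: compare a source
# extract with the data hub extract on row counts, keys, and rolled-up amounts.
# File names and column names are assumptions for the example.
import pandas as pd

source = pd.read_csv("source_system_extract.csv")   # e.g. upstream sales data
hub = pd.read_csv("anaplan_data_hub_extract.csv")

# 1. Record counts should match.
assert len(source) == len(hub), f"row count mismatch: {len(source)} vs {len(hub)}"

# 2. No keys should be dropped or invented on the way into the hub.
missing_in_hub = set(source["account_id"]) - set(hub["account_id"])
extra_in_hub = set(hub["account_id"]) - set(source["account_id"])
assert not missing_in_hub and not extra_in_hub, (missing_in_hub, extra_in_hub)

# 3. Amounts rolled up by hierarchy level (here: region) should reconcile.
source_rollup = source.groupby("region")["amount"].sum()
hub_rollup = hub.groupby("region")["amount"].sum()
diff = (source_rollup - hub_rollup).abs()
assert (diff < 0.01).all(), f"roll-up mismatch by region:\n{diff[diff >= 0.01]}"

print("source-to-hub reconciliation passed")
```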
---
Page: 5 / 8
---
4. Validate the cornerstones

It is a good practice to optimize test quality to cover cornerstone business scenarios that may otherwise be missed. Some recommendations for testing Anaplan models are listed below:

• Monitor the periodic data refresh schedules for the different list hierarchies used in the model and validate that the data is refreshed on time with the latest production hierarchy.

• As every model may have different user roles with selective access to dashboards, ensure that functional test cases with correctly configured user roles are included. This upholds the security of the model, since some dashboards or hierarchical data should be visible only to certain pre-defined user roles or access levels.

• The Anaplan model involves many data fields, some of which are derived from others based on one or more business conditions. Thus, it is advisable to include test scenarios around these conditions, such as testing warning messages that should be displayed, or testing whether cells are conditionally formatted based on user inputs that either pass or fail the business conditions.

5. Use automation tools for faster validation

IT methodologies are evolving from waterfall to agile to DevOps models, leading to more releases per year, shorter implementation cycle times and faster time-to-market. Thus, the test strategy of QA teams should also evolve beyond the waterfall model. Incorporating automation in the test strategy helps keep pace with shorter cycle times without compromising on test quality. Anaplan implementation programs leverage agile and have a wide range of testing requirements for data, integration and end-to-end testing that must be completed on time. However, there are times when it becomes challenging to deliver maximum test coverage within each sprint because testing Anaplan involves testing each scenario with multiple datasets. Thus, it is useful to explore options for automating Anaplan model testing to minimize delays during sprint testing and ensure timely test delivery. Some of these options include using simple Excel-based formula worksheets and Excel macros along with open source automation tools such as Selenium integrated with Jenkins. This helps generate automated scripts that can be run periodically to validate certain functionalities in the Anaplan model for multiple datasets. These options are further explained below:

• Reusable Excel worksheets – This involves a one-time activity to recreate the dashboard forms as simple tables in Excel worksheets. The data fields in Anaplan can be classified into three types: fields that require user input; fields where data is populated from various data sources within Anaplan or other systems; and fields where data is derived based on defined calculation logic. Here, the formula used to derive the data value is embedded into the Excel cell so that the derived value gets calculated automatically after data is entered in the first two kinds of fields. Using such worksheets accelerates test validation and promotes reusability of the same Excel sheet to validate calculation accuracy for multiple data sets, which is important for maintaining test quality.

• Excel macros – To test the relevant formula or calculation logic, replicate the Anaplan dashboard using Excel macros. A macro can be reused for multiple data sets, thereby accelerating testing and enhancing test coverage.

• Open source tools – Open source tools like Selenium can be used to create automation scripts either for a specific functionality within the model or for a specific dashboard (a code-based sketch of calculation validation follows this section). However, using Selenium for Anaplan automation comes with certain challenges:

› Automating the end-to-end scenario may not be feasible since Anaplan requires switching between multiple user roles to complete the end-to-end flow.

› The Anaplan application undergoes frequent changes from the Anaplan platform, while changes in the model build require constant changes to the scripts.

› Some data validation in Anaplan may require referencing other data sets and applying calculation logic, which may make the automation code very complex and cause delays in running the scripts.

6. Ensure thorough data validation

Anaplan provides a secure platform to perform strategic planning across various domains. It provides flexibility and consistency when handling complex information from distinct data sources across various departments within the same organization. Identifying the correct underlying data is crucial for successful quality assurance of business processes using the Anaplan model. There are two key approaches when testing data in Anaplan:

• User access level – Business process requirements in some Anaplan models allow only end users with certain access levels to view data and use it for planning. For instance, a multi-region sales planning model will include sales planners from different sales regions as end users. However, users should be allowed to view only the sales, revenue and other KPIs pertaining to their own region, as it would be a security breach to disclose the KPIs of other sales regions.

• Accuracy of data integrated from various systems – When the data being tested pertains to dollar amounts, for example sales revenue, it is critical to thoroughly reconcile the data against the source, because a variation of a few dollars could lead to larger inaccuracies or discrepancies when the same data is rolled up the hierarchy or used to calculate another data field.

Since most Anaplan models contain business-critical numbers for financial reporting, it is important to run thorough tests to ensure accuracy.
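As an illustrative alternative to the Excel worksheet approach, the same idea can be expressed in code: re-implement a derived-field formula once and check it against the values exported from the Anaplan dashboard for several datasets. The commission formula, field names, and figures below are all assumptions made for the example; pytest is assumed as the runner.

```python
# Illustrative sketch: re-implement a dashboard's derived-field formula once in
# code and check it against values exported from the Anaplan model for several
# datasets. The commission formula, field names, and figures are assumptions.
import pytest

def expected_commission(sales, quota, rate):
    """Assumed calculation logic: pay the rate on sales, uplifted 1.5x above quota."""
    base = min(sales, quota) * rate
    accelerator = max(sales - quota, 0) * rate * 1.5
    return round(base + accelerator, 2)

# (sales, quota, rate, value shown on the Anaplan dashboard export)
DATASETS = [
    (80_000, 100_000, 0.05, 4_000.00),
    (120_000, 100_000, 0.05, 6_500.00),
    (100_000, 100_000, 0.04, 4_000.00),
]

@pytest.mark.parametrize("sales,quota,rate,dashboard_value", DATASETS)
def test_commission_matches_dashboard(sales, quota, rate, dashboard_value):
    assert expected_commission(sales, quota, rate) == pytest.approx(dashboard_value)
```

Because the check is parameterized, adding a new dataset is a one-line change, which mirrors the reusability benefit of the Excel worksheets described above.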
---
Page: 6 / 8
---
7. Integration testing

Integration testing should be an integral part of the testing strategy for any Anaplan application. Typically, there are multiple integration points that may have to be tested in an Anaplan implementation program owing to:

• Different data integration points – Disparate data sets are imported into the data hub and then into the Anaplan models using different kinds of integration options such as flat files, Anaplan Connect, Informatica and in-built data import.

• Different Anaplan models – There may be more than one Anaplan model implemented for a medium to large-scale organization for different kinds of planning. These need to be integrated with each other for smooth data flow. For instance, the output of a model built exclusively for sales forecasting may be an important parameter for another model that deals with sales planning across an organization's regional sales territories. Thus, besides testing the integration points between these models, it is advisable to have dedicated end-to-end cycle/sprint testing with scenarios that span all these models and the integrated systems.

• Different data sets – The periodic refresh of data sets used across Anaplan models happens through Informatica, manual refresh, Tidal jobs, etc. QA teams should understand how each data set is refreshed, identify the relevant job names and test them to ensure that the latest active hierarchy is periodically refreshed and loaded into the model. This eliminates inaccuracies arising from data redundancies owing to inactivation or changes in the data structure in upstream systems.

Anaplan implementation programs can involve either standalone or inter-linked models. Irrespective of the type of implementation, an approach that follows the 7 best practices outlined in this paper will help QA teams optimize their strategy for Anaplan test projects.
---
Page: 7 / 8
---
Conclusion

Anaplan's cloud-based, enterprise-wide and connected platform can help global organizations improve their planning processes across various business sectors. The Anaplan model is a simple, integrated solution that enables informed decision-making along with accelerated and effective planning. The strategic approach is one that leverages domain knowledge, test data management, automation, data validation, and integration testing, to name a few. The Infosys 7-step approach to effective Anaplan testing is based on our extensive experience in implementing Anaplan programs. It helps testing teams adopt the best strategy for QA across various business functions, thereby ensuring smooth business operations.
---
Page: 8 / 8
---
© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/or any named intellectual property rights holders under this document.

For more information, contact askus@infosys.com | Infosys.com | NYSE: INFY | Stay Connected
***
# Infosys Whitepaper
Title: Self-service Testing - The antidote for stressed testing teams
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 8
---
VIEW POINT
SELF-SERVICE TESTING: THE ANTIDOTE FOR STRESSED TESTING TEAMS

Abstract

Testing is a key element in the software development lifecycle that ensures the delivery of quality products. As testing has matured over the years in terms of processes and tools, a recent trend from clients is the need for ‘self-service testing.’ This point of view provides insights into client expectations and the manner in which testing is being transformed from a ‘process and tools’ mode to a ‘self-service’ mode.

Today, organizations and testing partners prefer one-time investments to build self-service platforms, and expect them to be operational throughout the IT journey at the lowest possible maintenance cost.
---
Page: 2 / 8
---
Introduction

New testing processes and tools are developed continuously to improve the quality of software. Today, the IT industry is gaining momentum in agile delivery and development and operations (DevOps), which is creating new possibilities by integrating development, test, and operations teams. To keep pace with these changes, testing processes and tools need to be transformed so that testing platforms are simple and accessible to all stakeholders.
---
Page: 3 / 8
---
Factors driving self-service testing

The following sections highlight the challenges, opportunities, industry trends, and testing tools that are key to self-service testing.

Challenges

Organizations view testing as an integrated activity of product development and tend to shorten the entire development and deployment duration. This puts pressure on testing teams to work in parallel with design, development, and deployment. At the same time, constant requirement changes bring a significant amount of risk and rework before products go live.

Opportunities

Agile methodologies and DevOps drive synergies across business, development, test, and deployment teams. They provide good opportunities to enhance testing processes, ensuring that there is no information gap and that lead times are reduced. Open source tools and technologies are another opportunity, providing more options to build automation frameworks.

Industry trends

Agile software development: Agile software development refers to a group of software development methodologies based on iterative development, where requirements and solutions evolve through collaboration between self-organizing, cross-functional teams. Scrum is the most popular agile methodology.

DevOps: DevOps involves coordinating software development, technology operations, and quality assurance to make these three, sometimes disparate, entities work together seamlessly. It can streamline business processes and add value by eliminating redundancies.

Testing tools

JavaScript and Selenium combined with vendor tools feature in the “12 Test Automation Trends for 2016” published on https://www.joecolantonio.com/; they are driving test automation and will continue to do so.
---
Page: 4 / 8
---
Self-service testing platform

The testing challenges, opportunities, and industry trends explained above are encouraging test practitioners to find innovation in automation. As a result, the industry is headed towards building a platform comprising testing tools, custom frameworks, and scripts to connect various applications and environments. The diagram below illustrates the concept of self-service testing.

Common platform for unit, system integration, and acceptance testing

The platform provides an interface for developers, business, and test teams to provide input data. The input data can be steps or controls that navigate through the user interface (UI) to complete transactions, simple actions and data for a web service call, or a mapping sheet to verify a transformation. The core of the platform is an engine comprising various automation tools, custom scripts based on project needs, and a Selenium/Java framework to expose them through a UI or as API services. This can be further integrated with mechanisms like Jenkins and Maven to automate code deployment and invoke automated test scripts. A sketch of the API-service idea follows this section.

The platform comes with an additional environment configuration panel to perform testing in DevOps, quality assurance (QA) and acceptance testing (AT) environments. A user-friendly dashboard is provided to verify the status and results of the requests raised.
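One possible shape for "expose the engine as an API service" is sketched below: a small HTTP endpoint accepts a suite name and target environment and hands off to the underlying test runner. This is only an illustration of the concept; the whitepaper's platform is built on a Selenium/Java engine, whereas this sketch uses Flask and a placeholder pytest command, and the endpoint, environment names, and runner invocation are all assumptions.

```python
# One possible shape for exposing the self-service engine as an API service:
# a small Flask endpoint that accepts a suite name plus target environment
# and launches the underlying automation run. The framework, runner command,
# and environment names are assumptions for illustration only.
import os
import subprocess
from flask import Flask, jsonify, request

app = Flask(__name__)
ALLOWED_ENVS = {"devops", "qa", "at"}          # environment configuration panel

@app.route("/api/test-runs", methods=["POST"])
def start_test_run():
    payload = request.get_json(force=True)
    suite = payload.get("suite", "regression")
    env = payload.get("environment", "qa")
    if env not in ALLOWED_ENVS:
        return jsonify({"error": f"unknown environment '{env}'"}), 400

    # Hand off to the engine; here a placeholder CLI call to a test runner.
    process = subprocess.Popen(
        ["python", "-m", "pytest", f"suites/{suite}", "--junitxml=report.xml"],
        env={**os.environ, "TARGET_ENV": env},
    )
    return jsonify({"suite": suite, "environment": env, "pid": process.pid}), 202

if __name__ == "__main__":
    app.run(port=8080)
```

A dashboard or CI job can then poll for the generated report, which corresponds to the status view described above.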
---
Page: 5 / 8
---
Benefits

Self-service testing brings the following benefits to software delivery:

• Ready-to-use platform: Minimal technical knowledge is required; business knowledge is sufficient. Simple inputs are enough to perform business transactions, data transformation, and validation.

• Single framework to cater to all testing needs: The self-service testing platform can be used and reused by business and development teams, and for post go-live testing as well. Different environments can be configured and selected during execution.

• Workflow automation: Workflow automation executes batch jobs, performs outcome validation, and creates reports.

• Continuous integration: Continuous integration and deployment invoke the validation process as soon as code is checked in to an environment.
---
Page: 6 / 8
---
Success stories

• A self-service testing tool implemented for major retailers: A Selenium framework was used to integrate reports, download data, extract data from the database, and perform validation. The navigation steps in the predefined reports were saved as reusable test scripts.

• Custom SOA testing framework: A user-friendly Excel sheet is used to write actions as test steps, forming the central test case repository, along with a data dictionary sheet to key in the input data. A check-box option executes only the selected test cases. A minimal sketch of this Excel-driven approach follows below.
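The sketch below illustrates the Excel-driven idea: a workbook sheet acts as the test case repository, a column flags which cases to execute, and each step's action keyword is dispatched to a function. It assumes openpyxl, a workbook named `test_case_repository.xlsx`, and a four-column layout; the action functions are placeholders for real service calls.

```python
# Sketch of the Excel-driven framework described above: a workbook sheet acts
# as the test case repository, a column flags which cases to execute, and each
# step's action keyword is dispatched to a Python function.
# openpyxl and the column layout (case id, execute?, action, argument) are assumptions.
from openpyxl import load_workbook

def call_service(endpoint):
    print(f"calling service {endpoint}")        # placeholder for a real SOA call

def verify_response_contains(text):
    print(f"verifying response contains '{text}'")

ACTIONS = {
    "call_service": call_service,
    "verify_response_contains": verify_response_contains,
}

workbook = load_workbook("test_case_repository.xlsx")
sheet = workbook["TestCases"]

# Row layout assumed: A=case id, B=execute flag (Y/N), C=action keyword, D=argument
for case_id, execute, action, argument in sheet.iter_rows(min_row=2, values_only=True):
    if not execute or str(execute).strip().upper() != "Y":   # honour the check-box column
        continue
    print(f"-- executing {case_id} --")
    ACTIONS[action](argument)
```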
---
Page: 7 / 8
---
Conclusion

As various stakeholders work together to deliver business value within a reasonable time, a single, common testing platform becomes essential to cater to all phases of testing. This prevents duplication across the various types and phases of testing. A self-service testing platform provides the flexibility to adapt to changes and enhancements, and it fulfills testing needs across the software lifecycle while maintaining the uniqueness of each type of testing.

About the Author

Sri Rama Krishnamurthi

Sri Rama Krishnamurthi is a Senior Project Manager with Infosys and has 16 years of software testing experience in various domains such as geographic information systems (GIS), finance, retail, and product testing. He has been practicing business intelligence (BI) testing for more than nine years. Test data management (TDM) and master data management (MDM) testing are other areas of his expertise.

References:
• https://saucelabs.com/resources/webinars/test-automation-trends-for-2016-and-beyond
• https://www.cprime.com/resources/what-is-agile-what-is-scrum/
---
Page: 8 / 8
---
© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/or any named intellectual property rights holders under this document.

For more information, contact askus@infosys.com | Infosys.com | NYSE: INFY | Stay Connected
***
# Infosys Whitepaper
Title: Assuring the digital utilities transformation
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 8
---
PERSPECTIVE
ASSURING THE DIGITAL UTILITIES TRANSFORMATION

Gaurav Kalia, Client Solution Manager
---
Page: 2 / 8
---
1. Latest trends in utilities

With a multitude of industries embracing the digital revolution, the utilities industry is also rapidly moving towards this transformation. A few key predictions that give an insight into the direction in which the utilities industry is heading in the coming years are listed below:

• By 2018, 70 percent of the utilities industry will launch major digital transformation initiatives that address at least one of these three areas: omni-experience, operating model, or information

• Data and analytics will play a key role in driving greater results from energy efficiency programs

• By 2019, 75 percent of utilities will deploy a comprehensive, risk-based cyber security strategy, representing a maturation from a compliance focus to a security focus

• Utility IT services spending is expected to grow at a compound annual growth rate (CAGR) of 5.9 percent from 2014 to 2018

• US$65 million will be spent by utilities on gamified applications by 2016 to engage consumers

• In addition, utilities will spend US$57.6 billion on smart grid as-a-service from 2014 to 2023

• About 624 million customers worldwide will use social media to engage with utilities by 2020

• 75 percent of utilities will rely on managed services and industry cloud by 2019 to predict asset failures or recommend solutions; however, they will retain control of asset optimization

• 50 percent of utilities in 2019 will spend five percent or more of their capital expenditure (CapEx) on operational technologies and the Internet of Things (IoT) to optimize distributed energy resources, field services, and asset operations
---
Page: 3 / 8
---
• To compete in redesigned markets and support new business models, 45 percent of utilities will invest in a new customer experience solution and 20 percent in a new billing system by 2017

• Forced by extreme weather events in 2017, 75 percent of utilities will make new IT investments to predict outages, reduce their duration by 5–10 percent, and improve customer communications

• Internet sales are rising by 20 percent; providing reliable service at peak loads is essential for businesses

• Over 15 percent of Internet users worldwide are physically challenged, which brings in the need for more interactive websites

• According to the International Data Corporation's (IDC) quarterly mobile phone tracker report, vendors shipped 472 million smartphones in 2011 compared to about 305 million units in 2010, and shipments touched 982 million by the end of 2015. On-demand access to the digital world means that this segment of customers has very different expectations

• They learn through collaboration and networks – the average Facebook user spends 55 minutes on the site daily

• They expect options and make decisions based on peer recommendations – 78 percent of consumers say they trust peer recommendations compared to the 14 percent who trust advertisements

• They constantly give their opinions and review products, services, and brands online. There are over 1,500 blog posts every 60 seconds, and 34 percent of bloggers post opinions about different products and brands
---
Page: 4 / 8
---
2. What do the above trends suggest?

From the above trends, we can understand why digital transformation has become critical and why the IT landscape must adapt accordingly:

• To move with technological advancements

• To sustain a competitive environment

• For an enhanced customer service experience

• For better business progression

The key business drivers pushing the utility market towards digital transformation are:

Competitive environment

We understand from the trends that technology is advancing and there is an implicit need to use new-age, innovative knowledge. For example:

• Electric utility companies are using smart meters to enable two-way communication and reduce human intervention (removal of call centers)

• Water and wastewater utility companies are looking to deploy geospatial information systems (GIS) to increase efficiency during the sampling, routing, and analyzing phases of their supply chain

• Business intelligence (BI) analytics reporting solutions are being implemented to enable enterprises to make informed decisions

• Other key trends are the adoption of smart grid technology and intelligent devices in the grid – machine-to-machine (M2M), home area network (HAN), smart home, electric vehicles (EV), and cyber security

Customer experience

Digital transformation signifies an enhanced customer experience. Evidently, with technology advancing constantly, we see faster and more reliable websites with better user experiences. This is because of the popularity of e-commerce applications and packages across all the leading sectors. Moreover, social media has the popularity and potential to change the way consumers buy utility products. This gives utility companies a direction to add new value-added services and social networking tools to their websites.

Regulatory obligations

As utility companies comply with various regulatory obligations (environmental and non-environmental), new policies and standards keep emerging. The lack of tangible systems is driving clients towards digital transformation.

Operational efficiency

Utility companies will be looking towards better asset management and seamlessly integrated systems so that real-time information on assets is available at all times. This applies to both energy and water utility companies. Lastly, since most utility companies are using older systems built on outdated platforms and technologies, the problems of lack of support and high maintenance costs can be addressed using newer technologies.

This clearly explains why digital transformation has become the need of the hour for the utility sector as well.
---
Page: 5 / 8
---
---
Page: 6 / 8
---
3. Key quality assurance (QA) focus areas in digital transformation programs

After understanding future trends, the utility market must comprehend the urgency of digital transformation to keep up with advancing developments. Let us look at the aspects that require special attention during QA in digital transformation programs, organized as challenges and the corresponding QA focus areas.

Agility

With the digital revolution and constantly evolving technology, we see a myriad of changes and the need to pilot newer, adaptable strategies to cater to the needs of a fast-growing, agile atmosphere. The necessity is to encompass the entire testing life cycle, right from requirement analysis to reporting. A well-defined, agile QA strategy is required during such digital transformation programs to cope with changing requirements.

Exploding data and reporting

In our grandparents' generation, inputs were manageable. In this century, however, there are terabytes of customer data to manage. With the very real possibility of further increases, maintaining and securing consumer databases is an unstated challenge. With a change in infrastructure, we have to deal with data migration and, more importantly, data migration without business impact. Robust and proven data migration testing services are required during such migration activities.

Performance and security

The increase in online transactions, the advent of big data and the multitude of interconnected applications and devices pose a challenge for application performance and data privacy. When a customer performs any kind of transaction over a web-based e-commerce site, he or she expects convenience and, most importantly, security. The performance and speed of a website matter because nobody wants a slow site in these fast times. This brings in the need for a special focus on performance and security testing of these applications.

Mobility

Given the anytime-anywhere nature of today's customer, it is important that flexible aspects of testing are devised for different device configurations. Mobility brings challenges similar to the performance and security challenges generally seen in web-based applications. Mobility testing tools and solutions will be of great importance during such initiatives.

Seamless customer experience

The ultimate goal of a seamless customer experience for any company is to make business easy for its customers. It is about finding new ways to engage with the audience in the right way. Today's customer is looking for real-time and expert interaction, which is possible with interactive websites. Usability and functional validation will be key focus areas while testing a website that has new features such as live chat, gamification and online surveys.
---
Page: 7 / 8
---
Need for skilled resources

The increase in challenges demands skilled testing staff who are innovative and experienced in the utilities domain with niche skills. The focus is on a continuous learning attitude and out-of-the-box thinking. Experienced QA resources with domain expertise will be required to support such large digital transformation programs.

Limited budget

Capital is the prime requirement once the necessity of modernization is understood. Clearly, we all want to gain more with less investment. Therefore, the need of the hour is test automation and innovative ideas that ensure cost effectiveness. Innovative test automation solutions will be the answer to this important yet often unspoken question of capital.
---
Page: 8 / 8
---
4. Conclusion

Digital transformation is urgent and extremely significant for utility companies if they are to move with the fast-changing times and stay a step ahead of the competition. While doing so, quality assurance (QA) is a major aspect, and it is important to devise solutions that keep pace with the constantly changing environment. This can be achieved by modernizing the infrastructure and maintaining a continuous improvement attitude.

© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/or any named intellectual property rights holders under this document.

For more information, contact askus@infosys.com | Infosys.com | NYSE: INFY | Stay Connected
***
# Infosys Whitepaper
Title: Best practices to ensure seamless cyber security testing
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 4
---
VIEW POINT
BEST PRACTICES TO ENSURE SEAMLESS CYBER SECURITY TESTING

Abstract

In a post-COVID-19 world, the need to become digitally enabled is more pressing than ever before. Enterprises are accelerating digital strategies and omni-channel transformation projects. But while they expand their digital footprint to serve customers and gain competitive advantage, the number and extent of exposures to external threats also increase exponentially. This is due to the many moving parts in the technology stack such as cloud, big data, legacy modernization, and microservices. This paper looks at the security vulnerabilities in open systems interconnection (OSI) layers and explains the best practices for embedding cyber security testing seamlessly into organizations.
---
Page: 2 / 4
---
Introduction

Open systems interconnection (OSI) comprises several layers, each of which has its own services and protocols. These can be used by hackers and attackers to compromise a system through different types of attacks.

Fig 1: Points of vulnerability across OSI layers

• Application layer (file transfer protocol, simple mail transfer protocol, Domain Name System) – SQL injection, cross-site scripting attacks (persistent and non-persistent), cross-site request forgery, cookie poisoning

• Presentation layer (data representation, encryption and decryption) – SSL attacks, HTTP tunnel attacks

• Session layer (establishing session communications) – session hijacking, sequence prediction attacks, authentication attacks

• Transport layer (TCP, UDP, SSL, TLS protocols) – port scanning, ping flood and Distributed Denial-of-Service (DDoS) attacks

• Network and data link layers (IP addressing, ICMP, IPsec, OSPF; Ethernet, 802.11, LANs, fiber optic, frame protocols) – primarily internal attacks such as packet sniffing, Internet Control Message Protocol (ICMP) flood attacks, Denial of Service (DoS) attacks at the Dynamic Host Configuration Protocol (DHCP) layer, and MAC address spoofing

• Physical layer (transmission media, bit stream/signal and binary transmission) – data theft, hardware theft, physical destruction, unauthorized access to hardware/connections

For some OSI layers like Transport, Session, Presentation, and Application, some amount of exposure can be controlled using robust application-level security practices and cyber security testing. From a quality engineering perspective, it is important for testers to be involved in the digital security landscape. While there is no single approach to handle cyber security testing, the following five best practices can ensure application security by embedding cyber security testing seamlessly into organizations:

1. Defining and executing a digital tester's role in the DevSecOps model
2. Understanding and implementing data security testing practices in non-production environments
3. Security in motion – focusing on dynamic application security testing
4. Understanding the vulnerabilities in infrastructure security testing
5. Understanding roles and responsibilities for cloud security testing

Best practice 1: Defining and executing a digital tester's role in the DevSecOps model

DevSecOps means dealing with security aspects as code (security as code). It enables two things, namely ‘secure code’ delivered ‘at speed’. Here is how security-as-code works (a sketch of the gating step follows this list):

• Code is delivered in small chunks. Possible changes are submitted in advance to identify vulnerabilities

• The application security team triggers scheduled scans in the build environment. Code checkout happens from SVN or Git (version control systems)

• Code is automatically pushed for scanning after applying UI and server-based pre-scan filters, and is scanned for vulnerabilities

• Results are pushed to the software security center database for verification

• If there are no vulnerabilities, the code is pushed to the quality assurance (QA) and production stages. If vulnerabilities are found, they are backlogged for resolution

DevSecOps can be integrated to perform security tests on networks, digital applications and identity access management portals. The tests focus on how to break into the system and expose vulnerable areas.
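The sketch below illustrates the gating step of the flow above: after the scheduled scan runs in the build environment, a small script parses the findings report and fails the pipeline if anything above the agreed severity is present. The report path and JSON shape are assumptions for illustration; real scanners produce different formats, so a real pipeline would adapt the parsing to its specific tool.

```python
# Sketch of the gating step in the security-as-code flow above: parse a
# scanner's findings report and fail the pipeline on high-severity issues.
# The report path and JSON shape are assumptions; real scanners differ.
import json
import sys

SEVERITY_GATE = {"critical", "high"}

def gate_build(report_path="scan_findings.json"):
    with open(report_path) as handle:
        findings = json.load(handle)            # expected: list of finding dicts

    blocking = [f for f in findings if f.get("severity", "").lower() in SEVERITY_GATE]
    for finding in blocking:
        print(f"[BLOCKER] {finding.get('rule')}: {finding.get('file')}")

    if blocking:
        print(f"{len(blocking)} blocking vulnerabilities -> failing the build")
        sys.exit(1)                             # CI marks the stage as failed
    print("no blocking vulnerabilities; promoting code to QA")

if __name__ == "__main__":
    gate_build(sys.argv[1] if len(sys.argv) > 1 else "scan_findings.json")
```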
---
Page: 3 / 4
---
External Document © 2021 Infosys Limited Best practice 2: Understanding and implementing data security testing practices in non- production environments With the advent of DevOps and digital transformation, there is a tremendous pressure to provision data quickly to meet development and QA needs. While provisioning data across the developing pipelines is one challenge, another is to ensure security and privacy of data in the non-production environment. There are several techniques to do this as discussed below: • Dynamic data masking, i.e., masking data on the fly and tying database security directly to the data using tools that have database permissions • Deterministic masking, i.e., using algorithm-based data masking of sensitive fields to ensure referential integrity across systems and databases • Synthetically generating test data without relying on the production footprint by ensuring referential integrity across systems and creating a self-service database • Automatic clean-up of the sample data, sample accounts and sample customers created Best practice 3: Security in motion – Focus on dynamic application security testing This test is performed while the application is in use. Its objective is to mimic hackers and break into the system. The focus is to: • Identify abuse scenarios by mapping security policies to application flows based on the top 10 security vulnerabilities for Open Web Application Security Project (OWASP) • Conduct threat modeling by decomposing applications, identifying threats and categorizing/rating threats • Perform a combination of automated testing and black-box security/ penetration testing to identify vulnerabilities Best practice 4: Understanding the vulnerabilities in infrastructure security testing There are infrastructure-level vulnerabilities that cannot be identified with UI testing. Hence, infrastructure-level exploits are created and executed, and reports are published. The following steps give insights to the operations team to minimize/eliminate vulnerabilities at the infrastructure layer: • Reconnaissance and network vulnerability assessment including host fingerprinting, port scanning and network mapping tools • Identification of services and OS details on hosts such as Domain Name System (DNS) and Dynamic Host Configuration Protocol (DHCP) • Manual scans using scripting engine and tool-based automated scans • Configuration reviews for firewalls, routers, etc.<|endoftext|>• Removal of false positives and validation of reported vulnerabilities Best practice 5: Understanding roles and responsibilities for cloud security testing With cloud transformation, cloud security is a shared responsibility. Cloud security testing must involve the following steps: • Define a security validation strategy based on the type of cloud service models: • For Software-as-a-Service (SaaS), the focus should be on risk-based security testing and security audits/ compliance • For Platform-as-a-Service (PaaS), the focus should be on database security and web/mobile/API penetration testing • For Infrastructure-as-a-Service (IaaS), the focus should be on infrastructure and network vulnerability assessment • Conduct Cloud Service Provider (CSP) service integration and cyber security testing. 
The focus is on identifying system vulnerabilities, CSP account hijacking, malicious insiders, identity/access management portal vulnerabilities, insecure APIs, shared technology vulnerabilities, advanced persistent threats, and data breaches • Review the CSP’s audit and perform compliance checks These best practices can help enterprises build and create secure applications right from the design stage. Infosys has a dedicated Cyber Security Testing Practice that provides trusted application development and maintenance frameworks, security testing automation, security testing planning, and consulting for emerging areas. It aims to integrate security into the code development lifecycle through test automation with immediate feedback to development and operations teams on security vulnerabilities. Our approach leverages several open-source and commercial tools for security testing instrumentation and automation.
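As an illustration of the deterministic masking technique described in best practice 2, the sketch below uses a keyed HMAC to pseudonymize sensitive fields. Because the same input and key always produce the same masked value, referential integrity is preserved across systems. The field names, key handling and record layout are illustrative assumptions, not the behavior of any specific masking tool.

```python
import hashlib
import hmac

# In practice the masking key would come from a secrets manager,
# never from source code; a literal is used here only for illustration.
MASKING_KEY = b"replace-with-managed-secret"

SENSITIVE_FIELDS = {"customer_id", "email", "ssn"}  # illustrative field names


def mask_value(value: str) -> str:
    """Deterministically pseudonymize a value: same input -> same output."""
    digest = hmac.new(MASKING_KEY, value.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]


def mask_record(record: dict) -> dict:
    """Mask only the sensitive fields, leaving the rest untouched."""
    return {
        key: mask_value(str(val)) if key in SENSITIVE_FIELDS else val
        for key, val in record.items()
    }


# The same customer_id masks to the same token in both systems, so joins
# across masked non-production databases still work.
crm_row = {"customer_id": "C1001", "email": "jane@example.com", "plan": "gold"}
billing_row = {"customer_id": "C1001", "invoice": 42}
assert mask_record(crm_row)["customer_id"] == mask_record(billing_row)["customer_id"]
```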
---
Page: 4 / 4
---
© 2021 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected Conclusion The goal of cyber security testing is to anticipate and withstand attacks and recover quickly from security events. In the current pandemic scenario, it should also help companies adapt to short-term change. Infosys recommends the use of best practices for integrating cyber security testing seamlessly. These include building secure applications, ensuring proper privacy controls for data at rest and in motion, conducting automated penetration testing, and having clear security responsibilities identified with cloud service providers.<|endoftext|>About the
Authors Arun Kumar Mishra Senior Practice Engagement Manager, Infosys Sundaresasubramanian Gomathi Vallabhan Practice Engagement Manager, Infosys References 1. https://www.marketsandmarkets.com/Market-Reports/security-testing-market-150407261.html 2. https://www.infosys.com/services/validation-solutions/service-offerings/security-testing-validation-services.html
***
|
# Infosys Whitepaper
Title: Infosys Oracle Package | Independent Validation and Testing Services
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 8
---
WHITE PAPER ASSURING SUCCESS IN BLOCKCHAIN IMPLEMENTATIONS BY ENGINEERING QUALITY IN VALIDATION Arvind Sundarraman, Business Development Executive, Infosys Validation Solutions
---
Page: 2 / 8
---
Introduction to blockchain and smart contracts A blockchain is a decentralized and distributed database that holds a record of transactions linked to it in blocks which are added in a linear and chronological order. By design, it provides a secure, transparent, and immutable source of record, and the technology has infinite possibilities to change and revolutionize the way transactions are managed in a digital world.<|endoftext|>Figure 1: Transaction linkage within blocks in a blockchain Figure 2: Peer-to-peer architecture of blockchain using smart contracts in public and private networks The implementation of this technology is generally carried out in a decentralized peer-to-peer architecture with a shared ledger that is made available to the authorized participants in the private and public network. This ensures that the transactions are captured within the blocks of information, which are continually updated and securely linked to the chain, thereby ensuring visibility of the changes as well as providing a foolproof way of handling transactions while mitigating the possibility of double spending or tampering. Smart contracts are protocols or rule sets embedded within the blockchain which are largely self-executing and enforce a contract condition. They ensure that the terms specified within the contract are executed on their own when the transactions fulfil the specified condition within the contract rules and are made visible to everyone on the network without any intervention, thereby guaranteeing autonomy, safety, and trust. Smart contracts also ensure that the transactions are carried out instantaneously once the preconditions set within the contract are met.<|endoftext|>Smart contracts are an integral part of the blockchain and go hand in hand in ensuring the success of the distributed nature of the ledger that is the core of blockchain.<|endoftext|>External Document © 2018 Infosys Limited
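The block linkage described above can be illustrated with a minimal sketch: each block stores the hash of its predecessor, so altering any historical transaction breaks the chain and is immediately detectable. This is a teaching-level simplification with invented data, no consensus, mining, or smart contracts, not a production blockchain.

```python
import hashlib
import json
import time


def block_hash(block: dict) -> str:
    """Hash the block contents (excluding its own hash) deterministically."""
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()


def add_block(chain: list, transactions: list) -> None:
    """Append a block linked to the previous block's hash."""
    block = {
        "index": len(chain),
        "timestamp": time.time(),
        "transactions": transactions,
        "previous_hash": chain[-1]["hash"] if chain else "0" * 64,
    }
    block["hash"] = block_hash(block)
    chain.append(block)


def is_valid(chain: list) -> bool:
    """Verify that every block still matches its hash and its predecessor."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["previous_hash"] != chain[i - 1]["hash"]:
            return False
    return True


chain: list = []
add_block(chain, [{"from": "A", "to": "B", "amount": 10}])
add_block(chain, [{"from": "B", "to": "C", "amount": 4}])
assert is_valid(chain)

chain[0]["transactions"][0]["amount"] = 1000   # tamper with history
assert not is_valid(chain)                     # the linkage exposes the change
```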
---
Page: 3 / 8
---
Source : https://en.wikipedia.org/wiki/Blockchain_(database) Figure 3: Graph highlighting the rise in the adoption of blockchain in terms of transactions per day Blockchain and the digital ecosystem In today's world, digital transactions are growing exponentially and the phenomenon is predicted to sustain and even outperform its climb in the next decade. Digital transactions are gaining popularity globally due to ease of use, improved security, and faster mechanisms for managing transactions.<|endoftext|> Blockchain technology provides a secure way of handling transactions online and hence is of enormous relevance. The most widely known implementation of blockchain technology is in the financial sector, and Bitcoin as a cryptocurrency payment system was the pioneering system developed on this technology.<|endoftext|>Though the value of blockchain technology is not limited to digital wallets and payment systems, its application in a wider context has gained more relevance in recent times. Transactions through blockchain have also had an impressive surge similar to the growth in digital transactions. The graph below highlights the rise in usage of blockchain in terms of transactions. This also reflects the increasing adoption and usage of this technology across domains, beyond its initially perceived use in the financial sector for payments and transactions.<|endoftext|>External Document © 2018 Infosys Limited
---
Page: 4 / 8
---
A number of possible use cases across various domains where blockchain technology can be applied are detailed below, showing the potential that blockchain holds across various industries and segments.<|endoftext|>Financial services • Post-trade settlement, person-to-person (P2P) payments, trade finance, know your customer (KYC), global remittance, syndicated loans, asset leasing, gifting, and mortgage services Insurance • Automated claims processing, fraud detection, P2P insurance, travel insurance, reinsurance, and KYC Healthcare • Patient records and automated claim processing through smart contracts Manufacturing – Supply chain • Tracking product origin, inventory management, smart contracts for multi-party agreements, and digitization of contracts/documents Retail • Retail supply chain, reward points tracking, blockchain-based marketplace, and inventory management Energy and utility • Energy trading and P2P power grid Media and entertainment • Anti-piracy/copyright management, royalty management, and crowdfunding of new content Transportation • Rider-passenger coordination, review authentication, and Bitcoin payment Communication • Billing systems and call detail records (CDR) • Roaming and network sharing access control • Provisioning and identity management • Mobile wallets and money Challenges in testing blockchain implementations Validating and verifying a blockchain implementation offers a number of challenges due to the inherent structure of the technology as well as the distributed nature of the system.<|endoftext|>• Technology stack A primary factor that influences the required level of validation is whether the implementation is on a public platform like Ethereum or Openchain or on a self-setup or customized platform that is purpose built for the needs of the organization. The latter needs more setup and effort in testing. Open source and popular platforms like Ethereum have recommendations and guidance on the level of tests and have more mature methods, whereas an in-house implementation needs a detailed test strategy framed around the functionality that is customized or developed.<|endoftext|>• Test environment The availability and utilization of a test platform that provides a replica of the implementation is also a need and, if it is not available, considerable time needs to be spent on setting up or spawning from the real implementation.<|endoftext|> Blockchain implementations like Bitcoin (testnet) and Ethereum (Morden) provide test instances that are distinct and separate from the original while providing means to test advanced transaction functionalities in a like-for-like mode.<|endoftext|>• Testing for integration An implementation of blockchain within a company's stack is expected to have interfaces with other applications. Understanding the means of interfacing and ensuring consistency with the existing interfaces is key to assuring that there are no disconnects on launch. A detailed view of the touch points and the application programming interfaces (APIs) that act as points of communication needs to be made available to the testing team in advance so that the appropriate interfaces can be exercised and tested during the validation phases.<|endoftext|>• Performance testing A major problem that could affect an implementation is estimating and managing the level of transactions anticipated on the production systems. 
One of the key issues with the Bitcoin implementation has been delays in processing transactions due to a surge in usage. This has led to businesses withdrawing from accepting this form of cryptocurrency as well as partners leaving the consortium.<|endoftext|> The need within the chain for verification of the transactions by miners can lengthen the time taken to process and confirm a transaction, and hence during validation a clearly chalked out strategy to handle this needs to be outlined and applied.<|endoftext|>• Security Though blockchain technology has inherent security features that have made the protocol popular and trusted, it is generally applied in high-risk business domains where a security compromise can have huge implications, which places even greater importance on validating security in the intended areas of application. External Document © 2018 Infosys Limited
---
Page: 5 / 8
---
Figure 4: Illustration of the management system built on blockchain technology Validation in real-world situation The construct of validating a real world implementation and ensuring that the testing verifies the intended use case is of paramount importance as this is more complex and requires specific focus in understanding the needs as well as drafting the coverage that is required within the test cycle.<|endoftext|>For a more detailed view of how the testing space spans on the ground, let us consider an example of an application of this technology with an assumed integration of the feature for a telecom operator who implements this application to manage user access and permissions and handover in roaming within the operations support systems (OSS) business support systems (BSS) stack of the operator. It is assumed that the communication service provider (CSP) and the partner CSPs providing service to the customer during the roaming have an agreement for implementing a common blockchain based platform as a solution to manage the transaction in each other’s network as well as communicating with the CDR of the subscribers. The implementation should ensure that the authentication of the user, authorization of the service, and billing systems are managed between the CSPs while the user moves from the home mobile network to the visited mobile network of the partner CSP.<|endoftext|>A depiction below shows the illustration of the |
management system employed in this scenario that would be built on blockchain technology.<|endoftext|>The above model provides a secure medium of authentication of the user who is entering the partner network assuming that the identity of the user can be verified over a shared ledger system and signed by the respective base owners of the users. Similarly, the CDRs that relate to the roaming duration can be cleared over a joint system of clearing house that is built on blockchain technology. This would ensure instantaneous management of transactions and charges can be setup based on contracts to ensure that the information is relayed back to the customer instantaneously providing far more agility and trust in the management of the subscribers in other networks.<|endoftext|>With this system in design, let us look at the key activities that are needed to ensure that the implementation meets the desired objectives outlined.<|endoftext|>• System appreciation The early involvement of the testing team is extremely important since there should be a view of the applications within the stack that would be modeled on the new technology along with the affected system components. A detailed view of the impacted components needs to be included in the testing plan and a strategy put up in place for each of these components that are impacted. For example, in the above case, the billing systems as well as the user identity managed systems are impacted where the customer relationship management (CRM) system may not be directly affected by the change. Testing scope for the impacted components would be higher than the others where a functional workflow with regression test on the CRM systems could suffice. • Test design assurance In this phase, with an early system view, the team needs to put up a detailed level test strategy and assign a External Document © 2018 Infosys Limited
---
Page: 6 / 8
---
traceability to the requirements. Specific to blockchain testing, the key components that need to be checked in this phase include: • Build a model of the structure of the blocks, the transactions, and the contracts that need to be tested • Outline a use case for each individual section and the end points to be validated, i.e., if the transaction is to pass the user credentials to the visited network for authentication, then detail the scenarios under which testing could be required.<|endoftext|>• Estimate non-functional requirements (NFRs) as well as security testing needs. In the above example, the number of CDRs logged in the clearing house over a month should give a fair idea of the number of transactions expected, and hence performance tests can be considered at this level. For security testing, the APIs that are exposed need to be considered, as well as measuring the inherent level of security in the blockchain technology (private or public) • Test planning Test planning needs to consider the full test strategy as well as the methodology and the place to conduct the testing. For the above example, a full test strategy should outline the availability or non-availability of a test environment or a testnet and, if one is not available, a view of how a private one can be set up. A low-level view of how testing is conducted, along with the approach to each phase from unit through integration and functional tests, also needs to be finalized. It is recommended that the volume of tests follow a pyramid structure, with a good amount of lower-level (unit) tests to identify and correct the systems prior to integrating them with the rest of the systems in the stack. In the illustration below, it is assumed that an Ethereum-based blockchain solution has been implemented for managing user handling during roaming and that it integrates with an existing Amdocs billing system.<|endoftext|>

Table 1: Test phases with volumes of tests, methodologies, and tools

| Testing phase | Volume of tests | Test methodology and tools |
|---|---|---|
| Developer / Unit testing | 2500 | Test-driven development (TDD) approach with a suitable framework |
| System testing | 1000 | Verify contracts, blocks, and blockchain updates with auto-triggered roaming use cases set up on contracts through scripts |
| Billing integration testing | 275 | Verify reflection of CDRs from the clearing house on the billing system (Amdocs) |
| Functional verification / UI tests | 50 | Automated tests for front-end verification of bills logged to the customers, e.g., Selenium scripts for web testing |

• Use cases map The test plan needs to be verified and validated by business for inclusion of the user scenarios, which map to detailed-level test cases. Complete coverage can be ensured only when functional validation covers testing across the transactions (i.e., the user moving from the home network to the visited or external network), the blocks (i.e., the CDRs that are transmitted between the network partners) and the contracts (i.e., the rules allowing the user roaming permissions on a partner network) in all the different scenarios that govern the functionality.<|endoftext|>• Test execution and result verification Execution can ideally be automated with scripting, following a TDD approach for unit testing on a suitable framework and a similar approach as outlined in the test plan. The results then need to be consolidated and verified back. 
A list of defects identified needs to be reported along with the test execution status and conformance. Focus in test execution should be on unit and system tests as a higher level of defects can be detected in the core blockchain adoption in these phases. The integration and functional level tests can then be carried out to verify the integration of the core with the existing framework. However, the rule set for the blockchain needs to be verified within the unit and system testing phases.<|endoftext|> A guidance on the test strategy across various test phases is provided below with a call-out on the key activities to be catered to manage within each phase.<|endoftext|>External Document © 2018 Infosys Limited
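Since the rule set for the blockchain is to be verified in the unit and system testing phases, the sketch below shows what a TDD-style unit test for a simple roaming-authorization contract rule might look like. The rule logic, data fields, and CSP names are hypothetical illustrations; in a real engagement they would mirror the contract conditions agreed between the CSPs, and the tests would run with pytest or unittest as part of the unit-testing phase.

```python
# A hypothetical smart-contract rule: a subscriber may roam on a partner
# network only if the home CSP has an agreement with that partner and the
# subscriber's roaming flag is enabled.

ROAMING_AGREEMENTS = {("HomeCSP", "PartnerCSP")}   # illustrative data


def authorize_roaming(subscriber: dict, visited_csp: str) -> bool:
    """Evaluate the contract precondition for granting roaming access."""
    has_agreement = (subscriber["home_csp"], visited_csp) in ROAMING_AGREEMENTS
    return has_agreement and subscriber.get("roaming_enabled", False)


# Unit tests written first, TDD-style (runnable with pytest).
def test_roaming_granted_with_agreement_and_flag():
    sub = {"home_csp": "HomeCSP", "roaming_enabled": True}
    assert authorize_roaming(sub, "PartnerCSP") is True


def test_roaming_denied_without_agreement():
    sub = {"home_csp": "HomeCSP", "roaming_enabled": True}
    assert authorize_roaming(sub, "UnknownCSP") is False


def test_roaming_denied_when_flag_disabled():
    sub = {"home_csp": "HomeCSP", "roaming_enabled": False}
    assert authorize_roaming(sub, "PartnerCSP") is False
```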
---
Page: 7 / 8
---
Figure 5: Test strategy across various test phases with a call-out on the key activities External Document © 2018 Infosys Limited
---
Page: 8 / 8
---
Summary The key to successfully adopt a blockchain methodology rests in a well-balanced detailed approach to design and validation. While the mechanism of validating and verifying is largely similar to any other system, there is a necessity to focus on specific areas and evolving a test strategy in line with the implementation construct that is applied for the domain. Infosys Validation Solutions (IVS) as a market leader in quality assurance offers next-generation solutions and services and engages with more than 300 clients globally. With focussed offerings across various domains and segments and with key offering focussed in digital assurance space, Infosys Validation Solutions are committed to deliver high-quality products by adopting newer methodologies and solutions for partnering enterprises across the globe.<|endoftext|>© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
***
|
# Infosys Whitepaper
Title: Choosing the right automation tool
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 8
---
VIEW POINT CHOOSING THE RIGHT AUTOMATION TOOL AND FRAMEWORK IS CRITICAL TO PROJECT SUCCESS Harsh Bajaj, Technical Test Lead ECSIVS, Infosys
---
Page: 2 / 8
---
Introduction Organizations have become cognizant of the crucial role of testing in the software development life cycle and in delivering high-quality software products. As the competition in the IT sector grows stiffer, the pressure to deliver a larger number of high-quality products with fewer resources in limited time is increasing in intensity.<|endoftext|>During development cycles, software tests need to be repeated to ensure quality. Every time the source code is modified, test cases must be executed. All iterations in the software need to be tested on all browsers and all supported operating systems. Manual execution of test cases is not only a costly and time-consuming exercise, but it is also prone to error.<|endoftext|>Automation testing addresses these challenges presented by manual testing. Automation tests can be executed multiple times across iterations much faster than manual test cases, saving time as well as cost. Lengthy tests which are often skipped during manual test execution can be executed unattended on multiple machines with different configurations, thus increasing the test coverage. Automation testing helps find defects or issues which are often overlooked during manual testing or are impossible to detect manually – for example, spelling mistakes or hard coding in the application code. Automation also boosts the confidence of the testing team by automating repetitive tasks and enabling the team to focus on challenging and high-risk projects. Team members can improve their skill sets by learning new tools and technologies and pass on the gains to the organization. The time and effort spent on scientifically choosing a test automation product and framework can go a long way in ensuring successful test execution. Let us take a closer look at various factors involved in the selection process. External Document © 2018 Infosys Limited
---
Page: 3 / 8
---
Identify the right automation tool Identification of the right automation tool is critical to ensure the success of the testing project. Detailed analysis must be conducted before selecting a tool. The effort put in the tool evaluation process enables successful execution of the project. The selection of the tool depends on various factors such as: • The application and its technology stack which is to be tested • Detailed testing requirements • Skill sets available in the organization • License cost of the tool There are various functional automation tools available in the market for automating web and desktop applications. Some of these are:

| Tool | Description |
|---|---|
| QTP (Quick Test Professional) / UFT (Unified Functional Testing) | Powerful tool from HP to automate web and desktop applications |
| Selenium | Open source automation tool for automating web applications |
| Watir (Web Application Testing in Ruby) | Open source family of Ruby libraries for automating web browsers |
| Geb | Open source automation tool based on Groovy |

Comparison Matrix While analyzing various automation tools, a comparison of key parameters helps select the right tool for the specific requirements of the project. We have created a comparison chart of the tools listed above based on the most important parameters for automation projects. Organizations can assign values to these parameters as per their automation requirements. The tool with the highest score can be considered for further investigation.<|endoftext|>External Document © 2018 Infosys Limited
---
Page: 4 / 8
---
The comparison below covers parameters spanning ease of adoption, ease of scripting and reporting capabilities, and tools usage.

| Criteria | UFT | Selenium | Watir | GEB |
|---|---|---|---|---|
| License cost | QTP is an HP licensed product available through single-seater, floating or concurrent licenses | Selenium is open-source software and is free | Watir is an open-source (BSD) family of Ruby libraries for automating web browsers and is free | GEB is open-source software based on Groovy and is free |
| Ease of support | Dedicated HP support | User and professional community support available | Limited support on open source community | Limited support on open source community |
| Script creation time | Less | Much more | More | More |
| Scripting language | VB Script | Java, CSharp, Python, Ruby, Php, Perl, JavaScript | Ruby | Groovy |
| Object recognition | Through Object Spy | Selenium IDE, FireBug, FirePath | OpenTwebst (web recorder) | GEB IDE |
| Learning time | Less | Much more | Much more | Much more |
| Script execution speed | High | Low | Low | Low |
| Framework | In-built capability to build frameworks such as keyword-driven, data-driven and hybrid | JUnit, NUnit, RSpec, Test::Unit, TestNG, unittest | Ruby supported frameworks - RSpec, Cucumber, Test::Unit | Grails, Gradle, Maven |
| Continuous integration | Can be achieved through Jenkins | Achieved through Jenkins | Achieved using Ruby script | Achieved using Grails, Gradle plug-in along with Jenkins Gradle plug-in |
| Non browser-based app support | Yes | No | No | No |
| Operating system support | Windows 8/8.1/7/XP/Vista (no other OS) | Windows, MAC OS X, Linux, Solaris (OS support depends on web-driver availability) | Windows 8.1, Linux 13.10, MAC OS X 10.9, Solaris 11.1 (need JSSH compiled) | Windows XP/Vista/7, Linux, DOS (OS support depends on web-driver availability) |
| Browser support | IE (version 6-11), Firefox (version 3-24), Chrome (up to version 24) | Firefox, IE, Chrome, Opera, Safari | Firefox, IE, Chrome, Opera, Safari | Firefox, IE, Chrome, Opera, Safari |
| Device support | Supports iOS, Android, Blackberry, and Windows Phone via licensed products such as PerfectoMobile and Experitest | Two major mobile platforms, iOS and Android | Two major mobile platforms, iOS and Android | Driver available for iOS (iPhone and iPad) |

External Document © 2018 Infosys Limited
---
Page: 5 / 8
---
What is a test automation framework? A test automation framework is a defined, extensible support structure within which the test automation suite is developed and implemented using the selected tool. It includes the physical structures used for test creation and implementation as well as the logical interaction between components such as: • Set of standards or coding guidelines, for example, guidelines to declare variables and assign them meaningful names • Well-organized directory structure • Location of the test data • Location of the Object Repository (OR) • Location of common functions • Location of environment-related information • Methods of running test scripts and location of the display of test results A well-defined test automation framework helps us achieve higher reusability of test components, develop scripts that are easily maintainable and obtain high-quality test automation scripts. If the automation framework is implemented correctly, it can be reused across projects, resulting in savings on effort and a better return on investment (ROI) from automation projects. Let us discuss some key frameworks available in the industry today.<|endoftext|>After analyzing the pros and cons of these frameworks, most companies opt for the hybrid framework. This allows them to benefit from the best of the multiple functional test automation frameworks available in the industry.<|endoftext|>Modular Framework • In the modular approach, reusable code is encapsulated into modular functions in external libraries. These functions can then be called from multiple scripts as required. • This framework is well suited in situations where the application includes several reusable steps to be performed across test scripts.<|endoftext|>Data-driven Framework • In the data-driven approach, test input and expected output data are kept in external sources such as spreadsheets or databases, and the same script is executed repeatedly with different data sets. • This framework is well suited when the same workflow needs to be validated against a large number of data combinations.<|endoftext|>Hybrid Framework • The hybrid model is a mix of the data-driven and modular frameworks. • This framework mixes the best practices of different frameworks suitable to the automation need.<|endoftext|>Keyword-driven Framework • In the keyword-driven framework, a keyword is identified for every action that needs to be performed and the details of the keyword are given in a spreadsheet.<|endoftext|>• This framework is more useful for non-technical users to understand and maintain the test scripts.<|endoftext|>• Technical expertise is required to create a complex keyword library. • Creating such a framework is a time-consuming task.<|endoftext|>External Document © 2018 Infosys Limited
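As a small illustration of the keyword-driven idea described above, the sketch below maps keywords from a spreadsheet-like table to Python functions. The keywords, steps and data are invented for illustration; a real framework would read the table from Excel and have the keyword library drive a tool such as Selenium or UFT instead of printing.

```python
# Keyword table as a non-technical user might maintain it
# (in practice this would be read from a spreadsheet).
TEST_STEPS = [
    ("open_url", "https://example.com/login"),
    ("enter_text", "username", "test_user"),
    ("click", "login_button"),
]


# Keyword library maintained by the technical automation team.
def open_url(url):
    print(f"Opening {url}")             # would call driver.get(url) in a real run


def enter_text(field, value):
    print(f"Typing '{value}' into {field}")


def click(element):
    print(f"Clicking {element}")


KEYWORDS = {"open_url": open_url, "enter_text": enter_text, "click": click}


def run(steps):
    """Dispatch each row of the keyword table to its implementation."""
    for keyword, *args in steps:
        KEYWORDS[keyword](*args)


run(TEST_STEPS)
```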
---
Page: 6 / 8
---
Implementing Hybrid Framework with Selenium Now let us see how to implement a hybrid framework with Selenium as an automation tool. The key points in the implementation are: • Store the test data (any user input data) in an Excel file.<|endoftext|>• Store the environment-related information (for example, QA, UAT, Regression) in a property file.<|endoftext|>• Store the various objects in the application on which user action needs to be taken in object repository files.<|endoftext|>• The test suite contains the logic to verify acceptance criteria mentioned in the requirement.<|endoftext|>• Execute the script on various browsers as per the need.<|endoftext|>• Generate reports capturing screenshots and pass/fail results. To get advanced reports in Selenium, use a testing framework such as TestNG or JUnit.<|endoftext|>{{ img-description: hybrid framework architecture – a data layer (test data in Excel, environment property file) and an object repository (Object A, Object B, Object C) feed the test scripts and test suite, which are executed through an ANT builder against the application under test (AUT), with a reporting module producing captured screenshots, XML-based logs and HTML-based reports }} External Document © 2018 Infosys Limited
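A minimal sketch of these key points in Python with Selenium is shown below. The file names (`environment.properties`, `login_data.csv`, `object_repository.json`), the locator format, and the use of CSV and JSON in place of Excel and a full object repository are illustrative assumptions made to keep the example self-contained; a real framework would add the ANT/TestNG-style build and reporting layers described above.

```python
import configparser
import csv
import json

from selenium import webdriver
from selenium.webdriver.common.by import By

# Data layer: environment details and test data kept outside the scripts.
config = configparser.ConfigParser()
config.read("environment.properties")          # e.g. a [QA] section with base_url
env = config["QA"]

with open("login_data.csv", newline="") as f:  # CSV used here in place of Excel
    test_data = list(csv.DictReader(f))

# Object repository: locators maintained separately from test logic,
# e.g. {"username": ["id", "user"], "password": ["id", "pass"], ...}
with open("object_repository.json") as f:
    objects = json.load(f)


def locator(name):
    """Translate a repository entry into a (By, value) pair."""
    strategy, value = objects[name]
    return getattr(By, strategy.upper()), value


driver = webdriver.Chrome()
try:
    for row in test_data:                      # data-driven loop over test rows
        driver.get(env["base_url"] + "/login")
        driver.find_element(*locator("username")).send_keys(row["username"])
        driver.find_element(*locator("password")).send_keys(row["password"])
        driver.find_element(*locator("login_button")).click()

        # Reporting module: capture evidence for each data-driven iteration.
        driver.save_screenshot(f"result_{row['username']}.png")
finally:
    driver.quit()
```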
---
Page: 7 / 8
---
Implementing Hybrid Framework with QTP/UFT Let us discuss how to implement a hybrid framework with QTP/UFT as an automation tool. The key points in the implementation are: • Store the test data (any user input data) in an Excel file.<|endoftext|>• Create a run manager sheet to drive the test execution. • Store the environment-related information in a property file. • Store the objects in the application on which user actions are taken in object repository files. • Divide the test cases into modular functions, keeping common functions separate to be used across projects. • Include in the main script the common functions, object repository and test data. Generate different types of reports as per the business requirements.<|endoftext|>{{ img-description: hybrid framework architecture with QTP/UFT – test data (Excel) and environment data form the data layer; an object repository covers App 1, App 2 and App 3; reusable functions and the test suite are executed against the application under test (AUT), with a reporting module producing an execution summary, test script results and an error log }} External Document © 2018 Infosys Limited
---
Page: 8 / 8
---
Executing Proof of Concept for the Selected Tool The last phase of tool evaluation is proof of concept (PoC). The tool selected may satisfy your criteria conceptually, but it is advisable to test the tool using a few scenarios. Almost every tool vendor provides an evaluation version of their tool for a limited time period. The following steps need to be considered during the PoC: • Choose a few scenarios in such a way that they cover different objects and controls in the application.<|endoftext|>• Select the tool(s) based on a comparative study.<|endoftext|>• Automate the chosen scenarios using the selected tool(s).<|endoftext|>• Generate and analyze various reports.<|endoftext|>• Analyze the integration of the tool with other tools such as the available test management tool, for example, QC (Quality Center), and with continuous integration tools such as Jenkins.<|endoftext|>While evaluating multiple tools, generate a score-card based on various parameters such as ease of scripting, integration, usage and reports generated, and choose the tool with the maximum score. When the PoC is completed, the team can be more confident about successfully automating the application using the selected tool.<|endoftext|>Conclusion The process of automation framework design and development requires detailed planning and effort. To achieve the desired benefits, the framework must be accurately designed and developed. Such a framework can then be used across projects in an organization and provides substantial ROI. When choosing an automation framework, it is crucial to ensure that it can easily accommodate the various automation testing technologies and changes in the system under test.<|endoftext|>One of the key factors contributing to the accomplishment of any test automation project is identifying the right automation tool. A detailed analysis in terms of ease of use, reporting and integration with various tools must be performed before selecting a tool. Though such selection processes call for focused effort and time, this investment is worth making because of the great impact it has on the success of the automation project.<|endoftext|>About the
Author Harsh Bajaj is a Technical Test Lead with Infosys Independent Validation Services Team. She has led various test automation projects in the telecom domain. She is proficient in various test management and automation tools. About the Reviewer Gautham Halambi is a Project Manager with Infosys and has more than 9 years of experience. He has worked on multiple telecom testing projects and has led and managed large manual and automation testing teams.<|endoftext|>References www.seleniumhq.org www.watir.com, http://www.gebish.org/ http://www.automationrepository.com/ http://en.wikipedia.org/wiki/Keyword-driven_testing © 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
***
|
# Infosys Whitepaper
Title: Need of a Test Maturity Model
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 8
---
PERSPECTIVE NEED FOR A COMPREHENSIVE TEST MATURITY MODEL - Reghunath Balaraman, Harish Krishnankutty Abstract The constant change and ever-growing complexity of business have necessitated that IT organizations, and specifically Test/QA organizations, make thorough and periodic introspection of their processes and delivery capabilities. This is necessary to ensure that at all possible times the Test/QA organization, and its systems and processes, are relevant and available to support business needs. While there are multiple maturity models in the marketplace to help this process, they are not yet comprehensive enough and fail to provide today's dynamic businesses the much-needed flexibility and power of customization. The need of the hour is a comprehensive Test/QA maturity assessment model, which not only answers the requirements of customization and flexibility, but also ensures relevance in today's complex delivery structures of multi-vendor scenarios, multi-location engagements, global delivery models, etc.<|endoftext|>
---
Page: 2 / 8
---
The global economic crisis and revolutionary technology trends have changed the role of IT organizations in supporting business growth. Though the recession created a scarcity of capital for IT investments, the demands and expectations on the ability of IT to quickly adapt and support business have only increased multifold. In addition, rapid and revolutionary changes in technologies are forcing companies to recast their entire IT landscape. All these factors together have created a complex environment where the demand for change from the business is high, the capital for investment is scarce and time-to-market is critical to success. The constant change and ever-growing complexity of the business environment, and the risks associated, have necessitated that organizations make thorough and periodic introspection of their processes and delivery capabilities to ensure operation in an efficient, effective and agile manner. A key requirement, amongst all, is the ability to ensure that there are adequate controls in place to ensure quality in the processes and the outcomes, no matter the extent of change being introduced in the business or technology landscape of the organization. This is where the Test/QA organization's capability comes under the scanner. An organization's ability to assure and control the quality of its IT systems and processes largely determines the success or failure of the business in capturing, servicing and expanding its client base. When quality and reliability play a very significant role in determining the current and future course of business outcomes delivered, it is imperative that the Quality Assurance function itself is evaluated periodically for the relevance, effectiveness and efficiency of its processes, practices and systems. An objective self-introspection is the ideal first step. However, more often than not, QA organizations fall short of using this process to unearth gaps in their current systems and practices. Also, many organizations may be too out of touch with the ever-evolving world of QA to be aware of the leading practices and systems available today. This necessitates an independent assessment of the organization's QA practices to benchmark it against the practices prevalent in the industry and to get that all-important question "where do we stand in comparison with industry standards?" answered. Also, the assessment of maturity in testing processes becomes critical in laying out the blueprint for a QA/testing transformation program that would establish the function as a fit-for-purpose one, and often world-leading. External Document © 2018 Infosys Limited
---
Page: 3 / 8
---
Limitations of traditional approaches to Test/QA Maturity Assessment There are several models, proprietary and others, available for assessing the maturity of the IT processes and systems, including quality assurance. Most of these are developed and promoted as models helping an organization to certify capabilities in one or more areas of the software development lifecycle. Like all other models and frameworks that lead to certification, these maturity models too have a fixed framework for an organization to operate within, and provide very little flexibility to address specific assessment needs. Further, these models fail to help organizations assess overall process maturity due to the following limitations: Inability to accommodate and account for heterogeneous delivery structures Over the last decade or so, most organizations have evolved into a heterogeneous composition of internal staff and service providers, delivering services through global delivery models with diverse talent, disparate processes, etc. All this has made assessing an organization’s process maturity increasingly difficult. The existing maturity models in the marketplace are not flexible enough to accommodate for these complex delivery structures created through multi-vendor scenarios, multi-location engagements focusing on selective parts of the software development lifecycle, etc. This significantly reduces the overall effectiveness of the output provided by the existing maturity model and its applicability to the client situation.<|endoftext|>Focused on comprehensive certification rather than required capabilities Most conventional models are “certification focused” and can help organizations in assessing their IT process capabilities and getting certified. They are exhaustive in the coverage of process areas and answer the question, “how comprehensive are the processes and practices to service a diverse sets of users of the QA services?”. Such a certification is often a much needed qualification for IT service provider organizations to highlight their process capability and maturity to diverse clients and prospects. However, most non-IT businesses maintain IT divisions to support their business and are more interested in selectively developing the required capabilities of their respective IT groups, leading to efficient business processes and better business outcomes. Hence 1 External Document © 2018 Infosys Limited
---
Page: 4 / 8
---
the focus of maturity assessments in these organizations is not certification, but the ability to deliver specific business outcomes. Since the traditional assessment models are often certification-focused, most non-IT businesses find it an overhead to go through an exhaustive assessment process that does not help them answer the question, “how effective are my organization’s QA processes and practices to ensure quality of my business outcomes? Staged Vs Continuous model for growth in maturity Majority of certification models follow a staged approach, which means that the organization has to satisfy all the requirements of a particular level and get certified in the same, before becoming eligible for progress to the next level. But, most organizations are selective in their focus and want to develop those areas that are relevant and necessary to their business, rather than meeting all the requirements just to get certified at a particular level. Because of the staged approach to certification, such maturity models do not present organizations with a good view of where their current capabilities stand with respect to what is needed by the organization.<|endoftext|>Lack of focus on QA The existing maturity models primarily focus on software development, and treat testing as a phase in the Software Development Lifecycle. However, today, testing has evolved as a mature and specialized discipline in the software industry and hence the ability of the traditional models to assess the QA/testing processes and practices to the required level of detail is very limited. They fall short of organizations that have realized the need/ importance for an independent testing team and want to manage the QA maturity mapping process as an independent entity. Hence, the various dimensions of the test organization should be given adequate focus in the maturity assessment approach covering the Process, People and Technology aspects of testing.<|endoftext|>External Document © 2018 Infosys Limited
---
Page: 5 / 8
---
A comprehensive model to assessing an organization’s test capabilities and the ability to handle transformational programs Now that we have looked at the shortcomings of the traditional models of QA assessment, it is time to answer that all-important question, “what should a comprehensive model for assessing QA/ Test maturity be like?”. The key attributes of a comprehensive QA/Test assessment framework/model can be summed up as follows: Provide business-comprehensible decision-aiding results The model should allow for selective assessment of the relevant parameters for maturity, in the context of business. The results of the assessment should help the business identify and plot the possibility of immaturity in their systems and processes, using lead indicators that have a negative impact on the business. These indicators should help the senior management to decide whether to go for a detailed assessment of maturity, before any adverse effect on business is felt. Choice of business-relevant factors and focus areas The model should be flexible enough to provide the right level of focus on the various factors, business deems relevant, that contribute to the overall maturity index. For example, an organization which depends on one or a set of service providers for their key |
IT services may want strong governance and gating mechanisms. While, another organization that does testing in-house, and leverages vendors for development, will have a much wider focus on maturity in processes and practices. Basically, the model should be flexible enough to account for the intent of assessment, as outlined by the organization.<|endoftext|>Detailed and comprehensive view of areas of improvement and strengths The model should also be one that helps determine the maturity of the testing organization in a detailed manner. The methods and the systems of the model should provide a robust mechanism of objectively calculating the maturity level of the testing organization, based on the behaviors exhibited by the organization. It should provide the members of the QA organization with a detailed view of the areas of strength (and hence to be retained) and the areas of improvement. The model should enable the testing organization to understand the measures that should be implemented at the granular level, rather than at a high-level and thereby help the organization to focus on their key QA dimensions, 2 External Document © 2018 Infosys Limited
---
Page: 6 / 8
---
and strengthen the maturity of these dimensions.<|endoftext|>A frame of reference for improvement initiatives The comprehensive maturity model should provide the organization with a roadmap to move its QA/Testing processes and practices to a higher level of maturity and effectiveness. It should provide a reference framework for selective improvement of capabilities, keeping in mind the business context and organizational objectives. This will help the organization design a roadmap for improvement and devise ways to implement the same effectively.<|endoftext|>Conclusion So, in order to meet the needs of a dynamic business environment and rapidly evolving technology space, IT organizations need to respond quickly and efficiently with high-quality, high- reliability and cost-effective processes and systems. This calls for a robust and scalable QA organization that can guard and ensure the quality of solutions that are put into operation, and assess itself on its capabilities and maturity, periodically to ensure business-relevance and effectiveness.<|endoftext|>Hence a comprehensive QA maturity model, which assists organizations in this assessment, should move away from certification-based models with “generic” and “hard-to-customize” stages, to a model that is adaptable to the context in which business operates. It needs to be a model that evaluates factors that influence maturity and quality of processes at a detailed level and helps the organization to embed quality and maturity in processes, governance and development of key competencies. This would help ensure the maturity of operations and promote continuous improvement and innovation throughout the organization.<|endoftext|>3 External Document © 2018 Infosys Limited
---
Page: 7 / 8
---
About the
Authors Reghunath Balaraman (Reghunath_Balaraman@infosys.com) Reghunath Balaraman is a Principal Consultant and has over 16 years of experience. A postgraduate in Engineering and Management, he has been working closely with several large organizations to assess the maturity of their test and QA organizations and to help them build mature and scalable QA organizations. Reghunath is also well versed with several industry models for assessing the maturity of software testing.<|endoftext|>Harish Krishnankutty (Harish_T@infosys.com) Harish Krishnankutty is an Industry Principal in Infosys' Independent Validation Solutions unit and has over 14 years of experience in the IT industry. His area of specialization is QA consulting and program management, and he has extensive expertise in the design and implementation of Testing Centers of Excellence/Managed QA services. Currently, he focuses on the development of tools, IP and services in different areas of specialized testing, including Test Automation, SOA Testing, Security Testing, User Experience Testing, Data Warehouse Testing and Test Data Management.<|endoftext|>External Document © 2018 Infosys Limited
---
Page: 8 / 8
---
© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
***
|
# Infosys Whitepaper
Title: DAST Automation for Secure, Swift DevSecOps Cloud Releases
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 8
---
Abstract DevSecOps adoption in the cloud goes well beyond merely managing continuous integration and continuous deployment (CI/CD) cycles. Its primary focus is security automation. This white paper examines the barriers organizations face when they begin their DevSecOps journey, and beyond. It highlights one of the crucial stages of security testing known as Dynamic Application Security Testing (DAST). It explores the challenges and advantages of effectively integrating DAST into the CI/ CD pipeline, on-premises and in the cloud. The paper delineates the best practices for DAST tool selection and chain set-up, which assist in shift-left testing and cloud security workflows that offer efficient security validation of deployments with risk- based prompt responses.<|endoftext|>DAST AUTOMATION FOR SECURE, SWIFT DEVSECOPS CLOUD RELEASES WHITE PAPER
---
Page: 2 / 8
---
Background Traditional security practices involve security personnel running tests, reviewing findings, and providing developers with recommendations for modifications. This process, including threat modeling, conducting compliance checks, and carrying out architectural risk analysis and management, is time-consuming and incongruous with the speed of DevOps. Some of these practices are challenging to automate, leading to a security and DevOps imbalance. To overcome these challenges, many organizations have shifted to an agile DevOps delivery model. However, this exerts significant pressure on DevOps to achieve speed with security as part of the CI/CD pipeline. As a result, release timelines and quality have been impacted due to the absence of important security checks or the deployment of vulnerable code under time pressure.<|endoftext|>Even as DevOps was evolving, the industry concurrently fast-tracked its cloud transformation roadmap. Most organizations shifted their focus to delivering highly scalable applications built on customized modern architectures with 24/7 digital services. These applications include a wide-ranging stack of advanced tiers, technologies, and microservices, backed by leading cloud platforms such as AWS, GCP, and Azure. Despite the accelerated digital transformations, a large number of organizations continue to harbor concerns about security. The year-end cybercrime statistics provide good reason to do so: 1. The global average cost of a data breach is an estimated US $4.35 million, as per IBM's 2022 data breach report1 2. Cybercrime cost the world US $7 trillion in 2022 and is set to reach US $10.5 trillion by 2025, according to Cybersecurity Ventures2 Evidently, security is an important consideration in cloud migration planning. Speed and agility are imperatives while introducing security to DevOps processes. Integrating automated security checks directly into the CI/CD pipeline enables DevOps to evolve into DevSecOps. DevSecOps is a flexible collaboration between development, security, and IT operations. It integrates security principles and practices into the DevOps life cycle to accelerate application releases securely and confidently. Moreover, it adds value to business by reducing cost, improving the scope for innovation, speeding recovery, and implementing security by design. Studies project DevSecOps to reach a market size of between US $20 billion and US $40 billion by the end of 2030.<|endoftext|>
---
Page: 3 / 8
---
DevSecOps implementation challenges As enterprises race to get on the DevSecOps bandwagon, IT teams continue to experience issues: • 60% find DevSecOps technically challenging 3 • 38% report a lack of education and adequate skills around DevSecOps 3 • 94% of security teams and 93% of development teams report an impact from the talent shortage 1 Some of the typical challenges that IT teams face when integrating security into DevOps on-premise or in the cloud are: People/culture challenges: • Lack of awareness among developers of secure coding practices and processes • Want of collaboration and cohesive, skillful teams with development, operations, and security experts Process challenges: • Security and compliance remain a postscript • Inability to fully automate traditional manual security practices to integrate into DevSecOps • Continuous security assessments without manual intervention Tools/technology challenges: • Tool selection, complexity, and integration problems • Configuration management issues • Prolonged code scanning and consumption of resources Solution Focusing on each phase of the modern software development life cycle (SDLC) can help strategically resolve DevSecOps implementation challenges arising from people, processes, and technology. Integrating different types of security testing for each stage can help overcome the issues more effectively (Figure 1). Figure 1: Modern SDLC with DevSecOps and Types of Security Testing {{ img-description: pipeline stages Plan (requirements), Code (code repository), Build (CI server), Test (integration testing), Release (artifact repository), Deploy (CD orchestration) and Operate (monitor), with associated security activities: threat modelling, software composition analysis and secret management, secure code analysis and Docker linting, dynamic application security testing, network vulnerability assessments, system/cloud hardening, and cloud configuration reviews }}
---
Page: 4 / 8
---
What is DAST? DAST is the technique of identifying the vulnerabilities and touchpoints of an application while it is running. DAST is easy even for beginners to get started on without in-depth coding experience. However, DAST requires a subject matter expert (SME) in the area of security to configure and set up the tool. An SME with good spidering techniques can build rules and configure the correct filters to ensure better coverage, improve the effectiveness of the DAST scan, and reduce false positives.<|endoftext|>Best practices to integrate DAST with CI/CD • Integrate the DAST scan in the CI/CD production pipeline after provisioning the essential compute resources, knowing that the scan will take under 15 minutes to complete. If not, create a separate pipeline in a non-production environment • Create separate jobs for each test in the case of large applications, e.g., SQL injection and XSS, among others • Consider onboarding an SME with expertise in spidering techniques, as the value created through scans is directly proportional to the skills exhibited • Roll out security tools in phases based on usage, from elementary to advanced • Fail builds that report critical or high-severity issues • Save time building test scripts from scratch by leveraging existing scripts from the functional automation team • Provide links to knowledge pages in the scan outputs for additional assistance • Pick tools that provide APIs • Keep the framework simple and modular • Control the scope and false positives locally instead of maintaining a central database • Adopt the everything-as-a-code strategy as it is easy to maintain Besides adopting best practices, the CI/CD environment needs to be test-ready. A basic test set-up includes: • A developer machine for testing locally • A code repository for version control • A CI/CD server for integrations and running tests with the help of a slave/runner • A staging environment There can be several alternatives to the set-up based on the toolset selection. The following diagram depicts a sample (see Figure 2).<|endoftext|>Figure 2: DevSecOps Lab Set-up
---
Page: 5 / 8
---
Right tool selection
With its heavy reliance on tools, DevSecOps enables the automation of engineering processes, such as making security testing repeatable, increasing testing speed, and providing early qualitative feedback on application security. Therefore, selecting the appropriate security testing tools for specific types of security testing and applying the correct configuration in the CI/CD pipeline is critical.
Challenges in tool selection and best practices
Common pitfalls
• Lack of standards in tool selection
• Security issues arising from tool complexity and integration
• Inadequate training, skills, and documentation
• Configuration challenges
Best practices in tool selection
• Expert coverage of tool standards
• Essential documentation and security support
• Potential for optimal tool performance, including language coverage, open-source or commercial options, the ability to ignore issues, incident severity categories, failure on issues, and results reporting features
• Cloud technology support
• Availability of customization and integration capabilities with other tools in the toolchain
• Continuous vulnerability assessment capability
Best practices in tool implementation
• Create an enhanced set of customized rules for tools to ensure optimum scans and reliable outcomes
• Plan incremental scans to reduce the overall time taken
• Use artificial intelligence (AI) capabilities to optimize the analysis of vulnerabilities reported by tools
• Aim for zero-touch automation
• Consider built-in quality through automated gating of the build against the desired security standards
After selecting the CI/CD and DAST tools, the next step is to |
Continue # Infosys Whitepaper
set up a pre-production or staging environment and deploy the web application. This set-up enables DAST to run in the CI/CD pipeline as a part of integration testing. Let us consider an example using the widely available open-source DAST tool, Zed Attack Proxy (ZAP). Some of the key considerations for integrating DAST in the CI/CD pipeline using ZAP (see Figure 3) are listed below:
• Test on the developer machine before moving the code to the CI/CD server and the GitLab CI/CD
• Set up the CI/CD server and GitLab. Ensure ZAP container readiness with Selenium on Firefox, along with custom scripts
• Reuse the functional automation scripts, only modifying them for security testing use cases and data requirements
• Push all the custom scripts to the Git server and pull the latest code. Run the pipeline after meeting all prerequisites
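The bullet on reusing functional automation scripts can be illustrated with a small sketch: an existing Selenium check is routed through the ZAP container's proxy port, so the passive scanner observes the same pages the functional test exercises. The host names, ports, and target URL below are placeholders, and this wiring is one common way to do it rather than a prescribed set-up.

```python
"""Illustrative reuse of a functional Selenium script for DAST: the browser
is proxied through a running ZAP instance so that ZAP passively scans
everything the functional flow touches. Host names, ports and the target
URL are placeholders."""
from selenium import webdriver
from selenium.webdriver.firefox.options import Options

ZAP_PROXY = "zap:8080"                         # address of the ZAP container in the CI network
TARGET_URL = "http://staging-app:8000/login"   # application under test (placeholder)


def build_proxied_driver() -> webdriver.Firefox:
    """Create a headless Firefox session that sends all traffic via ZAP."""
    options = Options()
    options.add_argument("--headless")
    host, port = ZAP_PROXY.split(":")
    # Route HTTP and HTTPS through the ZAP proxy.
    options.set_preference("network.proxy.type", 1)
    options.set_preference("network.proxy.http", host)
    options.set_preference("network.proxy.http_port", int(port))
    options.set_preference("network.proxy.ssl", host)
    options.set_preference("network.proxy.ssl_port", int(port))
    options.accept_insecure_certs = True       # ZAP re-signs TLS traffic
    return webdriver.Firefox(options=options)


def run_functional_flow() -> None:
    """Unchanged functional check, now observed by ZAP's passive scanner."""
    driver = build_proxied_driver()
    try:
        driver.get(TARGET_URL)
        assert "Login" in driver.title         # same assertion the functional suite uses
    finally:
        driver.quit()


if __name__ == "__main__":
    run_functional_flow()
```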
---
Page: 6 / 8
---
Figure 3: Integrating DAST in the CI/CD pipeline using ZAP
---
Page: 7 / 8
---
DevSecOps with DAST in the cloud
Integrating DAST with cloud CI/CD requires a different approach.
Approach:
• Identify, leverage, and integrate cloud-native CI/CD services, continuous logging and monitoring services, auditing and governance services, as well as operations services with regular CI/CD tools – mainly DAST
• Control all CI/CD jobs with a server-and-slave architecture, using containers such as Docker along with cloud orchestration tools to build and deploy applications
An effective DAST DevSecOps in cloud architecture appears as shown in Figure 4.
Best practices
• Control access to pipeline resources using identity and access management (IAM) roles and security policies
• Always encrypt data in transit and at rest
• Store sensitive information, such as API tokens and passwords, in a secrets manager
Key steps
1. The user commits the code to a code repository
2. The tool builds artifacts and uploads them to the artifact library
3. Integrated tools help perform the SCA and SAST tests
4. Reports of critical/high-failure vulnerabilities from the SCA and SAST scans go to the security dashboard for fixing
5. Code deployment to the staging environment takes place if reports indicate no vulnerabilities, or only ones marked as ignorable
6. Successful deployment triggers a DAST tool, such as OWASP ZAP, for scanning
7. The user repeats steps 4 to 6 in the event of a vulnerability detection
8. If no vulnerabilities are reported, the workflow triggers an approval email
9. Receipt of approval schedules automatic deployment to production
Figure 4: DAST DevSecOps in Cloud Workflow
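To illustrate the best practice of keeping API tokens and passwords out of pipeline code, the sketch below pulls a credential from a cloud secrets manager at job run time. It assumes an AWS-style set-up with boto3 and an invented secret name; other clouds offer equivalent services, and the IAM role attached to the CI runner must be allowed to read the secret.

```python
"""Illustrative pipeline step: fetch a credential from a cloud secrets
manager instead of hard-coding it. Assumes an AWS-style environment
(boto3, Secrets Manager) and a placeholder secret name."""
import json
import boto3

SECRET_NAME = "ci/dast/target-app-credentials"   # placeholder identifier
REGION = "us-east-1"                             # placeholder region


def fetch_credentials(secret_name: str = SECRET_NAME) -> dict:
    """Return the secret payload as a dict; the CI runner's IAM role must
    be granted secretsmanager:GetSecretValue on this secret."""
    client = boto3.client("secretsmanager", region_name=REGION)
    response = client.get_secret_value(SecretId=secret_name)
    return json.loads(response["SecretString"])


if __name__ == "__main__":
    creds = fetch_credentials()
    # The scan step can now authenticate without secrets in the repo or in logs.
    print("Fetched credentials for user:", creds.get("username", "<unknown>"))
```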
---
Page: 8 / 8
---
© 2023 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
About the authors
Kedar J Mankar
Kedar J Mankar is a global delivery lead for Cyber Security testing at Infosys. He has extensive experience across different software testing types and has led large delivery and transformation programs for global Fortune 500 customers, delivering value through different COEs with innovation at the core. He has experience working with, and leading, teams in functional, data, automation, DevOps, performance, and security testing across multiple geographies and verticals.
Amlan Sahoo
Amlan Sahoo has over 27 years of experience in the IT industry in application development and testing. He is currently the head of the Cyber Security testing division. He has a proven track record in managing and leading transformation programs with large teams for Fortune 50 clients, managing deliveries across multiple geographies and verticals. He also has four IEEE and one IASTED publications to his credit on bringing efficiencies to heterogeneous software architectures.
Vamsi Kishore
Vamsi Kishore Sukla is a security consultant with over 8 years of professional experience in the security field, specializing in application security testing, cloud security testing, and network vulnerability assessments following OWASP standards and CIS benchmarks. With a deep understanding of the latest security trends and tools, he provides comprehensive security solutions to ensure the safety and integrity of the organization and its clients.
Conclusion
DevOps is becoming a reality much faster than we anticipate. However, there should be no compromise on security testing, in order to avoid delayed deployments and the risk of releasing software with security vulnerabilities. Successful DevSecOps requires integrating security at every stage of DevOps, enabling DevOps teams on security characteristics, strengthening the partnership between DevOps teams and security SMEs, automating security testing to the extent possible, and shifting security left for early feedback. By leveraging the best practices recommended in this paper, organizations can achieve more secure releases and accelerate them by as much as 15%, both on-premises and in the cloud.
References
1. https://www.cobalt.io/blog/cybersecurity-statistics-2023
2. https://cybersecurityventures.com/boardroom-cybersecurity-report/
3. https://strongdm.com/blog/devsecops-statistics
***
|
# Infosys Whitepaper
Title: Data Archival Testing
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 4
---
WHITE PAPER DATA ARCHIVAL TESTING Abstract Today, there is an exponential rise in the amount of data being generated by organizations. This explosion of data increases IT infrastructure needs and has an immense impact on some important business decisions that are dependent on proficient data analytics. These challenges have made data archival extremely important from a data management perspective. Data archival testing is becoming increasingly important for businesses as it helps address these challenges, validate the accuracy and quality of archived data and improve the performance of related applications. The paper is aimed at helping readers better understand the space of data archival testing, its implementation and the associated benefits.<|endoftext|>
---
Page: 2 / 4
---
Introduction
One of the most important aspects of managing a business today is managing its data growth. For most organizations, the cost of data management outpaces data storage costs. Operational analytics and business intelligence reporting usually require active operational data. Data that does not have any current requirement or usage, known as inactive data, can be archived to safe and secure storage. Data archiving becomes important for companies that want to manage their data growth without compromising on the quality of data that resides in their production systems.
Many CIOs and CTOs are reworking their data retention policies and their data archival and data retrieval strategies because of the increased demand for data storage, reduced application performance, and the need to be compliant with ever-changing legislation and regulations.1
Data Archival Testing – Test Planning
Data archival is the process of moving data that is not required for operational, analytical, or reporting purposes to offline storage. A data retrieval mechanism is developed to restore data from the offline storage. The common challenges faced during data archival are:
• Inaccurate or irrelevant data in data archives
• Difficulty in retrieving data from the data archives
Data archival testing helps address these challenges. While devising the data archival test plan, the following factors need to be taken into consideration:
Data Dependencies
There are many intricate data dependencies in an enterprise's architecture. The data which is archived should include the complete business objects along with the metadata that helps retain the referential integrity of data across related tables and applications. Data archival testing needs to validate that all related data is archived together for easy interpretation, during storage and retrieval.
Data Encoding
The encoding of data in the archival database depends on the underlying hardware for certain types of data. For instance, data archival testing needs to ensure that, for numerical fields such as integers, the related hardware encoding information is archived as well, so that the data can be easily retrieved and displayed in the future on a different set of hardware.
Data Retrieval
Data needs to be retrieved from archives for regulatory, legal, and business needs. Validation of the data retrieval process ensures that the archived data is easily accessed, retrieved, and displayed in a format which can be clearly interpreted without any time-consuming manual intervention.
Data Archival Testing – Implementation
The data archival testing process includes validating the processes for data archival, data deletion, and data retrieval. Figure 1 below describes the different stages of a data archival testing process, the business drivers, the different types of data that can be archived, and the various offline storage modes.
Figure 1: The Data Archival Testing Process
External Document © 2018 Infosys Limited
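As a simple illustration of separating inactive data for archival, the sketch below copies orders older than a retention cutoff, together with their dependent line items, from operational tables into archive tables and then removes them from the operational store. The table names, columns, and the roughly two-year cutoff are invented for the example; they are not taken from this paper.

```python
"""Illustrative archival pass: move inactive records (and their dependent
rows) from operational tables to archive tables. Schema, table names and
the retention window are assumptions made for this sketch; the four
tables are expected to exist already."""
import sqlite3
from datetime import datetime, timedelta

RETENTION = timedelta(days=730)   # assume rows untouched for ~2 years are inactive


def archive_inactive_orders(conn: sqlite3.Connection) -> int:
    """Copy inactive orders plus their line items into archive tables,
    keeping the complete business object together, then delete the originals."""
    cutoff = (datetime.utcnow() - RETENTION).strftime("%Y-%m-%d")
    cur = conn.cursor()
    # Archive parent rows and their dependents so referential integrity
    # is preserved inside the archive.
    cur.execute(
        "INSERT INTO orders_archive SELECT * FROM orders WHERE last_activity < ?",
        (cutoff,),
    )
    cur.execute(
        "INSERT INTO order_items_archive "
        "SELECT i.* FROM order_items i "
        "JOIN orders o ON o.order_id = i.order_id WHERE o.last_activity < ?",
        (cutoff,),
    )
    # Delete dependents before parents in the operational store.
    cur.execute(
        "DELETE FROM order_items WHERE order_id IN "
        "(SELECT order_id FROM orders WHERE last_activity < ?)",
        (cutoff,),
    )
    cur.execute("DELETE FROM orders WHERE last_activity < ?", (cutoff,))
    conn.commit()
    return cur.rowcount   # rows removed from 'orders' by the final statement
```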
---
Page: 3 / 4
---
1. Test the Data Archival Process
• Testing the data archival process ensures that the business entities that are archived include master data, transaction data, metadata, and reference data
• It validates the storage mechanism and that the archived data is stored in the correct format; the data also has to be tested for hardware independence
2. Test the Data Deletion Process
• Inactive data needs to be archived and moved to secure storage for retrieval at a later point, and then deleted from all active applications using it. This validation process verifies that the data deletion has not caused any error in any existing applications and dashboards
• When archived data is deleted from systems, verify that the applications and reports conform to their performance requirements
3. Test the Data Retrieval Process
• Data that has been archived needs to be easily identified and accessible in case of any legal or business needs
• For scenarios that involve urgent data retrievals, the retrieval processes need to be validated against a defined time period
Benefits of Data Archival Testing
The benefits of data archival testing are often interrelated and have a significant impact on the IT infrastructure costs of a business. Some of the benefits are:
• Reduced storage costs – Only the data that is relevant gets archived, and only for a defined time period, which reduces hardware and maintenance costs significantly
• Improved application performance – Data is retrieved faster and the network performs better as only relevant data is present in the production environment; all these factors enhance application performance
• Minimized business outages – Archived data that is deleted from production systems does not have an impact on the related applications' performance and functionality, leading to smooth business operations
• Data compliance – Easy retrieval and availability of archived data ensures higher compliance with legal and regulatory requirements
Accomplishing all these benefits determines the success of a data archival test strategy.
Conclusion
Due to the critical business need for data retention, regulatory and compliance requirements, and a cost-effective way to access archived data, many businesses have started realizing the value of and adopting data archival testing. Therefore, an organization's comprehensive test strategy needs to include a data archival test strategy which facilitates smooth business operations, ensures fulfillment of all data requirements, maintains data quality, and reduces infrastructure costs.
External Document © 2018 Infosys Limited
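One way to automate part of the archival and deletion validation described above is a reconciliation check: confirm that inactive keys all landed in the archive store, that none linger in production, and that nothing exists in both. The sketch below reuses the illustrative order tables from the earlier example; it is a validation aid under those assumptions, not a complete test suite.

```python
"""Illustrative reconciliation check for data archival testing: verifies
that archived keys landed in the archive table and were removed from
production. Table and column names are assumptions for this sketch."""
import sqlite3


def reconcile_archival(conn: sqlite3.Connection, cutoff: str) -> dict:
    """Return counts that a test can assert on."""
    cur = conn.cursor()
    # Rows that were archived (older than the retention cutoff).
    cur.execute("SELECT COUNT(*) FROM orders_archive WHERE last_activity < ?", (cutoff,))
    archived = cur.fetchone()[0]
    # Inactive rows that still linger in the production table.
    cur.execute("SELECT COUNT(*) FROM orders WHERE last_activity < ?", (cutoff,))
    leftover = cur.fetchone()[0]
    # Keys present in both stores (should be zero after deletion).
    cur.execute(
        "SELECT COUNT(*) FROM orders o JOIN orders_archive a ON a.order_id = o.order_id"
    )
    duplicated = cur.fetchone()[0]
    return {"archived": archived, "leftover": leftover, "duplicated": duplicated}


def test_archival_process(conn: sqlite3.Connection, cutoff: str) -> None:
    results = reconcile_archival(conn, cutoff)
    assert results["archived"] > 0, "nothing was archived"
    assert results["leftover"] == 0, "inactive rows remain in production"
    assert results["duplicated"] == 0, "archived rows were not deleted from production"
```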
---
Page: 4 / 4
---
About the Author
Naju D. Mohan
Naju is a Group Project Manager with Infosys with about 15 years of IT experience. She is currently managing specialized testing services like SOA testing, Data Warehouse testing and Test Data Management for many leading clients in the retail sector.
REFERENCES
1. 'Data overload puts UK retail sector under pressure', Continuity Central, February 2009
2. 'Data Archiving, Purging and Retrieval Methods for Enterprises', Database Journal, January 2011
© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
***
|
# Infosys POV
Title: Data Imperatives in IT MA&D in Life Sciences Industry
Author: Infosys Consulting
Format: PDF 1.7
---
Page: 1 / 10
---
An Infosys Consulting Perspective Consulting@Infosys.com | InfosysConsultingInsights.com DATA IMPERATIVES IN IT MA&D IN LIFE SCIENCES INDUSTRY
---
Page: 2 / 10
---
FOREWORD
Larger macroeconomic headwinds (first the pandemic, then rising interest rates and now recessionary fears) are pushing organizations to resort to mergers, acquisitions, and divestitures (MA&D) as a strategic lever to achieve higher market share, acquire new capabilities, and/or refocus strategy on the core business to improve financial performance. The average annual global MA&D value was approximately $3.6 trillion in the 2011-20 cycle and increased to $5.9 trillion in 2021, highlighting the growing importance of MA&D in meeting future business needs. The life sciences industry is increasingly looking at MA&Ds to acquire new specialty/generic drug lines (and related market pipelines) in pharmaceuticals, specialized capabilities in the diagnostics and digital health sectors, niche research and development capabilities for effective drug discovery around "specialty drugs", and patented IP data around experimental drugs.
There is growing emphasis on antitrust regulations, regulatory reporting, disclosure requirements, and overall deal approval processes. Compliance with these directly relates to the way entity data is managed (before and after the MA&D transaction). Multiple data types, including financial, operational, people, supplier, and customer data, come into remit. This requires organizations to carefully design and execute their data strategy. Multiple examples from the industry show that, despite the growing importance of data strategy in MA&D transactions, just 24% of organizations included CIOs in pre-merger planning4. Abbott Laboratories' acquisition of Alere was delayed due to regulatory concerns around market concentration1 and anti-competition2, Pfizer and Allergan terminated their planned merger due to a change in treasury rules that made the tax benefits less attractive3, and there are many more such cases.
Data strategy design and execution start with the definition of business metrics and alignment on a value measurement approach. After the metrics are defined and accepted, linkage to source systems, standardization of data element definitions, and management of metadata along with master data ownership are key to accurately measuring and interpreting these metrics. Data qualification, especially in regulated industries, is critical to understanding and managing qualified (GxP) data and the related platforms and applications involved. Finally, a performance-oriented and scalable data integration methodology, followed by an overarching process and governance mechanism, is necessary for ensuring ongoing quality and compliance.
A poorly designed data strategy and execution often leads to an ambiguous understanding of key metrics and underlying data elements, incongruent data standards, and unclear ownership - resulting in faulty data integration, inaccurate transaction records, and ultimately unreliable insights and legal complications. An effective way to overcome these pitfalls is to define a robust data design and execution strategy covering key elements that address the distinctive needs of the life sciences industry.
Data Imperatives in IT MA&D in Life Sciences Industry | © 2023 Infosys Consulting
---
Page: 3 / 10
---
Introduction
Mergers, acquisitions, and divestitures (MA&D) are strategic channels for growth. Multiple benefits can be achieved through an effective MA&D transaction, including exponential growth, entry to new markets, optimized cost savings, and improved competitiveness.
The life sciences industry has been experiencing rapid growth and transformation in recent years, fueled by innovations in R&D, regulatory changes, and technological advancements in the provider and payer domains. MA&D transactions have become a vital strategic tool for organizations to expand their portfolios, access new markets, and improve their competitive positions. Given the complex nature of MA&D transactions, they require comprehensive due diligence and planning before, during, and after the transaction. Data, being the fundamental building block of any organization, is a critical factor in this due diligence and planning. It is also one of the most commonly overlooked factors. In this article, we highlight key elements of data strategy and design within a MA&D transaction and typical pitfalls, along with ways to overcome them.
MA&D transactions in the life sciences industry are increasingly subject to higher scrutiny from regulatory bodies to ensure greater transparency and better shareholder and consumer protection. There are three key regulation types in place:
1. Greater financial and operational transparency:
A. India – Foreign Exchange Management Act (FEMA), SEBI Laws
B. USA – Securities Act, Securities Exchange Act
C. Europe – European Union Merger Law
2. Better intellectual property protection: patents, trademarks, copyrights, trade secrets, designs, data protection.
3. Higher fair play and consumer protection:
A. India – Competition Act
B. USA – Federal Antitrust Laws
C. Europe – Competition Law
According to an analyst report, the average MA&D failure rate is ~70%4. A key reason for this high failure rate is the difficulty in integrating the two entities,5 especially with respect to culture, operational ways of working, revenue recognition, and performance incentives. All these aspects are directly impacted by the way data is designed and managed. Despite this importance, just 24% of organizations included CIOs in pre-merger planning3. Effective data management is key to adhering to these regulatory requirements and ensuring that data is properly collected, analyzed, and reported throughout the transaction process.
Data Imperatives in IT MA&D in Life Sciences Industry | © 2023 Infosys Consulting
---
Page: 4 / 10
---
MA&Ds are fundamentally complex transactions that impact business entities, systems, processes, and data of the organizations involved. There are eight elements which underpin data strategy and execution.<|endoftext|>Fig 1 – Key elements of data strategy within a MA&D transaction. 1. Business metrics and measurement: Defining metrics to evaluate the performance of the target entity is critical. It is important that all entities involved in the transaction clearly define and agree upon the metrics which define success; noteworthy metrics in the life sciences industry include clinical trial outcomes, regulatory approval timelines, molecule discovery rates, drug pipeline progress and GxP compliance metrics. These metrics articulate objectives and key results of the target entity. Agreement on accurate metrics and their measurement logic improves operational and financial transparencies, thereby promoting adoption of the integration / divestiture decision.<|endoftext|>2. Data policies and standards: It is essential to establish a common set of data standards and policies to maintain data assets in the target environment. This involves defining standard data formats, structures, and rules for data management and establishing governance policies to ensure security, privacy, compliance, and protection of data. In a merger or a divestiture scenario, data policies for resulting entities are driven by target business needs and operational requirements.<|endoftext|>Key elements of data strategy and execution 4 Data Imperatives in IT MA&D in Life Sciences Industry | © 2023 Infosys Consulting
---
Page: 5 / 10
---
3. Metadata management: Metadata helps to classify, manage, and interpret master data.<|endoftext|>Managing metadata is essential in ensuring standardization of data elements across systems e.g., customer ID, distribution channel codes, clinical trial identifiers, drug classification codes, etc.<|endoftext|>Effective metadata management promotes improved data consistency, better data quality, governance, compliance, and security. Like data policies and standards, metadata management standards are driven by target business needs and operational requirements.<|endoftext|>4. Master data management: Data ownership is crucial in MA&Ds because it determines necessary accountabilities and responsibilities towards maintaining the data assets.<|endoftext|>Defining master data ownership during the pre/post-close phases is critical in ensuring smooth transition to integrated operations6. All parties involved must align on a clear ownership on gaining access, maintaining, and governing the master data assets after the transaction. Establishing data stewardship roles and processes to maintain master data is essential to avoid pitfalls such as delays in integration, legal disputes, and potential regulatory penalties. In addition, clear data ownership contributes to better intellectual property protection in a MA&D transaction. This ownership also means managing data at a product level with a promise of a required level of data quality, making it easier for users to extract valuable insights and intelligence.<|endoftext|>5. Data lineage management: MA&D transactions create large data assets, which increasingly become interconnected, complex, and challenging to work with. Data lineage tracks flow of data from source to destination, noting any changes in its journey across different systems. This allows for tracing data origins, evaluating data accuracy and pinpointing potential risks, enabling risk management, thus elevating probability of success of MA&D transactions.<|endoftext|>6. Data qualification: A crucial element for consideration is qualification of data into GxP and non-GxP. GxP data is subject to stringent regulations, while non-GxP data has fewer regulatory constraints. Proper data qualification enables organizations to manage GxP data in compliance with regulatory guidelines and handle non-GxP data as appropriate for its intended use. This helps in adoption of efficient data management processes especially from an extract, transform and load perspective. It also emphasizes relevance of systems that will hold the regulatory |
Continue # Infosys POV
data thus ensuring required controls in place when interacting with such systems.<|endoftext|>5 Data Imperatives in IT MA&D in Life Sciences Industry | © 2023 Infosys Consulting
---
Page: 6 / 10
---
7. Data integration: Effective integration of data across systems such as clinical trial databases, product development pipelines, and sales and marketing platforms into a single, unified environment is critical for the new entity to make effective decisions. Integration of data requires a consistent understanding of the data and minimization of data redundancies. This helps the new entity gain a better and more accurate understanding of its business and operational data, thereby expediting the envisioned synergy realization. It also increases operational efficiency by streamlining internal processes and reducing duplication of effort, thereby improving the risk profile. Effective data integration is essential for achieving information protection and transparency in a MA&D transaction.
8. Data governance: Data governance is a crucial element for managing "data at rest" and "data in motion". Robust data governance establishes policies, processes, and controls to manage data throughout its life cycle. An effective data governance framework ensures that both "data in motion" and "data at rest" are adequately protected while tracking data health in near real time, thereby fostering trust with regulators, customers, and partners.
Common pitfalls in a MA&D and ways to overcome them
Data design and execution to support an integration/divestiture transaction is often complicated and stressful. However, with the right interventions, organizations can navigate around these complications. An ineffective data strategy can have far-reaching consequences, such as reduced financial and operational transparency, compromised intellectual property protection, decreased fair play among the entities involved, and weakened consumer protection. We have identified six common pitfalls and their impact.
Fig 2 – Critical elements of pitfalls in a MA&D transaction
Data Imperatives in IT MA&D in Life Sciences Industry | © 2023 Infosys Consulting
---
Page: 7 / 10
---
1. Data ownership: One of the most common pitfalls in MA&D transactions is limited clarity around ownership of data assets in the target state. The issue is particularly pronounced when the organizations involved have multiple focus areas with data stored in a single system but without proper segregation and ownership. For example, an organization may have three focus areas such as BioSimilars, BioPharma, and Med Devices. Data on these focus areas may be stored in one system but not segregated by focus area. MA&D in any one of these areas will then pose a significant challenge in terms of data segregation and dependency identification. Ambiguities regarding data asset ownership often lead to intellectual property disputes, faulty data integration, and challenges in extracting data specific to a new entity. To avoid such confusion, it is essential to establish data ownership early in the transaction and assign data stewards to manage data at rest as well as in motion.
2. Business metrics: Organizations involved in a MA&D transaction may prioritize select GxP and non-GxP metrics based on their distinctive strategic objectives and market priorities. For example, in a MA&D involving a generic and a specialty drug maker, the generic drug maker might emphasize GxP metrics such as manufacturing quality and regulatory submission timelines, as well as non-GxP metrics such as market share and cost efficiency. On the other hand, the specialty drug maker might focus on GxP metrics such as clinical trial data quality and patient safety, and non-GxP metrics such as R&D pipeline growth and innovative therapy development. Given these diverse priorities, establishing common performance criteria for the new entity might be a challenge. Moreover, a lack of uniformity in the underlying logic for measuring performance may further exacerbate the issue. Organizations must establish uniform metrics and underlying measurement criteria that are reflective of the strategic priorities of the target entity.
3. Data standards: Organizations also face roadblocks when they fail to establish common definitions for data elements. The resulting inconsistency in data standards increases the risk of inaccurate transaction records. Such inaccuracies can impair decision-making during critical stages of the MA&D and might even jeopardize the overall success of the transaction. Creating a unified data dictionary and standardizing data definitions across all entities involved is essential to mitigate such risks.
4. Data lineage: A common pitfall is the replication of source data elements across multiple source systems. Replication of data elements in multiple systems increases the complexity of managing data and leads to additional synchronization overheads. Establishing standardized data lineage practices, along with synchronized replication processes through automated tools, is key to increasing data congruency.
Data Imperatives in IT MA&D in Life Sciences Industry | © 2023 Infosys Consulting
---
Page: 8 / 10
---
5. Data governance: Another common challenge encountered during MA&Ds arises from ineffective and inconsistent data governance processes. Inconsistent data governance processes decrease the accuracy of the inferences and insights which can be derived from datasets. A consistent data governance process ensures data protection and regulatory compliance.
6. Knowledge management: Heavy reliance on individuals makes knowledge retention vulnerable to personnel changes. To overcome this challenge, organizations must develop a knowledge management capability that is not solely dependent on people but is facilitated through a set of processes and tools. A robust knowledge management capability enables effective and efficient use of data during the transaction.
A well-designed data strategy is complemented by an effective execution plan. By proactively identifying potential challenges and implementing mitigating solutions, organizations can effectively navigate through the complexities and maximize value realization from a MA&D transaction. Effective data strategy and execution can safeguard the success of the transaction and ensure that the resulting entity (or entities) operates efficiently and effectively.
About the CIO advisory practice at Infosys Consulting
Over the next five years, CIOs will lead their organizations towards fundamentally new ways of doing business. The CIO Advisory practice at Infosys Consulting is helping organizations all over the world transform their operating model to succeed in the new normal – scaling up digitization and cloud transformation programs, optimizing costs, and accelerating value realization. Our solutions focus on the big-ticket value items on the C-suite agenda, providing a deep link between business and IT to help you lead with influence.
Data Imperatives in IT MA&D in Life Sciences Industry | © 2023 Infosys Consulting
---
Page: 9 / 10
---
MEET THE AUTHORS Inder Neel Dua inder_dua@infosys.com Inder is a Partner with Infosys Consulting and leads the life sciences practice in India. He has enabled large scale programs in the areas of digital transformation, process re-engineering and managed services.<|endoftext|>Anurag Sehgal anurag.sehgal@infosys.com Anurag is an Associate Partner with Infosys Consulting and leads the CIO advisory practice in India. He has enabled large and medium scale clients to deliver sustainable results from multiple IT transformation initiatives.<|endoftext|>Ayan Saha ayan.saha@infosys.com Ayan is a Principal with the CIO advisory practice in Infosys Consulting. He has helped clients on business transformation initiatives focusing on IT M&A including operating model transformation.<|endoftext|>Manu A R manu.ramaswamy@infosys.com Manu is a Senior Consultant with CIO advisory practice in Infosys Consulting.<|endoftext|>He has assisted clients on technology transformation initiatives in the areas of IT M&A and cloud transformation.<|endoftext|>Sambit Choudhury sambit.choudhury@infosys.com Sambit is a Senior Consultant with the CIO advisory practice in Infosys Consulting. His primary focus areas include enterprise transformation with IT M&A as a lever. He has helped clients in areas of IT due diligence, integration, and divestitures.<|endoftext|>1 FTC Requires Abbott Laboratories to Divest Two Types of Point-Of-Care Medical Testing Devices as Condition of Acquiring Alere Inc.<|endoftext|>2 EU clears Abbott acquisition of Alere subject to divestments | Reuters 3 Pfizer formally abandons $160bn Allergan deal after US tax inversion clampdown | Pharmaceuticals industry | The Guardian 4 Why, and when, CIOs deserve a seat at the M&A negotiating table | CIO 4 The New M&A Playbook - Article - Faculty & Research - Harvard Business School (hbs.edu) 5 Don’t Make This Common M&A Mistake (hbr.org) 6 6 ways to improve data management and interim operational reporting during an M&A transaction 9 Data Imperatives in IT MA&D in Life Sciences Industry | © 2023 Infosys Consulting
---
Page: 10 / 10
---
consulting@Infosys.com InfosysConsultingInsights.com LinkedIn: /company/infosysconsulting Twitter: @infosysconsltng
About Infosys Consulting
Infosys Consulting is a global management consulting firm helping some of the world's most recognizable brands transform and innovate. Our consultants are industry experts who lead complex change agendas driven by disruptive technology. With offices in 20 countries and backed by the power of the global Infosys brand, our teams help the C-suite navigate today's digital landscape to win market share and create shareholder value for lasting competitive advantage. To see our ideas in action, or to join a new type of |
Continue # Infosys POV
consulting firm, visit us at www.InfosysConsultingInsights.com. For more information, contact consulting@infosys.com © 2023 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names, and other such intellectual property rights mentioned in this document. Except as expressly permitted, neither this document nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printed, photocopied, recorded or otherwise, without the prior permission of Infosys Limited and/or any named intellectual property rights holders under this document.
***
|
# Infosys Whitepaper
Title: Need for data masking in a data-centric world
Author: Infosys Limited
Format: PDF 1.6
---
Page: 1 / 4
---
VIEW POINT Abstract With data gaining increasing prominence as the foundation of organizational operations and business, ensuring data security is emerging as a main priority. It is critical to safeguard sensitive data and customer privacy, the lack of which can lead to financial and reputational losses. Thus, there is a rising demand to protect personally identifiable information during transfer within organizations as well as across the external ecosystem. This paper highlights the need for data masking solutions. It also explains how customized data masking solutions can be used in today’s data centric world.<|endoftext|>Paromita Shome, Senior Project Manager, Infosys Limited NEED FOR DATA MASKING IN A DATA-CENTRIC WORLD
---
Page: 2 / 4
---
External Document © 2018 Infosys Limited
Introduction
The key differentiator for today's businesses is how they leverage data. Thus, ensuring data security is of utmost importance, particularly for organizations that deal with sensitive data. However, this can be challenging because data that is marked critical and sensitive often needs to be accessed by different departments within an organization. Without a well-defined enterprise-wide data access management strategy, securing data transfer can be difficult. The failure to properly control handling of sensitive information can lead to dangerous data breaches with far-reaching negative effects. For instance, a 2017 report by the Ponemon Institute titled 'Cost of a Data Breach Study, 2017'1 found that:
• The average consolidated total cost of a data breach is US $3.62 million
• The average size of a data breach (number of records lost or stolen) increased by 1.8% in the past year
• The average cost of a data breach is US $141 per record
• Any incident – either in-house, through a third party or a combination of both – can attract penalties of US $19.30 per record. Thus, for a mere 100,000 records, the cost of a data breach can be as high as US $1.9 million
These statistics indicate that the consequences of data breaches go beyond financial losses. They also affect the organization's reputation, leading to loss of customer and stakeholder trust. Thus, it is imperative for organizations to adopt robust solutions that manage sensitive data to avert reputational damage and financial losses.
External Document © 2019 Infosys Limited
---
Page: 3 / 4
---
External Document © 2018 Infosys Limited
Data masking as a solution
Data masking refers to hiding data such that sensitive information is not revealed. It can be used for various testing or development activities. The most common use cases for data masking are:
• Ensuring compliance with stringent data regulations – Nowadays, there are many emerging protocols that mandate strict security compliance, such as the Health Insurance Portability and Accountability Act (HIPAA) and the General Data Protection Regulation (GDPR). These norms do not allow organizations to transfer personal information such as personally identifiable information (PII), payment card information (PCI) and personal health information (PHI)
• Securely transferring data between project teams – With the increasing popularity of offshore models, project management teams are concerned about how data is shared for execution. For instance, sharing production data raises concerns about the risk of data being misused/mishandled during transition. Thus, project teams need to build an environment that closely mimics production environments and can be used for functionality validation. This requires hiding sensitive information when converting and executing production data
It is important to note that data sensitivity varies across regions. Organizations with global operations are often governed by different laws. Hence, the demand for data security and the potential impact of any breach differ based on the operating regions. Thus, having an overarching data privacy strategy is paramount to ensure that sensitive data remains protected. This calls for a joint data protection strategy that includes vendors in offshore and near-shore models as well.
Types of data masking
There are various masking models or algorithms that can be leveraged to address the above use cases. These ensure data integrity while adhering to masking demands. The most common types are:
• Substitution or random replacement of data with substitute data
• Shuffling or randomizing existing values vertically across a data set/column
• Data encryption by replacing sensitive values with arithmetically formulated data and using an encryption key to view the data
• Deleting the input data for sensitive fields and replacing it with a null value to prevent visibility of the data element
• Replacing the input value with another value from a lookup table
While the above models enable straightforward masking, they cannot be applied to all cases, thus creating the need for customized data masking. Customized data masking uses an indirect masking technique where certain business rules must be adhered to along with encryption, as shown in Fig 1.
In the figure, the source data – WBAPD11040WF70037 – is received from a source system such as an RDBMS or flat files. The business rules state that:
1. There should be no change in the first 10 characters post masking
2. The next 2 letters should be substituted with letters post masking
3. The last 5 numerals should be substituted with numerals only post masking
Fig 1: A technical approach to customized data masking (the source data WBAPD11040WF70037 from the RDBMS/file input is passed through a data splitter into the splits WBAPD11040, WF and 70037; each split is masked using the TDM tool/PLSQL and the masked splits are concatenated into WBAPD11040AG81726 and published to the RDBMS/file output)
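The business rules in Fig 1 translate naturally into a splitter-plus-substitution routine. The sketch below is a minimal, generic illustration of that idea (it is not the TDM tool or PL/SQL implementation referenced in the figure): the first 10 characters pass through unchanged, the next 2 letters are substituted with random letters, the last 5 numerals with random numerals, and the splits are then concatenated.

```python
"""Minimal sketch of the customized masking rules described above: keep the
first 10 characters, substitute the next 2 letters with random letters and
the last 5 numerals with random numerals, then concatenate. This is an
illustration only, not the TDM/PLSQL implementation from Fig 1."""
import random
import string


def mask_identifier(source: str) -> str:
    if len(source) != 17:
        raise ValueError("expected a 17-character identifier, e.g. WBAPD11040WF70037")
    # Split into the three sections defined by the business rules.
    fixed, letters, digits = source[:10], source[10:12], source[12:]
    masked_letters = "".join(random.choice(string.ascii_uppercase) for _ in letters)
    masked_digits = "".join(random.choice(string.digits) for _ in digits)
    # Each split is masked individually and then concatenated, as in Fig 1.
    return fixed + masked_letters + masked_digits


if __name__ == "__main__":
    print(mask_identifier("WBAPD11040WF70037"))   # e.g. WBAPD11040AG81726
```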
---
Page: 4 / 4
---
© 2019 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
As part of customized masking, the source data is passed through a data splitter. The single source data is then split into individual source data based on the business rules, as shown in Fig 1. After this, the individually split data is run through the data masking tool, and the masking algorithm defined in the tool is executed for each item, yielding an output of masked data. In each section, the masking type is selected based on the business rule and then the encryption is applied. The three individual sections of masked data are finally concatenated before being published at the output, which can be a database or a flat file. The masked data for the reference source data now reads as WBAPD11040AG81726. This output data still holds the validity of the source input data, but it is substituted with values that do not exist, in line with the business rules. Hence, it can be utilized in any non-production environment.
Customized masking can be used in various other scenarios, such as:
• To randomly generate a number to check against Luhn's algorithm, where the masked data ensures that the source data lies within the range of Luhn's algorithm
• To check number variance in a range between 'x' and 'y', where the input values are replaced with a random value between the border values and the decimal points are changed
• To check number variance of around +/- a given percentage, where a random percentage value between defined borders is added to the input value
Conclusion
As the demand for safeguarding sensitive data increases, organizations need effective solutions that support data masking capabilities. Two key areas where data masking is of prime importance are ensuring compliance with data regulations and protecting data while it is transferred to different environments during testing. While there are several readily available tools for data masking, some datasets require specialized solutions. Customized data masking tools |
Continue # Infosys Whitepaper
can help organizations hide source data using encryption and business rules, allowing safe transfer while adhering to various global regulatory norms. This not only saves manual effort during testing but averts huge losses through financial penalties and reputational damages arising from data breaches.<|endoftext|>References 1. https://securityintelligence.com
***
|
# Infosys Whitepaper
Title: Infosys Test Automation Accelerator
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 4
---
WHITE PAPER HOW TO ENSURE DATA QUALITY DURING DATA MIGRATION Naju D Mohan, Delivery Manager
---
Page: 2 / 4
---
Introduction
In today's business world, change is the only constant, and changes make their appearances in various forms, some of which are:
• Mergers and acquisitions
• New compliance requirements
• New package implementations
• Migration to new technologies such as the cloud
• Big data programs
Being data driven, the business has to upgrade and keep its intelligence up to date to realize the benefits of these changes. In short, all these changes result in data migrations.
Who is the Primary Owner?
Most of the time, it is assumed that data migration is an IT problem2. All the visible changes and the actions lie with the IT team, so the business moves on, putting the entire burden of data migration management on the IT team. Yet mergers and acquisitions and compliance requirements clearly originate with the business team, as does the decision to implement a CRM, loyalty, or HR package. The need to optimize operating costs and to make intelligent decisions and act in real time leads the business to migrate to the cloud and embark on big data programs. But the onus of migration management often lies with the IT team.
It must be clearly understood that any data migration without the business leading the program has a high rate of failure. Business has to not just care about data migration but command it.
Why Such a High Failure Rate for Data Migration Programs?
According to Gartner, 83% of data migration programs fail to meet expectations, running over time and budget1. Some key reasons for this are:
1. Poor Understanding About Data Migration Complexity
• The focus on data migration is lost in the excitement of the new package implementation, migration to cloud, or big data initiative
• Most often, it is assumed that data fits one-to-one into the new system
• The whole attention is on the implementation of the new business processes, with little or no focus on data migration
2. Lack of Proper Attention to Data
• Lack of data governance and proper tools for data migration can impact the quality of data loaded into the new system
• Mergers and acquisitions can introduce new data sources and diverse data formats
• Huge volumes of data may force teams to overlook whether the data is still relevant for the business
3. Late Identification of Risks
• Poor data quality in the source systems and the lack of documentation or inaccurate data models are identified late in the migration cycle
• Lack of clarity on the job flows and the data integrity relationships across source systems causes data load failures
External Document © 2018 Infosys Limited
---
Page: 3 / 4
---
Is There a Right Validation Strategy for Data Migration?
An innovative data migration test strategy is critical to the success of the change initiatives undertaken by the business. The test strategy should be prepared in close collaboration with the business team, as it is a vital stakeholder that initiated the change resulting in data migration. The two principal components which should be considered as part of the test strategy are:
1. Risk-Based Testing
The data volumes involved in data migration projects emphasize the need for risk-based testing to provide optimum test coverage with the least risk of failure. A master test strategy can be created through proactive analysis with the business and third parties. Tables can be prioritized and bucketed based on business criticality and the sensitivity of the data. A composite key agreed with the business can be used to select sample rows for validation in tables with billions of rows.
2. Data Compliance Testing
It is very important that the quality assurance (QA) team is aware of the business requirements that necessitated the data migration, because the change may have been made to meet new government regulations or compliance requirements. The test strategy must have a separate section to validate the data for meeting all compliance regulations and standards such as Basel II, Sarbanes-Oxley (SOX), etc.
A Test Approach Giving Proper Attention to Data
Data migration, as mentioned earlier, is often a by-product of a major initiative undertaken by the company. So, in a majority of scenarios, there would be an existing application performing the same functionality. It is suitable to adopt a parallel testing approach, which saves the effort spent to understand the system functionality. The testing can be done in parallel with development in sprints, following an agile approach to avoid the risk of failure at the last moment.
1. Metadata Validation
Data migration testing considers information that describes the location of each source, such as the database name, filename, table name, field or column name, and the characteristics of each column, such as its length and type, as part of metadata. Metadata validation must be done before the actual data content is validated, which helps in the early identification of defects that could be repeated across several rows of data.
2. Data Reconciliation
Use automated data comparison techniques and tools for column-to-column data comparison (a minimal sketch follows this section). There could be duplicate data in legacy systems, and it has to be validated that this is merged and exists as a single entity in the migrated system. Sometimes the destination data stores do not support the data types from the source, and hence the storage of data in such columns has to be validated for truncation and precision. There could be new fields in the destination data store, and it has to be validated that these fields are filled with values as per the business rule for the entity.
Benefits
A well thought-out data migration validation strategy helps make the data migration highly predictable and paves the way for a first-time-right release. Regular business involvement helps maintain the testing focus on critical business requirements. A successful implementation of the shift-left approach in the migration test strategy helps identify defects early and save cost.
External Document © 2018 Infosys Limited
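To make the column-to-column reconciliation idea concrete, here is a minimal sketch that samples rows by a composite key, as the risk-based approach above suggests, and compares selected column values between the legacy source and the migrated target. The connection type, table names, key columns, and sample size are invented for the illustration; they are not the client's actual schema.

```python
"""Illustrative source-to-target reconciliation for data migration testing:
sample rows by a composite key and compare column values one-to-one.
Table, key and column names are assumptions made for this sketch."""
import random
import sqlite3

COMPOSITE_KEY = ("store_id", "txn_id")       # agreed with the business (illustrative)
COLUMNS = ("txn_date", "amount", "status")   # columns under comparison (illustrative)
SAMPLE_SIZE = 1000


def fetch_keyed_rows(conn: sqlite3.Connection, table: str, keys: list) -> dict:
    """Return {composite_key: column_values} for the sampled keys."""
    where = " AND ".join(f"{k} = ?" for k in COMPOSITE_KEY)
    query = f"SELECT {', '.join(COLUMNS)} FROM {table} WHERE {where}"
    cur = conn.cursor()
    return {key: cur.execute(query, key).fetchone() for key in keys}


def reconcile(source: sqlite3.Connection, target: sqlite3.Connection) -> list:
    """Compare sampled rows and return the mismatching composite keys."""
    cur = source.cursor()
    all_keys = cur.execute(
        f"SELECT {', '.join(COMPOSITE_KEY)} FROM sales_legacy"
    ).fetchall()
    sample = random.sample(all_keys, min(SAMPLE_SIZE, len(all_keys)))
    src_rows = fetch_keyed_rows(source, "sales_legacy", sample)
    tgt_rows = fetch_keyed_rows(target, "sales_migrated", sample)
    # A missing target row surfaces as None and is reported as a mismatch.
    return [key for key in sample if src_rows[key] != tgt_rows[key]]
```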
---
Page: 4 / 4
---
Case Study: Re-Platforming of an Existing HP Neoview Data Warehouse to Teradata
The Client
One of the largest supermarket chains in the United Kingdom, which offers online shopping, DVD rentals, financial services, and multiple store locations.
The Objectives
• To complete the re-platforming of HP Neoview to Teradata and re-platform the associated services before HP discontinued support for Neoview
• To migrate the existing IT business services currently operating against a Neoview data warehouse onto a Teradata warehouse with minimal disruption
• To improve the performance of the current Ab-Initio ETL batch processes and of the reporting services using Microstrategy, SAS, Pentaho, and Touchpoint
The QA Solution
The validation strategy was devised to ensure that the project delivered a like-for-like 'lift-and-shift'. The project had environmental challenges and dependencies throughout the entire execution cycle. The SIT phase overcame all the challenges by devising strategies that departed from the traditional testing approach in terms of flexibility and agility. The testing team maintained close collaboration with the development and infrastructure teams while maintaining their independent reporting structure. The approach was to maximize defect capture within the constraints placed on test execution.
It was planned to have individual tracks tested independently in a static environment and then to have an end-to-end SIT, where all the applications/tracks are integrated. Testing always focused on migrating key business functions on priority, such as sales transaction management, merchandise and range planning, demand management, inventory management, price and promotion management, etc.
The Benefits
• 15% reduction in effort through automation using in-house tools
• 100% satisfaction in test output through flexibility and transparency in every testing activity, achieved through statistical models to define the acceptance baseline
End Notes
1. Gartner, "Risks and Challenges in Data Migrations and Conversions," February 2009
2. https://www.hds.com/go/cost-efficiency/pdf/white-paper-reducing-costs-and-risks-for-data-migrations.pdf
© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document |
Continue # Infosys Whitepaper
. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
***
|
# Infosys Whitepaper
Title: End-to-end test automation – A behavior-driven and tool-agnostic approach
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 8
---
PERSPECTIVE
END-TO-END TEST AUTOMATION – A BEHAVIOR-DRIVEN AND TOOL-AGNOSTIC APPROACH
Anand Avinash Tambey
Product Technical Architect, Infosys
Abstract
In today's fast-changing world, IT is under constant pressure to deliver new applications faster and cheaper. The expectation from the quality assurance (QA) organization is to make sure that all applications are tuned to meet every rising user expectation across devices and locations, and typically at no additional cost. And with an exponential growth in the diversity and number of end users in almost all sectors, requirements fluctuate frequently and are increasingly demanding.
---
Page: 2 / 8
---
Let us discuss these approaches in detail to face the above challenges. How to face these challenges Challenges l Technical complexity l Infrastructure, licensing, and training costs l Late involvement of users l Late involvement of testers Approaches l De-skilling l Using open source stack l Using behavior-driven techniques l Utilizing the testers and users effectively This fast-paced software engineering advancement is also posing challenges to software engineers to build an ecosystem that enables rapid prototyping and design, agile development and testing, and fully automated deployment. For the QA community, this translates to a need to maximize automation across all stages of the software engineering and development life cycle and more importantly, do it in an integrated fashion. Consequently, `extreme automation’ is the new mantra for success. And there is more. According to the 2014 State of DevOps report, high-performing organizations are still deploying code 30 times more frequently, with 50 percent fewer failures than their lower-performing counterparts. High IT performance leads to strong business performance, helping boost productivity, profitability, and market share. The report counts automated testing as one of the top practices correlated with reducing the lead time for changes.<|endoftext|>However, automated testing puts together another set of challenges. The latest technologies stack advocates multiple choices of test automation tools, platform- specific add-ins, and scripting languages. There is no inherent support available for generic, tool-agnostic, and scriptless approach with easy migration from one tool to another. Therefore, a significant investment in training, building expertise, and script development is required to utilize these tools effectively. The cost and associated challenges inadvertently affect the time-to-market, profitability, and productivity although it also creates an opportunity to resolve the issues using a combination of an innovative tool-agnostic approach and latest industry practices such as behavior-driven development (BDD) and behavioral-driven test (BDT).<|endoftext|>Business challenges A persistent need of businesses is to reduce the time between development and deployment. QA needs to evolve and transform to facilitate this. And this transformation requires a paradigm shift from conventional QA in terms of automation achieved in each life cycle stage and across multiple layers of architecture. Technical complexity The technology and platform stack is not limited to traditional desktop and the web for current application portfolios. It extends to multiple OS (platforms), mobile devices, and the newest responsive web applications.<|endoftext|>Infrastructure, licensing, and training costs To test diverse applications, multiple test automation tools need to be procured (license cost), testing environment needs to be set up (infrastructure), and the technical skills of the team need to be brought to speed with training and self- learning / experimentation (efforts). Late involvement of users The end user is not involved in the development process until acceptance testing and is totally unaware of whether the implemented system meets her requirements. There is no direct traceability between the requirements and implemented system features. 
Late involvement of testers
Testing and automation also need to start much earlier in the life cycle (Shift-Left), with agility achieved through the amalgamation of the technical and domain skills of the team as well as the end user. External Document © 2018 Infosys Limited
---
Page: 3 / 8
---
Using an open-source stack
To reduce the cost of commercial tool licenses and infrastructure, utilize open-source tools and platforms.<|endoftext|>De-skilling
Easy modeling of requirements and system behaviors, an accelerated framework, and automated script generators reduce the learning curve and the dependency on expert technical skills.<|endoftext|>Using behavior-driven techniques
Behavior-driven development and testing (BDD and BDT) is a way to reduce the gap between the end user and the actual software being built. It is also called ‘specification by example.’ It uses natural language to describe the ‘desired behavior’ of the system in a common notation that can be understood by domain experts, developers, testers, and the client alike, improving communication. It is a refinement of practices such as test-driven development (TDD) and acceptance test-driven development (ATDD).<|endoftext|>The idea behind this approach is to describe the behaviors of the system being built and tested. The main advantage is that the tests verifying the behaviors reflect the actual business requirements / user stories and generate live documentation of the requirements, that is, successful stories and features as test results. Therefore, the test results generated can be read and understood by a non-technical person, such as a project sponsor, a domain expert, or a business analyst, and the tests can be validated.
Utilizing testers and users effectively
Our accelerated automation approach provides a simple modeling interface for a scriptless experience and thereby utilizes non-technical staff effectively. It introduces the ‘outside-in’ software development methodology along with BDT, which has changed the tester’s role dramatically in recent years, and bridges the communication gap between business and technology. It focuses on implementing and verifying only those behaviors that contribute most directly to the business outcomes.<|endoftext|>Solution approach
Our solution approach is threefold, to resolve the challenges in a holistic way. It applies a behavior-driven testing approach with a tool-agnostic automation framework, while following an integrated test life cycle vision. It ensures that business users and analysts are involved.<|endoftext|>It provides the flexibility of using any chosen tool and helps save cost and effort. It also provides a simple migration path to switch between tools and platforms, if such a case arises. With an integrated test life cycle approach, it ensures seamless communication between multiple stakeholders and leverages industry-standard tools.<|endoftext|>External Document © 2018 Infosys Limited
---
Page: 4 / 8
---
Finally, it introduces automation in the early stages to realize the benefit of Shift-Left.<|endoftext|>Behavior-driven test (BDT)
To facilitate real-time traceability of user stories / requirements and to aid live documentation, we have implemented a BDT approach across multiple Infosys projects, as described below.<|endoftext|>This approach acts as a single point of continuous interaction between the tester and the business users. The testers divide the user story into various scenarios in a feature file.<|endoftext|>
• These scenarios are written in the Gherkin language and include the business situation, the preconditions, the data to be used, and the acceptance criteria.<|endoftext|>
• The end user signs off the features / scenarios. This gives the user control to execute and validate the scenarios with data that suits the user’s needs and to bring up feature reports / dashboards.
• The underlying technical implementation is abstracted from the business user.<|endoftext|>
• The tester creates the underlying test scripts for the scenarios, which could be business-layer test scripts, service test scripts, or UI automated test scripts.<|endoftext|>
• The tool then converts the scenarios into ‘step definitions,’ which act as the binder between the test scripts and the scenarios (illustrated in the sketch below). This ensures that a single point / interface is used to execute any type of test.<|endoftext|>
{{ img-description: BDT workflow from user story to feature files and step definitions (zero distance with the user, Shift-Left with early test automation from day one, reports in the user's language), alongside a Cucumber-JVM Jenkins report plugin screenshot showing feature and scenario pass / fail statistics for an ‘Account holder withdraws cash’ build }} |
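To make the scenario-to-step-definition binding concrete, the following is a minimal, illustrative Cucumber-JVM sketch rather than the Infosys accelerator itself. It assumes a recent Cucumber-JVM dependency (io.cucumber packages with Cucumber expressions) and JUnit 4 assertions, and it exercises a simple in-memory balance in place of a real AUT call; the Gherkin scenario it binds to is shown in the comment.

```java
// Gherkin scenario assumed to live in a feature file signed off by the end user:
//
//   Feature: Account holder withdraws cash
//     Scenario: Successful withdrawal within balance
//       Given the account balance is 100
//       When the account holder withdraws 40
//       Then the remaining balance should be 60

import io.cucumber.java.en.Given;
import io.cucumber.java.en.When;
import io.cucumber.java.en.Then;
import static org.junit.Assert.assertEquals;

public class WithdrawCashSteps {

    private int balance;

    @Given("the account balance is {int}")
    public void theAccountBalanceIs(int amount) {
        balance = amount;                 // arrange: seed the precondition data from the scenario
    }

    @When("the account holder withdraws {int}")
    public void theAccountHolderWithdraws(int amount) {
        balance -= amount;                // act: a real project would call the AUT / service layer here
    }

    @Then("the remaining balance should be {int}")
    public void theRemainingBalanceShouldBe(int expected) {
        assertEquals(expected, balance);  // assert: the acceptance criterion stated in the scenario
    }
}
```

In a real project, the @When step would invoke the business-layer, service, or UI automation script mentioned above, so that the same feature file can drive any type of test through a single interface.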
Continue # Infosys Whitepaper
External Document © 2018 Infosys Limited
---
Page: 5 / 8
---
Tool- / platform-agnostic solution (BDD release)
Tool-agnostic approach
To remove dependencies on technical skills, tools, and platforms, our solution proposes modeling system behaviors in a generic, English-like language via an intuitive user interface. This model is agnostic to any specific tool or UI platform and is reusable.<|endoftext|>The model is translated into a widely acknowledged format, XML, which acts as the input to generate automation scripts for specific tools and platforms (a simple sketch of this translation follows below). Finally, it integrates with a continuous integration platform to achieve end-to-end automation of the build-test-deploy cycle.<|endoftext|>{{ img-description: BDD modeling UI for a scriptless, tool- / platform-agnostic implementation: the business analyst / end user and tester feed user stories, features, fields, and step definitions into a script generator targeting QTP, Selenium, SAHI, and Protractor, with execution against the AUT wired into a continuous integration build-test-deploy pipeline }} External Document © 2018 Infosys Limited
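As an illustration only (the actual accelerator, its XML schema, and its generators are Infosys-internal), the sketch below shows the general idea: a small, tool-neutral XML behavior model is parsed with the standard JDK DOM API and translated into tool-specific automation statements, here emitted as Selenium WebDriver (Java) and Protractor (JavaScript) code strings. The element and attribute names are assumptions, not the real model format.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.*;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

/** Sketch of a tool-agnostic script generator: a tool-neutral XML behavior model in,
    tool-specific automation statements out. Schema and emitted calls are illustrative. */
public class ScriptGenerator {

    // Hypothetical behavior model captured from the BDD modeling UI.
    private static final String MODEL =
        "<feature name='Login'>" +
        "  <step action='enter' target='username' value='jdoe'/>" +
        "  <step action='enter' target='password' value='secret'/>" +
        "  <step action='click' target='loginButton'/>" +
        "</feature>";

    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(MODEL.getBytes(StandardCharsets.UTF_8)));
        NodeList steps = doc.getElementsByTagName("step");
        for (String tool : new String[] {"selenium", "protractor"}) {
            System.out.println("// " + tool + " script");
            for (int i = 0; i < steps.getLength(); i++) {
                Element step = (Element) steps.item(i);
                System.out.println(translate(tool, step.getAttribute("action"),
                        step.getAttribute("target"), step.getAttribute("value")));
            }
        }
    }

    /** Map one abstract step onto the syntax of the chosen target tool. */
    static String translate(String tool, String action, String target, String value) {
        switch (tool) {
            case "selenium":   // Selenium WebDriver (Java) style output
                return action.equals("click")
                    ? "driver.findElement(By.id(\"" + target + "\")).click();"
                    : "driver.findElement(By.id(\"" + target + "\")).sendKeys(\"" + value + "\");";
            case "protractor": // Protractor (JavaScript) style output
                return action.equals("click")
                    ? "element(by.id('" + target + "')).click();"
                    : "element(by.id('" + target + "')).sendKeys('" + value + "');";
            default:
                throw new IllegalArgumentException("Unsupported tool: " + tool);
        }
    }
}
```

Because the model is the single source of truth, switching tools only requires a new translation mapping rather than rewriting the scripts themselves, which is the migration benefit described above.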
---
Page: 6 / 8
---
Integrated test life cycle
The role of the traditional tester and end users is changing in the era of DevOps and Shift-Left. The integrated-solution approach enables a larger stakeholder base to contribute towards the quality of the system under development. It also ensures stakeholder satisfaction via early validation and early feedback on system progress while seamlessly using industry-standard toolsets.<|endoftext|>Benefits
Reduced time-to-market
• Shift-Left, early automation, and early life cycle validation
• Single-click generation and execution of automated scripts
Reduced cost
• 40–60 percent reduction in effort for automated test case generation over manual testing
• Detailed error reporting reduces defect reporting effort considerably
• Easy maintenance of requirements, stories, features, and the automated test suite
• No additional cost involved in building integration components for test management tools (HP ALM)
• The agnostic approach works for a broad range of applications irrespective of tools, technology, and platform
Improved quality
• Enhanced business user participation and satisfaction due to the live documentation of features and user stories, available at their fingertips
• Developer, tester, and client collaboration enabled by a common language
• High defect detection rates (95–99 percent) due to high test coverage
{{ img-description: test life cycle management view covering requirement analysis, test design, test execution, and test reporting, with the BA and tester using a BDD & UML modeling tool and a tool-agnostic, platform-agnostic automation accelerator that generates requirements, features, automation scripts, and test reports, integrated with popular tools such as HP ALM/QC, IBM RQM, JIRA, and Jenkins }} External Document © 2018 Infosys Limited
---
Page: 7 / 8
---
Conclusion
Our solution approach is the first step towards reducing the complexity of test automation and making it more useful for the end user by providing early and continuous feedback on the incremental system development. Moreover, it advances automation at every level to achieve rapid development and faster time-to-market objectives.<|endoftext|>With the advent of multiple technologies and high-end devices knocking at the door, using a tool- and platform-agnostic approach will increase overall productivity while reducing the cost of ownership.
References
https://puppetlabs.com/sites/default/files/2014-state-of-devops-report.pdf
http://www.ibm.com/developerworks/library/a-automating-ria/
http://guide.agilealliance.org/guide/bdd.html
http://www.infosysblogs.com/testing-services/2015/08/extreme_automation_the_need_fo.html
External Document © 2018 Infosys Limited
---
Page: 8 / 8
---
© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
***
|
# Infosys Whitepaper
Title: Enhancing quality assurance and testing procedures
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 8
---
VIEW POINT ENHANCING QUALITY ASSURANCE AND TESTING PROCEDURES - Mayank Jain Principal Consultant
---
Page: 2 / 8
---
External Document © 2018 Infosys Limited Introduction
In today’s world, most large and mid-size organizations have opted to centralize their software quality assurance (QA) and testing functions. If you are part of such a dedicated QA and testing team and are looking to learn more about the latest QA trends, then this paper is for you.<|endoftext|>The World Quality Report 2015-2016 indicates an average expenditure of 26% in 2014 and 35% in 2015 on QA and testing. In fact, many organizations began allocating a yearly testing budget about a decade ago, or even earlier. These budgets cover the actual testing as well as the testing processes, procedures, tools, etc. But what happens if the defined processes are not implementable, or the teams find them outdated and are unable to stick to these standards? The obvious outcome in such a scenario would be a decrease in QA effectiveness, an increase in time taken, and team frustration – all leading to lower production quality. That is why testing processes need continuous review and enhancement, more so with newer technologies and shorter sprints (idea to production). In this paper, I have outlined seven key areas that the QA and testing function must focus on to enhance their organizational maturity and bring innovation into their day-to-day work.<|endoftext|>Seven key areas to enhance the overall QA and testing function:
1. Define quality
2. Standardize, centralize and optimize
3. Improve QA processes
4. New-age testing techniques
5. Automated testing
6. Supporting elements – improve test environments and test data
7. Metrics, dashboards and analytics
---
Page: 3 / 8
---
External Document © 2018 Infosys Limited {{ img-description: Standardize – Centralize – Optimize cycle }}
Defining quality
The ISO standard defines quality as “the totality of features and characteristics of a product or service that bears on its ability to satisfy stated or implied needs.” The important part of this definition is the conformance to requirements and the expectations from the quality assurance function. Quality, contextually, depends on the organizational setup, business demands, and the inherent nature of business competition in the era of mobile and increased social interaction.<|endoftext|>In my opinion, the first step to improving quality should be to understand the expected level of quality. Accordingly, a decision can then be made whether to establish a dedicated testing function or simply follow a federated model. In both cases, a basic discipline to ensure software quality processes, with periodic enhancement of the methodology, lifecycle, procedures, etc., should be instituted. Such discipline will ensure that products and services satisfy the stated or implied needs.
Standardize, centralize and optimize
As discussed, depending on the needs and expectations from the quality function, organizations can choose whether or not to centralize their quality function. Let’s discuss the challenges and the possible steps to address them in case the organization intends to centralize its QA and testing function in a Testing Center of Excellence (TCoE). Challenges: TCoE processes are time consuming and expensive. In addition, oftentimes the application development team spends a large portion of its time explaining requirements or creating various builds for QA. These challenges are magnified when an organization has multiple lines of business (LOBs) and lacks common ground to leverage each LOB’s capabilities and strengths. Solution: Think of a scenario where each LOB follows similar processes! Would this not help integrate more easily? In my experience, it certainly would. So, the sequence of centralization should be to first standardize processes and tools for each LOB and then proceed with centralization. These foundational steps will ensure optimization of idle resources, tools, and tool licenses, and lower the total cost of ownership. Dashboards can provide critical QA analytics here.<|endoftext|>
---
Page: 4 / 8
---
External Document © 2018 Infosys Limited Improve QA processes
• QA and test maturity assessment: To baseline and improve the organizational QA capability, it is recommended to measure the maturity of existing processes and tools.<|endoftext|>
• Test governance / clear policies: Just as you cannot navigate to a new place without a map, QA teams need clear direction in terms of the test methodology, how the testing lifecycle aligns with the development lifecycles, and the responsibilities of a tester, test lead and test manager.
• Test management process (TMP): The TMP is an artifact that can be developed at the organizational level, and individual lines of business or application areas can develop their specific test strategies. For example, the strategy can outline its strategic direction on which areas would get automated, which customer-facing applications would be piloted for security testing, and which applications would go mobile and be hosted on the cloud. These may be piloted with mobile- or cloud-based testing. The executive and operations committee, once instituted, should liaise between the business, application development, and operations teams to align QA and testing methodologies with them.
• Shift left and get the requirements right: It is proven that a shift-left strategy in the software development lifecycle (SDLC) helps find issues earlier. The industry is moving towards using a single application lifecycle and finding ways by which different teams can increasingly collaborate and become agile enough to respond to each other’s needs. Reducing requirement volatility and developing agile teams can significantly improve strained dialogues between business and IT.<|endoftext|>
• True vs. hybrid – Agile and DevOps: Again, this depends on how we define quality. If the needs of the organization fit well with a hybrid agile model, then advocating true agile processes would be premature. To achieve reduced cycle time and quicker time-to-market, continuous integration, continuous development and testing concepts are commonly used.
• Smoke test / build quality: Approvals, UAT support, and a metric-based focus on regression areas help break silos with the development teams and the business (top-down and bottom-up approach).<|endoftext|>
A new-age testing saga
Many IT professionals often face the question – what’s next? I am sure you must have come across such situations too. To answer it, here are the latest trends in QA that organizations can leverage to reduce the risk to IT applications:
• Predictive analytics
• Service virtualization
• Data testing and test data management
• Mobile- and cloud-based testing
• Risk-based and combinatorial testing
• Infrastructure testing
---
Page: 5 / 8
---
External Document © 2018 Infosys Limited Predictive analytics
Like other industries, predictive analytics and machine learning concepts are now being leveraged in software QA and testing as well. Most QA organizations accumulate huge amounts of data on defects and on test cases prepared and executed. Just as Facebook can predict and show what you may like and Netflix knows what type of movies you may like, QA teams can now predict the types of defects that may occur in production, or the error-prone areas of an application or the entire IT landscape, based on production or past QA defects and failed test case information.
Service virtualization
In today’s world, different teams and multiple applications under the same or different programs often come to a point where one team cannot develop or test if the second application is not ready. In such situations, it is best to adopt service virtualization. This concept is mainly based on the fact that common scenarios can be simulated using a set of test data, allowing interdependent teams to proceed without having to wait.
Data testing and test data management (TDM)
A majority of organizations have immense data issues, including data quality, availability, masking, etc. In fact, the system integration testing (SIT) and user acceptance testing (UAT) teams can enhance the effectiveness of testing by leveraging the various test data tools available. Apart from the tools, test data management is becoming an integrated part of the shared service organization. Many financial organizations across the globe have dedicated TDM functions to manage their test data as well as support various teams in creating test data.
Mobile- and cloud-based testing
Mobile devices are ubiquitous these days. Today’s mobile world is not just |
Continue # Infosys Whitepaper
about smartphones or tablets; rather, it is pervasive, with handheld devices in retail stores, point of service (POS) systems, mobile payment devices, Wi-Fi hotspots, etc. The list goes on! In the QA world, these pose unique challenges. For example, these devices and applications need to perform at speed and in various network conditions while using different browsers, operating systems, and many more such conditions. Club this mobile challenge with applications and data hosted in a cloud environment such as Microsoft Azure or Amazon Web Services, to name a few, and you have magnified the testing team’s challenges manifold.<|endoftext|>Since most organizations are not really equipped with mobile test labs, these are areas where they can tie up with various vendors to perform mobile testing. Another trend that helps overcome these challenges is the adoption of newer methodologies such as agile Scrum, test-driven development, behavior-driven development, and DevOps. However, most of these methodologies demand progressive automation or model-based testing concepts, where testers may need to be reskilled to wear multiple hats.<|endoftext|>Risk-based testing (RBT) / algorithm and combinatorial testing
RBT is not a new concept and we all apply it in almost every project, in one way or the other. However, depending on the nature of the project or applications, RBT can be tricky and risky. QA and testing teams need tools that can generate the various permutations and combinations needed to test optimally and reduce cost. For instance, in mobile testing, you may come across many operating systems and browsers; hence, many permutations and combinations are possible.<|endoftext|>Combinatorial testing is another technique that has gained fresh momentum in recent years, and organizations can now use tools to derive an optimal set of combinations when attempting to test a huge number of possible scenarios (a minimal pairwise sketch follows at the end of this section).<|endoftext|>Infrastructure testing
The recent Galaxy Note 7 debacle cost Samsung millions. And this is not a stray incident. In fact, the list is endless, making it important to thoroughly test the infrastructure. Many organizations now have dedicated infrastructure testing teams working in the shared service model. It is recommended to review the infrastructure testing needs and ensure that the services are well aligned with the IT infrastructure teams who provision the internal and external hardware needs such as VDI, Windows patches, databases, etc.
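The most common combinatorial technique is pairwise (all-pairs) testing, which guarantees that every pair of parameter values appears in at least one test. The sketch below is a simple greedy generator written purely for illustration; the parameters and values are hypothetical mobile-testing examples, and in practice a dedicated combinatorial tool would normally be used.

```java
import java.util.*;

/** Greedy pairwise (2-way) test case generator sketch. */
public class PairwiseGenerator {
    public static void main(String[] args) {
        // Hypothetical mobile-testing parameters and values.
        Map<String, List<String>> params = new LinkedHashMap<>();
        params.put("os", List.of("Android", "iOS", "Windows"));
        params.put("browser", List.of("Chrome", "Safari", "Edge"));
        params.put("network", List.of("3G", "4G", "WiFi"));
        params.put("orientation", List.of("portrait", "landscape"));

        List<Map<String, String>> tests = generate(params);
        tests.forEach(System.out::println);
        System.out.println("Pairwise tests: " + tests.size());
    }

    static List<Map<String, String>> generate(Map<String, List<String>> params) {
        List<String> names = new ArrayList<>(params.keySet());
        // Enumerate every parameter-value pair that must be covered at least once.
        Set<String> uncovered = new HashSet<>();
        for (int i = 0; i < names.size(); i++)
            for (int j = i + 1; j < names.size(); j++)
                for (String a : params.get(names.get(i)))
                    for (String b : params.get(names.get(j)))
                        uncovered.add(key(names.get(i), a, names.get(j), b));

        List<Map<String, String>> tests = new ArrayList<>();
        while (!uncovered.isEmpty()) {
            // Seed the test with one still-uncovered pair to guarantee progress.
            String[] s = uncovered.iterator().next().split("\\|");
            Map<String, String> test = new LinkedHashMap<>();
            test.put(s[0], s[1]);
            test.put(s[2], s[3]);
            // Greedily choose remaining values that cover the most uncovered pairs.
            for (String name : names) {
                if (test.containsKey(name)) continue;
                String best = null; int bestGain = -1;
                for (String v : params.get(name)) {
                    int gain = 0;
                    for (Map.Entry<String, String> e : test.entrySet())
                        if (uncovered.contains(key(e.getKey(), e.getValue(), name, v))
                                || uncovered.contains(key(name, v, e.getKey(), e.getValue())))
                            gain++;
                    if (gain > bestGain) { bestGain = gain; best = v; }
                }
                test.put(name, best);
            }
            // Mark every pair present in this test as covered.
            List<String> keys = new ArrayList<>(test.keySet());
            for (int i = 0; i < keys.size(); i++)
                for (int j = i + 1; j < keys.size(); j++) {
                    uncovered.remove(key(keys.get(i), test.get(keys.get(i)), keys.get(j), test.get(keys.get(j))));
                    uncovered.remove(key(keys.get(j), test.get(keys.get(j)), keys.get(i), test.get(keys.get(i))));
                }
            tests.add(test);
        }
        return tests;
    }

    private static String key(String p1, String v1, String p2, String v2) {
        return p1 + "|" + v1 + "|" + p2 + "|" + v2;
    }
}
```

For the four parameters above, the full cartesian product is 3 x 3 x 3 x 2 = 54 combinations, while a pairwise set typically needs only around ten tests, which is the cost reduction the technique is used for.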
---
Page: 6 / 8
---
External Document © 2018 Infosys Limited Automated testing
As per the latest QA trends, automation testing is now a necessary testing type, as against being optional 5–7 years back. Many leaders still question the value of automation – what is the ROI? How is automation directly benefiting us? In my view, the key is to do it right. For instance:
• Allocate automation funding for applications, rather than seeking funding from projects to develop new automation scripts and maintain the automation framework
• Automate regression testing rather than functional testing
• Establish an application-specific regression baseline
• Perform impact analysis using predictive analytics, as described earlier, and plan for automated testing at the release level instead of on a per-project or purely funding-driven basis (a simple illustration follows at the end of this section)
• Track automation ROI and coverage metrics and showcase the value of automation as compared to manual regression
• Adopt and enable existing automation to take up new methodologies and technologies related to Agile Scrum, test- and behavior-driven development, and the DevOps model (as discussed in the new-age testing saga section)
Test environment, data and security
The test environment, test data, and the overall IT security challenges are the most agreed-upon and accepted challenges. However, many organizations find it very difficult to build multiple QA environments and replicate their production processes. While conducting a process maturity assessment, I was surprised to hear that a bank had spent more than USD 1 million but failed to implement a production-like environment. Part of the reason is a lack of planning and budgeting itself, which is key to an effective testing environment. And if budgeted but not approved, such initiatives take a back seat and the organization continues to solve problems reactively instead of proactively. In my experience, successful organizations typically promote centralization of these two functions and form test environment and test data management teams. These teams are responsible for ensuring the right data in the right environment at the right time. Both of these functions can benefit from developing an operating and engagement model that would allow them to get funding, manage service requests, and obtain the necessary access to applications, jobs and data. For security testing, there are many tools available in the market these days. But it is recommended to look for tools that can integrate with application development and testing tools as well as support cloud and mobile infrastructure.
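To illustrate the kind of impact analysis referred to above, the sketch below ranks application modules for regression focus using a simple weighted score over historical production defects, QA defects, and failed test cases. The weights, module names, and counts are assumptions for illustration; a real implementation would draw this data from the defect and test management repositories and could use a proper predictive model instead of a fixed formula. It uses Java 16+ record syntax.

```java
import java.util.*;

/** Illustrative sketch (not any specific Infosys tooling): rank application modules
    for regression focus from historical defect and failed-test-case counts. */
public class RegressionImpactRanker {

    record ModuleHistory(String module, int productionDefects, int qaDefects, int failedTests) {}

    public static void main(String[] args) {
        List<ModuleHistory> history = List.of(
            new ModuleHistory("payments",  12, 30, 45),
            new ModuleHistory("login",      2, 10,  8),
            new ModuleHistory("reporting",  5, 18, 20));

        // Weighted risk score: production escapes weigh more than internal QA findings.
        history.stream()
            .sorted(Comparator.comparingDouble(RegressionImpactRanker::risk).reversed())
            .forEach(m -> System.out.printf("%-10s risk=%.1f%n", m.module(), risk(m)));
    }

    static double risk(ModuleHistory m) {
        return 3.0 * m.productionDefects() + 1.5 * m.qaDefects() + 1.0 * m.failedTests();
    }
}
```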
---
Page: 7 / 8
---
External Document © 2018 Infosys Limited Metrics, dashboard and analytics
Metrics and dashboard concepts are not new, but how the data is collected, retrieved, processed, displayed, and finally analyzed to make informed decisions has surely changed. There are many tools now in the market that can integrate with many technology platforms and offer drill-down capability in a very interactive manner. Some tools that are gaining popularity are Tableau, QlikView, etc. Many organizations develop in-house tools or leverage SharePoint as their metrics tool. Whatever the choice of tool, below are some key considerations that QA managers and leaders would find beneficial:
• Capture and communicate the key performance indicators (KPIs) to senior management on production and QA defects, engagement feedback, cost avoidance, and application-level defect density
• Define project-level vs. aggregated views of the metrics
• For multiple departments or lines of business, apply a consistent database schema
• Define a standard folder structure in the available QA or test management tools
• Develop integration for the analytics tool
• Define key metrics to track and ensure data accuracy and quality
• Ensure automatic generation and analytical capability to assist in decision making
• Develop QA-specific predictive analytics; for example, production and QA defect data can be used to predict potential areas for rigorous functional or regression testing, an upcoming trend
---
Page: 8 / 8
---
Conclusion
In summary, it is more beneficial to know “what” is happening at the macro level rather than “why” it is happening at the micro level. While it is important to measure accurately, given the huge amount of QA data (counts of test cases prepared and executed, effort consumed, etc.), it is more beneficial to understand the trend at a high level rather than the detailed statistics. This will help define quality in the context of an individual organization, as opposed to an industry-standard QA definition. Such a definition can then guide the organization-specific QA metrics to collect, the new-age testing types and methodologies to adopt, as well as any considerations towards automation, improvement in QA processes, and supporting elements such as test data and test environment build-up.
© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
Author Mayank Jain Principal Consultant
***
|
# Infosys Whitepaper
Title: The Enterprise QA Transformation Model
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 8
---
VIEW POINT Abstract With the increasing acceptance of testing/QA as an independent function, the focus on building mature testing practices has become critical for organizations, as quality has a profound effect on business outcomes. This in turn has necessitated a model that can assess the maturity of the current test organization and also work as a reference model for selective improvements. In the previous paper1, we reviewed why most conventional QA maturity models available today fail to assess the overall test process maturity, and understood the need for a comprehensive QA/test maturity model and its desired attributes. In this paper we shall understand the Enterprise QA Transformation Model, the answer to the need for a comprehensive test maturity model, which defines the QA transformation roadmap and manages its implementation. It has been developed to help organizations selectively improve their testing capabilities based on the key dimensions that contribute to testing maturity, and it can be customized to the business environments in which the organization operates. The innovative scoring and assessment methodology of the model helps identify and understand prevalent weaknesses and transform the testing services by leveraging current testing capabilities and adopting the best testing practices across industries.<|endoftext|>THE ENTERPRISE QA TRANSFORMATION MODEL A solution to enhance an enterprise's testing maturity – Reghunath Balaraman, Aromal Mohan
---
Page: 2 / 8
---
INTRODUCTION
The importance and contribution of software systems in supporting businesses and generating revenues has risen significantly over the last several years. Volatile economic conditions and increasing pressures to control costs and improve the return on investment in IT systems have forced organizations to seek processes and practices that can improve the overall benefits of better quality, faster time to market and lower costs. While the focus on quality has continued to increase, the drive to control costs has resulted in complex and diverse organizational structures. It is now widely accepted that the ability to warrant the quality of IT systems and processes establishes the success or failure of an enterprise's business outcomes. Hence, it is imperative for enterprises to gauge the maturity and effectiveness of their QA organizations in order to plan strategic and tactical steps to improve their effectiveness and efficiency. It was at this juncture that the dire need for a reference model and framework for assessing maturity and planning improvements to processes and practices was widely felt by organizations. In the previous paper of this series we discussed the limitations of the traditional approaches to test/QA maturity models and the key attributes of a comprehensive QA assessment framework model1 that would meet the needs of a dynamic business environment. There was clearly the need for a comprehensive framework that addresses all dimensions of a QA organization in a selective manner and builds varying levels of maturity for different practices, depending on the way the organization has organized its software development and testing operations.
What is ‘Enterprise QA Transformation Model’ all about?
Infosys has worked closely with many large enterprises spanning industries, building and operating many mature Test Centers of Excellence and Test Factories. In the course of these complex and diverse engagements, Infosys identified various key factors that significantly contribute to QA service maturity. This eventually led to a detailed evaluation of the various engagement and operating models that organizations had adopted. Infosys also leveraged its expertise in testing services and its experience in implementing Test Centres of Excellence (TCoEs) for large accounts in QA organizations. The evaluation resulted in the development of the Enterprise QA Transformation Model, which was built to enable organizations to understand their current weaknesses and transform their testing practices by improving their current capability based on business needs. The model also helps in managing the implementation initiatives in a systematic manner. Figure 1 below gives a high-level overview of the model.
Figure 1: Key aspects of the “Enterprise QA Transformation Model” External Document © 2018 Infosys Limited
---
Page: 3 / 8
---
Dimensions of Maturity
The model classifies the four identified key dimensions of QA maturity as listed below.
1) The Test Engineering dimension focuses on the software testing life cycle practices.
2) The Test Management dimension changes the way independent testing projects are managed and maintains a tight integration between them and the overall project management life cycle.<|endoftext|>
3) The Test Governance dimension lays the foundation for mature testing services by establishing an ‘n’-tier governance structure, a standardized test methodology, processes and policies.
4) The Test Competency dimension helps leverage domain and technical knowledge and addresses the career progression of testing professionals.
These dimensions encapsulate the fundamental QA capabilities that need to be established at the organizational and project levels. The Test Engineering and Test Management dimensions help an organization improve all the testing practices of a project and hence their impact is felt more at the project level. The Test Governance and Test Competency maturity dimensions have a comprehensive influence at the organizational level.
Maturity Behavior
As part of this model, 20 key areas, 5 maturity levels and 219 unique test practices have been defined within the four maturity dimensions discussed above. Each maturity dimension has several behaviors, or expected activities, defined. A set of behaviors or observable characteristics collectively determines the maturity level for each dimension. Further, these behaviors are in turn supported with testing practices. These practices help organizations implement the key areas and exhibit the expected behaviors associated with them. The model generates a detailed map of the current capabilities, which helps identify the strengths and weaknesses of the organization's current QA / testing processes.
Measurement
The model is equipped with a robust measurement framework. It has an innovative assessment and scoring mechanism which includes questionnaires and systems to measure testing capability. The assessment includes industry-accepted survey, interview and reporting practices.
The Solution Deployment Approach
The current testing capability is assessed using the maturity model. The scope of the assessment varies according to the business priorities of the organization. Based on the assessment findings, a detailed roadmap is built to elevate the organization's current QA maturity level. The assessment phase is followed by the design phase, in which new practices and processes are introduced to improve the current test maturity state. Then, the implementation phase includes the selection and deployment of the new practices and processes along with a pilot execution. This is followed by an enterprise-level deployment. An annual operating plan is formulated to sustain these practices, including measurement mechanisms and periodic reviews. The key implementation stages of the solution deployment approach are depicted in Figure 2.
Figure 2: The Key Implementation Stages of the Solution Deployment External Document © 2018 Infosys Limited
---
Page: 4 / 8
---
The Key Features of The Enterprise QA Transformation Model
The Enterprise QA Transformation Model is a comprehensive platform for building mature and effective QA organizations. Let’s go through some key features of this model.<|endoftext|>COMPREHENSIVE ASSESSMENT FRAMEWORK
The model includes a comprehensive assessment framework which comprises a maturity map, an assessment questionnaire, a scoring model and a complete set of process aids and tools that help plan and conduct assessments. Figure 3 below gives a snapshot of the scoring model. The assessment framework is designed to gather data on the behaviors that help determine the testing practice and map it to the corresponding maturity level. The assessment methodology used in this model is Infosys IP (patent filed in India and with the US PTO).<|endoftext|>REFERENCE MODEL FOR IMPROVEMENT
The model also functions as a reference framework for building a strategic roadmap for improvement. It helps in selecting, prioritizing and sequencing the improvement initiatives in a manner that is in line with the organization's vision and current QA capabilities. The experience and knowledge gained during the maturity assessment of the organization undergoing the transformation can be leveraged to recommend a series of initiatives that would lead to a superior level of QA maturity and effectiveness.
SELECTIVE AND CONTINUOUS IMPROVEMENT
The Enterprise QA Transformation Model gives organizations the flexibility to selectively improve their testing capabilities based on the way they operate and engage with vendors and sub-contractors.
ACCELERATORS
The model is supported with a suite of processes, process aids and tools to manage the improvement implementation, which helps accelerate the transformation journey of QA organizations. The process framework can be customized to the business objectives of the organization's TCoE. The tools include some industry-standard tools and several proprietary tools from Infosys.<|endoftext|>Figure 3: A Snapshot of the Scoring Model External Document © 2018 Infosys Limited
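The actual scoring mechanism is Infosys intellectual property and is not reproduced here; purely as a hypothetical illustration of how practice-level answers could roll up into a dimension-level maturity indication, consider the sketch below. The key areas, scores, weighting approach, and the 1–5 mapping are all assumed for the example.

```java
import java.util.*;

/** Hypothetical illustration of a dimension-level maturity roll-up; the actual
    Infosys scoring model is proprietary (patent filed) and differs. */
public class MaturityScorer {
    public static void main(String[] args) {
        // Practice compliance per key area, scored 0..1 by assessors (assumed data).
        Map<String, double[]> keyAreaScores = Map.of(
            "Test strategy & planning", new double[] {0.8, 0.6, 0.9},
            "Defect management",        new double[] {0.7, 0.5},
            "Test governance",          new double[] {0.4, 0.6, 0.5, 0.7});

        // Average practice scores per key area, then average across key areas.
        double dimensionScore = keyAreaScores.values().stream()
            .mapToDouble(scores -> Arrays.stream(scores).average().orElse(0))
            .average().orElse(0);

        // Map the 0..1 score onto an assumed 1..5 maturity level for the dimension.
        int level = Math.max(1, (int) Math.ceil(dimensionScore * 5));
        System.out.printf("Dimension score %.2f -> maturity level %d of 5%n", dimensionScore, level);
    }
}
```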
---
Page: 5 / 8
---
The model leverages Infosys’ best practices in successfully building and delivering Test Centers of Excellence (TCoEs) for organizations across industries. As part of the QA service transformation |
Continue # Infosys Whitepaper
journey, Infosys sets up a governance structure and consolidates and optimizes the testing practices and processes for managing the QA organization. By adopting these practices, the QA organization can improve the efficiency of its QA teams and minimize critical issues in its business applications and transactions. This eventually leads to an effective Test Center of Excellence that is well integrated into the organization's software development lifecycle. Figure 4 describes how the Enterprise QA Transformation Model creates business value for an organization.<|endoftext|>{{ img-description: Figure 4 – How the Enterprise QA Transformation Model creates business value: learning from Infosys TCoE implementations, Infosys testing expertise and thought leadership feed the model, which builds an efficient and effective Test Center of Excellence; engineering benefits (early defect detection, increase in testing efficiency, improved testing coverage) translate into business benefits (better test quality, reduction in time-to-market, reduced operating costs through a global delivery model, increase in revenue due to application stability) }} External Document © 2018 Infosys Limited
---
Page: 6 / 8
---
Conclusion
The Enterprise QA Transformation Model meets today's dynamic business needs, taking into account the heterogeneous delivery structures of today's organizations, as it helps build mature testing practices that deliver exceptional quality effectively and reliably. The model helps assure the quality and maturity of testing capabilities, leading to efficient business operations that translate into successful business outcomes. It stands out because of its comprehensive framework, the flexibility and customization it offers for an organization's specific business needs, its support for planning selective improvements, and its relevance in today's complex business environments, all of which facilitate transforming the QA organization into a mature Test Centre of Excellence. These features set it apart from the conventional QA maturity models available today.<|endoftext|>References
1. Need for a Comprehensive Test Maturity Model, Infosys, August 2011
External Document © 2018 Infosys Limited
---
Page: 7 / 8
---
About the
Authors Reghunath Balaraman (Reghunath_Balaraman@infosys.com) Reghunath Balaraman is a Principal Consultant with over 16 years of experience. A postgraduate in engineering and management, he has been working closely with several large organizations to assess the maturity of their QA organizations and help them build mature and scalable QA organizations, and he is well versed in industry models for assessing the maturity of software testing.<|endoftext|>Aromal Mohan (Aromal_Mohan@infosys.com) Aromal V Mohan is a consultant in the Independent Validation Solutions unit with experience in test service capability assessment, design and implementation. His areas of specialization are process models and frameworks. He has been involved in Test Center of Excellence implementations for various clients. He focuses on the design and implementation of test processes, test service governance, metrics and estimation. He holds a master's degree in Business Administration with a specialization in Information Systems.<|endoftext|>External Document © 2018 Infosys Limited
---
Page: 8 / 8
---
© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
***
|
# Infosys Whitepaper
Title: Establish the right purpose for testing to win big in today’s digital world
Author: Infosys Limited
Format: PDF 1.6
---
Page: 1 / 8
---
WHITE PAPER ESTABLISH THE RIGHT PURPOSE FOR TESTING TO WIN BIG IN TODAY’S DIGITAL WORLD Abstract The need for quality and agility across all enterprise offerings – a top priority in this digital world – underscores the need for robust software testing. Over the past few years, software testing has undergone significant transformation. However, to ensure transformation delivers relevant outcomes, organizations must first understand and redefine the purpose of quality assurance across all their offerings. This paper outlines the importance of defining the right purpose of testing across four critical dimensions. It also explains how such an approach will help organizations increase the overall value and quality in the product and software delivery lifecycle.<|endoftext|>- Bodhisattwa Mukherjee Test Analyst, Infosys Validation Solutions
---
Page: 2 / 8
---
External Document © 2019 Infosys Limited Introduction The rise of new technologies is creating a world driven by the need for real-time and relevant solutions that make customer lives easier. In fact, most of the innovation we see today is about making breakthroughs to support this demand. As customer expectations increase, organizations are under pressure to deliver more at superior quality within ever-shrinking timelines. Thus, to sustain their edge, enterprises are looking at optimizing their software testing lifecycle to become lean and agile in order to support quality assurance and shorter time-to-market. But first, they must evaluate what their purpose of testing is. Without such understanding, initiatives to amplify value and deliver positive customer experience will remain limited in efficacy.<|endoftext|>External Document © 2019 Infosys Limited
---
Page: 3 / 8
---
External Document © 2019 Infosys Limited What is the purpose of testing?
Testing has a significant role to play in the software industry. It is no longer confined to ensuring that specific software components work. Today, testing is about enabling understanding and clarity in the end-to-end lifecycle – from the origin of business requirements to customer acceptance. However, as software becomes complex, there is a steep rise in the number of checks to be conducted, making software assurance impossible through human intervention alone. Thus, it is imperative for companies to augment the entire testing process with selective accelerators like automation. Establishing a balance between human testing and automated testing is the key success factor here. Moreover, the strengths of each can only be fully leveraged when there is clarity on the overall purpose of testing. In order to achieve this, we recommend defining the right purpose across four critical dimensions, namely test maturity, strategy, accelerators, and agility. A balance between these four factors will enable robust and outcome-driven testing for business success. These four dimensions are further described below:
1. Test maturity
It is extremely important for organizations to understand their current testing maturity if they want to adopt digitization and meaningful innovation. They should identify the purpose of testing and introduce clarity into its objectives through effective assessments. This involves defining a clear vision and future state that reflects how the organization views the overall role of testing as well as its role in the digitization journey. Thus, end-to-end traceability from business goals to IT goals to testing goals forms the winning criterion for successful testing assessments. Moreover, organizations should also focus on change management during digital transformation to ensure enterprise-wide success. The strategy outlined below can help define the purpose of testing by first understanding the overall objective. This assessment strategy can help organizations identify potential and current gaps as well as the corresponding next steps to solve them. The differentiator of this strategy is that it provides end-to-end traceability across all phases, thereby aligning with the overall purpose of testing.<|endoftext|>{{ img-description: Fig 1 – An assessment strategy to effectively identify current testing maturity: (1) assessment initiation – planning and strategy; (2) collect responses to the questionnaire, interview different business functions and review the testing process; (3) analyze the current-state testing process to identify key findings and business impacts; (4) design the testing vision, strategy and target state; (5) gap analysis; (6) propose recommendations; (7) define initiatives, roadmap and next steps – with end-to-end traceability from findings to gaps to recommendations to initiatives }}
---
Page: 4 / 8
---
External Document © 2019 Infosys Limited 2. Strategy
Increasing customer expectations, aggressive product release timelines, competitive product margins, and steep compliance measures present a new type of challenge for nearly every department in an organization, namely, quality assurance. Addressing this calls for extreme agility, value-based delivery and increased efficiency in operations. In this new era of quality assurance, testers can no longer manually assure all the functional and non-functional points of a software product or solution. In fact, this is a key factor driving transformation within testing strategies. Nowadays, testers are moving away from traditional phased ‘test-all’ approaches to more agile and ‘purposeful’ testing. While many organizations have not explicitly embarked on such agile journeys, it is important to infuse a certain degree of agility in testing by first understanding the purpose and overall objective of testing. Hence, testers should have a good understanding of technology along with sound domain knowledge of the current business landscape. This balance of techno-domain knowledge will enable agility in an organization's quality assurance journey. We recommend that organizations further optimize their testing strategy by incorporating the twelve selective focus points mentioned in Fig 2. Once there is clarity and understanding of the relevance of testing in the overall business context, these twelve points can be implemented through various means like test accelerators, bots, new operational models, etc.<|endoftext|>Fig 2: Twelve selective points to improve the testing strategy for robust quality assurance
3. Accelerators
Testing accelerators like automation and soft tools are now viewed very differently compared to the past few years. Previously, automation meant using software capabilities to perform repetitive tasks with a tester partially in attendance to monitor the process. Now, automation has become more structured and guided, with a definitive purpose. It has moved beyond performing repetitive tasks and focuses on using software intelligence for complex tasks, which may even require decision-making capabilities. Moreover, the need for extensive human intervention for close monitoring has reduced to a large extent and is required only in specific situations beyond the software's capability. Simply automating every process can be time-consuming, costly and may not even yield the expected outcomes. Instead, it is important to understand the purpose of automation and the value it brings. Once there is clarity on this, the next step is to generate a heat map of all the automatable processes, identify high-impact zones, and categorize and prioritize the most important ones for automation. The heat map must clearly reflect time-consuming tasks, actual or potential choke points, friction zones, and processes with high complexity (a minimal scoring sketch follows at the end of this section). When automation is leveraged efficiently in this way with a definitive purpose, organizations can derive a positive impact on speed, cost, time, and agility.<|endoftext|> Fig 3 depicts the simplified architecture of an application landscape reflecting the value generated through automation across different layers. The key success factor here is identifying the correct purpose for automation across the different layers, thereby crafting an effective automation strategy that strikes the right balance in each layer.<|endoftext|>
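As a sketch of the heat-map idea described above (the scoring formula, weights, and sample processes are assumptions, not a prescribed method), the snippet below ranks candidate processes by the manual effort they consume, discounted by their automation complexity. It uses Java 16+ record syntax.

```java
import java.util.*;

/** Sketch of a simple automation heat map: score candidate processes on execution
    frequency, manual effort, and complexity, then rank them for automation. */
public class AutomationHeatMap {

    record Candidate(String process, int runsPerMonth, double manualHours, int complexity) {}

    public static void main(String[] args) {
        List<Candidate> candidates = List.of(
            new Candidate("Regression suite - payments", 20, 16.0, 4),
            new Candidate("Smoke test - web portal",     60,  1.5, 2),
            new Candidate("Claims batch reconciliation",  4, 24.0, 5));

        candidates.stream()
            .sorted(Comparator.comparingDouble(AutomationHeatMap::score).reversed())
            .forEach(c -> System.out.printf("%-32s score=%.1f%n", c.process(), score(c)));
    }

    /** Higher score = hotter zone = earlier automation candidate. */
    static double score(Candidate c) {
        double savedEffort = c.runsPerMonth() * c.manualHours(); // repetitive, time-consuming work
        return savedEffort / c.complexity();                     // discounted by implementation complexity
    }
}
```

In practice, additional dimensions such as choke points and friction zones would be scored the same way, but the ranking principle stays the same: automate where the purpose and payback are clearest first.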
---
Page: 5 / 8
---
External Document © 2019 Infosys Limited Apart from the above benefits, automation also helps organizations leverage robotic process automation (RPA) for higher ROI in areas like vendor invoice management, claim management (administration and processing), customer enrollment management, billing management, tax calculation system, etc.<|endoftext|>Fig 3: Architecture of an automated application landscape External Document © 2019 Infosys Limited
---
Page: 6 / 8
---
External Document © 2019 Infosys Limited 4. Agility In a fast-paced customer-centric world, organizations want to be more agile, particularly in their product development lifecycles in order to increase efficiency. However, traditional practices of maintaining independent teams across the product lifecycle with high manual intervention in each step cannot support agility. The shift towards agile product development necessitates having a collaborative environment for development, operations and quality assurance for lean end-to-end processes. Armed with this purpose, organizations that have |
Continue # Infosys Whitepaper
already embarked on digitization journeys with the first layer of efficient and agile testing are now looking to leverage effective test automation and are embracing the practices of DevTestOps. DevTestOps not only brings in a collaborative culture but also accelerates solution delivery, innovation and rapid cyclic feedback. It calls for specialized roles like quality engineers who work closely with quality analysts, developers and the operations teams. Using DevTestOps with the right purpose will help organizations realize tangible business value through improved time-to-market, greater productivity gains, predictive fixing of software bugs, lower release costs, and shorter turnaround time for solution delivery, to name a few.<|endoftext|>
---
Page: 7 / 8
---
External Document © 2019 Infosys Limited Conclusion Today, customers are increasingly dependent on technology to make their lives easier. Thus, the role of software testing is becoming more prominent with the demand for high quality products. While the future of software testing may appear challenging, the transformation over the past few years provides immense opportunities for organizations to boost their efficiency. To derive maximum benefit, it is important for organizations to re-look at their purpose of transformation and, more importantly, the purpose of software testing. The winning formula is to define the right purpose for testing across four critical dimensions – test maturity, strategy, accelerators, and agility – based on their current position in the transformation roadmap. These parameters will provide deep visibility into the transformation journey so businesses can achieve the expected outcomes.<|endoftext|>External Document © 2019 Infosys Limited
---
Page: 8 / 8
---
© 2019 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
***
|
# Infosys Whitepaper
Title: Fighting retail shrinkage through intelligent analysis and validation
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 8
---
WHITE PAPER FIGHTING RETAIL SHRINKAGE THROUGH INTELLIGENT ANALYSIS AND VALIDATION - Naju D. Mohan and Jayamohan Vijayakumar Abstract While striving to achieve greater profitability in a landscape characterized by aggressive competition, retailers keep racking up losses due to retail shrinkage. In this white paper we take a look at the various sources of retail shrinkage and discuss the benefits of establishing effective validation for loss prevention systems. The paper also reveals how loss prevention Quality Assurance (QA) solutions can reduce retail shrinkage. We also outline how the right testing approach and techniques can help avoid loss.<|endoftext|>
---
Page: 2 / 8
---
In their effort to stay ahead of the competition, retailers are employing innovative measures to minimize operational costs so that they can increase their profitability. Today's retail dynamics are impacted by a volatile global economic environment and diminished consumer spending. These issues are pushing retailers to evaluate the various techniques they use to reduce losses that adversely affect the bottom line. Retail shrinkage, or shrink, is the fraction of inventory that is lost in the supply chain from the manufacturing unit to the point of sale (PoS). Analysis of enterprise information collected throughout the supply chain and stored in data warehouses, along with historical transaction data, is used to identify and qualify loss data. However, the success of loss prevention techniques depends on the quality of the data used for deriving these analytical reports. To ensure the quality of data, retailers need to develop benchmarks for each relevant business parameter, which can help them identify inventory loss and flag anomalies in their operations. These threshold levels must be managed centrally, with business rules developed specifically to provide systematic alerts for corrective action. A proper validation of the loss prevention system, along with an effective approach for their business intelligence solutions, enables retailers to obtain a holistic view of their enterprise data and make smart decisions to fight retail shrinkage.<|endoftext|>External Document © 2018 Infosys Limited
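To make the benchmark-and-threshold idea above concrete, here is a minimal sketch of centrally managed category thresholds with a simple alerting rule. The category names, threshold values, and weekly shrinkage rates are assumed sample data; a production system would source them from the data warehouse and route alerts into the LP workflow. It uses Java 16+ record syntax.

```java
import java.util.*;

/** Minimal sketch of centrally managed shrinkage thresholds with rule-based alerts;
    categories, thresholds, and rates are assumed sample data. */
public class ShrinkageAlerting {

    record CategoryMetric(String category, double shrinkagePct) {}

    public static void main(String[] args) {
        // Centrally managed benchmark (threshold) per business parameter.
        Map<String, Double> thresholdPct = Map.of(
            "Cosmetics", 2.5,
            "Fresh produce", 4.0,
            "Perfumes", 2.0);

        List<CategoryMetric> weeklyMetrics = List.of(
            new CategoryMetric("Cosmetics", 3.1),
            new CategoryMetric("Fresh produce", 3.2),
            new CategoryMetric("Perfumes", 2.4));

        for (CategoryMetric m : weeklyMetrics) {
            double limit = thresholdPct.getOrDefault(m.category(), 3.0);
            if (m.shrinkagePct() > limit) {   // business rule: systematic alert for corrective action
                System.out.printf("ALERT: %s shrinkage %.1f%% exceeds threshold %.1f%%%n",
                        m.category(), m.shrinkagePct(), limit);
            }
        }
    }
}
```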
---
Page: 3 / 8
---
What causes retail shrinkage? Inventory shrinkage can be attributed to several sources such as shoplifting, employee theft, vendor fraud, administrative errors, and other unknown reasons. According to the National Retail Security Survey Report 2011¹ the five major sources of retail shrinkage in order of frequency are: • Employee theft (44.2%) • Shoplifting and organized retail crimes (35.8%) • Administrative errors (12.1%) • Unknown reasons (5.3%) • Vendor fraud (4.9%) Total retail shrinkage amounts to $35.28 billion in the United States alone.<|endoftext|>To understand the nature of inventory loss better, let us examine the various causes of retail shrinkage in detail.<|endoftext|>Employee Theft
Employee theft continues to be the most significant source of retail shrinkage in the US. Among the various retailer categories, convenience stores or truck stops suffer the most from employee theft.¹ This involves ‘sweethearting,’ where a cashier unofficially gives away products to family or friends for free using a fake scan, or offers discounts. Cashiers can also slide items into the cart without scanning them at the PoS system. Other instances of employee theft are: voiding items from a sale, reducing the price without authority, under-ringing (selling products at a lower price while collecting the full amount), no ringing, ringing fraudulent returns, and using staff discounts inappropriately. Conventional theft detection mechanisms include electronic surveillance, audits, and review of till receipts followed by investigation.<|endoftext|>Shoplifting and Organized Retail Crime (ORC)
Shoplifting and ORC is the second largest source of shrinkage in the US. Craft and hobby stores, accessories, and men's and women's apparel are the top three retail categories worst hit by retail shrinkage.¹ The trails of organized retail crime can be found supporting terrorism and money laundering. This problem warrants immediate attention and a focused approach. Most retailers use electronic surveillance and automated alert mechanisms to deter shoplifters. Advanced electronic surveillance systems offer analytical capabilities that work with captured video footage to proactively alert loss prevention (LP) executives in real time.<|endoftext|>Administrative Errors
Pricing mistakes like incorrect mark-ups and mark-downs can lead to large losses, especially over time. This area has seen some improvement from the previous years. Three segments which recorded the highest retail shrinkage due to administrative errors are jewelry and watches, home center, garden, and household furnishings.¹
Vendor Fraud
Vendors can steal merchandise in the course of the supply chain, either during buying or fulfillment. They contribute the least to the total volume of retail shrinkage. Fraud prevention strategies include delivery audits while receiving merchandise at the warehouse or when the customer receives the merchandise in the case of direct fulfillment.<|endoftext|>Other sources of shrinkage are not related to store inventory. These include cash losses, check losses and credit card losses. Return and refund fraud also contributes to retail shrinkage. In most such cases, the shrinkage does not leave a clear audit trail. Hence the LP management is required to deduce the root cause of shrinkage.<|endoftext|>Table 1 – Sources of retail shrinkage: employee theft 44.2%, shoplifting and ORC 35.8%, administrative errors 12.1%, unknown 5.3%, vendor fraud 4.9% External Document © 2018 Infosys Limited
---
Page: 4 / 8
---
Leveraging business analytics for loss prevention Loss prevention is always a moving target. The key is to stay on top of changing LP requirements and respond to them faster than the contributing factors adapt to the circumstances – be it shrinkage or salvage. The key objective of a business analytics (BA) program is to empower business owners, the finance team, and LP management to take timely, informed decisions. The overall success of this program depends on the strategies adopted for data warehousing and analytics, the quality of the underlying data used for analytics, and the granularity of the data. While retail shrinkage affects all types of retailers, certain categories are affected at a disproportionately higher rate. The strategies each retailer needs to adopt depend on and vary primarily with the business model, line of business, sources of retail shrinkage, and major contributors to the loss. Usually, the BA strategy for loss prevention can be classified at a high level into two distinct areas – operational and strategic.<|endoftext|>Operational LP Analytics Operational LP analytics aims at identifying retail shrinkage and its sources over shorter periods. This provides a near real-time view of shrink and salvage, helping LP management identify shrink quickly and giving them the ability to respond faster and prevent further loss. LP data averages out over longer periods; hence, for the method to be effective, the identification must be as close to real time as possible.<|endoftext|>Not all categories and items are treated equally in loss prevention methodology. Products such as shaving items, clothing accessories, cheese, meat, perfumes, etc., fall into a high-risk category. Departments like cosmetics are more prone to retail shrinkage than fresh produce. Operational analytics reports at the category level provide the ability to drill down to sub-categories and even to shrinkage at the stock keeping unit (SKU) level. Comparative reports with other sub-categories or averages enable the prevention strategy to be devised at the department or SKU level. For example, when a new range of perfumes is introduced in a store, operational analytics can provide a view of the retail shrinkage for this range, or for a product within the range, for a specific time period, compared with other products in the range. Operational analytics reports can also provide the average shrink for that department or sub-category, or even industry averages. This enables department managers to act quickly and prevent loss.<|endoftext|>Another dimension of operational LP analytics is based on the source of shrinkage. Retail shrinkage can take place right at the aisle where shoplifters sweep out merchandise, at the till where an employee or a member abuses a discount, or in a vendor’s unit where fraudulent transactions are conducted. Horizontal analytics offers a different perspective on loss prevention, thereby enabling the adoption of more comprehensive and efficient strategies. For example, reports on void transactions at the cashier level, price exceptions at the tills, reports at the return tills at the cashier
or member level, reports against the inventory booked and the bill of lading (BOL) for a vendor, etc., work both as a deterrent to malicious activity and as an enabler for shrink identification.<|endoftext|>In operational analytics, reduced time to market is the key factor. It is critical to work out ways to shorten the requirement-to-validation cycle. The entities involved in the sources and strategies of theft adapt quickly to new security measures, identifying ways to break the current system. The ability of IT to respond to this continuously escalating challenge in relatively shorter windows determines the success of this program and the ability to stay ahead of the miscreants. Strategic LP Analytics Strategic analytics help evaluate the effectiveness of the LP program as a whole. These reports provide executive snapshots into retail shrinkage data with drill-down capabilities. The objective is to empower executive management to make informed decisions that influence the corporate loss prevention strategy, return on investment (ROI), LP budget allocation, and formulation of corporate policies.<|endoftext|>Some examples of the types of reports obtained through strategic LP analytics are: trend analysis and comparative analysis across time periods, such as year-on-year (YoY) shrinkage for a department, the same quarter YoY, store YoY, or region YoY; types and locations of warehouses (mall or strip); merchandise-specific classification (such as life-time return); data comparisons before and after changing the LP strategy; and more.<|endoftext|>Strategic analytics helps measure the effectiveness of the overall LP strategy against industry averages and steer the course at the executive management level. Validation of the loss prevention system plays a vital role in ensuring the quality and reliability of data, and accurate reporting. The level of analytics required in a specific retail setup is determined by several factors such as the line of business and corporate policies. Further considerations for analytics can be cultural diversity in the operating markets and the climate and economies in which retailers operate.<|endoftext|>External Document © 2018 Infosys Limited
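As an illustrative sketch of the operational drill-down reporting described above (it is not part of the whitepaper's solution), the snippet below compares SKU-level shrink against the sub-category average; the column names and figures are assumptions.

```python
import pandas as pd

def sku_shrink_vs_subcategory(tx: pd.DataFrame) -> pd.DataFrame:
    """Roll shrink up from SKU to sub-category, then compare each SKU's shrink
    rate against its sub-category average, as an operational LP report might."""
    sku = tx.groupby(["sub_category", "sku"], as_index=False).agg(
        booked=("booked_qty", "sum"),
        counted=("counted_qty", "sum"),
    )
    sku["sku_shrink_rate"] = (sku["booked"] - sku["counted"]) / sku["booked"]
    subcat_avg = sku.groupby("sub_category")["sku_shrink_rate"].transform("mean")
    sku["vs_subcategory_avg"] = sku["sku_shrink_rate"] - subcat_avg
    return sku.sort_values("vs_subcategory_avg", ascending=False)

# Hypothetical data for a newly introduced perfume range
tx = pd.DataFrame({
    "sub_category": ["perfume", "perfume", "perfume"],
    "sku": ["P-001", "P-002", "P-003"],
    "booked_qty": [400, 350, 500],
    "counted_qty": [380, 349, 498],
})
print(sku_shrink_vs_subcategory(tx))  # P-001 stands out against the range average
```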
---
Page: 5 / 8
---
Validation for loss prevention system Developing a comprehensive loss prevention analytics validation strategy early in the development cycle is critical to the execution of a successful loss prevention program. The validation strategy is defined based on the corporate loss prevention strategy, the data warehouse design, key performance indicators (KPIs), and the business intelligence reports identified for loss prevention. The analytical methods used for loss prevention need to be qualified and validated. The key process indicators that are part of these analytical methods are identified, and the exact business rules to be validated are defined. The key process indicators included in the test strategy must be examined to establish threshold values based on historical and real-time data for operational and strategic reporting. Validation of the loss prevention system helps the retailer not only address current issues but also anticipate issues that may arise in the future. Therefore, decision-making must be open to adopting different validation techniques for operational and strategic loss prevention analytics.<|endoftext|>Strategic LP Analytics and Validation The major strategic challenge in creating a permanent loss prevention analytical platform is the uncertainty involved in the retail industry. Retailers continue to improve their LP programs to remain competitive and protect their brands. However, this implies that until the LP analytical platform is implemented and the Business Intelligence (BI) reports and KPIs are in place, the retailer cannot confirm the accuracy of the analytical data that helps the company meet its long-term profit and growth plans.<|endoftext|>Strategic validation of the loss prevention system must focus on the following factors: • Integration of BI with loss prevention techniques • Design of a scalable Enterprise Data Warehouse (EDW) and Supply Chain Integration Operational LP Analytics and Validation The constrained global economy has led to reduced operating budgets. Retailers today seek immediate business benefits from monitoring and analyzing loss prevention data. The focus is on tapping into the large amount of high-quality, cross-functional operational data and uncovering suspicious activities. This helps retailers discover problems pertaining to fraud or retail shrinkage and extract actionable insights.<|endoftext|>Operational validation of the loss prevention system needs to focus on the following factors: • Reduced time to market for the implementation of the LP program through automated validation • Continuous integration validation of incremental features in the LP program through reusable regression suites Retailers across the world are using BI to strengthen their loss prevention systems. Let us see how a pharmacy chain in North America used loss prevention QA solutions to fight retail shrinkage.<|endoftext|>External Document © 2018 Infosys Limited
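Before turning to the case study, here is a minimal, illustrative sketch of how KPI thresholds of the kind discussed above could be checked automatically in a regression suite; the KPI names, limits, and columns are assumptions, not part of the paper's solution.

```python
import pandas as pd

# Hypothetical operational KPIs and limits derived from LP business rules;
# the names, thresholds, and columns are assumptions for illustration.
KPI_RULES = [
    {"kpi": "void_txn_rate", "max": 0.03},
    {"kpi": "refund_without_receipt_rate", "max": 0.05},
]

def compute_kpis(pos):
    """Derive the operational KPIs the validation suite asserts against."""
    total = len(pos)
    return {
        "void_txn_rate": (pos["txn_type"] == "VOID").sum() / total,
        "refund_without_receipt_rate": (
            (pos["txn_type"] == "REFUND") & (~pos["has_receipt"])
        ).sum() / total,
    }

def validate_kpis(pos):
    """Return (kpi, observed, limit) for every breached threshold, which a
    regression suite could turn into test failures or alerts."""
    kpis = compute_kpis(pos)
    return [(r["kpi"], kpis[r["kpi"]], r["max"])
            for r in KPI_RULES if kpis[r["kpi"]] > r["max"]]

pos = pd.DataFrame({
    "txn_type": ["SALE", "VOID", "REFUND", "SALE", "SALE"],
    "has_receipt": [True, True, False, True, True],
})
print(validate_kpis(pos))  # both thresholds are breached in this tiny sample
```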
---
Page: 6 / 8
---
Case study Implementing Integrated EDW and BI Reporting for Effective Loss Prevention Customer: A leading pharmacy chain in North America decided to replace its existing semi-automated loss prevention system with an integrated EDW and BI reporting solution to enhance its LP data analysis and reporting capabilities. The end-user group was the loss prevention finance department.<|endoftext|>Business Context: The existing system was based on fragmented data sources and relied on multiple satellite databases or flat files with multiple hops/layers. This involved significant manual intervention and effort in data extraction, cleansing, transformation, and report generation. The inventory of reports was extremely large and redundant; it provided merely different views of the same data and lacked drill-down and drill-across capabilities. The existing system also failed to track action items on the basis of the data reported. Analysis of the ‘as-is’ state of the system revealed the opportunity to consolidate the existing inventory of reports to one-third of the current size. This would allow the company to reduce maintenance costs for the existing reports and support business process refactoring. The new system needed to be based on a platform with an EDW as the back end and Oracle Business Intelligence Enterprise Edition (OBIEE) as the front end. The future-state solution envisaged robust analytics and reporting to increase data visibility and support enterprise-level BI goals for loss prevention. Loss Prevention QA Solution: The loss prevention QA strategy was defined considering the nature of the project and in line with the overall loss prevention strategy for the customer. The QA team was engaged in stages ranging from requirements and design validation to data source consolidation and retirement strategy definition. This helped identify the high-risk areas and build a risk-based test approach with optimal time to market.<|endoftext|>An automated test approach was adopted for data warehouse (DW) testing using proprietary in-house solution accelerators for Extract, Transform, and Load (ETL) testing, which greatly reduced the regression test effort. This was the key differentiator for ensuring test coverage and reducing the time to market. Data quality testing, system scalability, and failure recovery testing were performed to ensure that the DW solution was sufficiently robust to accommodate future enhancements as well.<|endoftext|>BI report testing was conducted based on the KPIs/thresholds identified for the loss prevention strategy. The QA strategy called for automated testing using an in-house, open source-based tool that reduced the execution time for test cycles and formed the foundation for the regression pack. This enabled regression for future releases to be conducted in comparatively smaller timeframes.<|endoftext|>Security testing ensured enforcement of role-based access to data and restriction of data by user on a need-to-know basis. Overall, the loss prevention QA solution enhanced test coverage to almost 100%. This enabled early detection of loss and the ability to prevent it, thereby saving an additional $1.5 million every year. The annual maintenance and support costs were reduced by $60,000.<|endoftext|>External Document © 2018 Infosys Limited
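The solution accelerators referenced above are proprietary; purely as an illustration of the kind of check automated ETL testing performs, the sketch below reconciles a source extract with the loaded target using in-memory SQLite stand-ins. The table and column names are hypothetical.

```python
import sqlite3

def reconcile(source_conn, target_conn, source_sql, target_sql):
    """Compare a source extract with the loaded target using row counts and a
    sorted full-row comparison, the kind of check an automated ETL regression
    suite repeats for every mapped table."""
    src = source_conn.execute(source_sql).fetchall()
    tgt = target_conn.execute(target_sql).fetchall()
    issues = []
    if len(src) != len(tgt):
        issues.append(f"row count mismatch: source={len(src)} target={len(tgt)}")
    if sorted(src) != sorted(tgt):
        issues.append("content mismatch after sorting on all columns")
    return issues

# Self-contained demo with in-memory SQLite standing in for the source layer
# and the data warehouse; table and column names are hypothetical.
src_db, tgt_db = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
src_db.execute("CREATE TABLE shrink_src (store INT, loss REAL)")
src_db.executemany("INSERT INTO shrink_src VALUES (?, ?)", [(1, 10.5), (2, 3.2)])
tgt_db.execute("CREATE TABLE shrink_fact (store INT, loss REAL)")
tgt_db.executemany("INSERT INTO shrink_fact VALUES (?, ?)", [(1, 10.5)])
print(reconcile(src_db, tgt_db,
                "SELECT store, loss FROM shrink_src",
                "SELECT store, loss FROM shrink_fact"))
```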
---
Page: 7 / 8
---
End notes 1 National Retail Security Survey Report 2011, Richard C. Hollinger and Amanda Adams Conclusion In the current market scenario, the ability to reclaim leaked revenue gives retailers a significant opportunity to enhance profits. Today, the ability to reduce retail shrinkage is more critical than ever. However, a ‘one-size-fits-all’ strategy does not work in loss prevention. Loss prevention programs need to be tailored to address the challenges of each retail category or department based on the impact and sources of loss. The ability to predict retail shrinkage proactively, identify it, and react as soon as anomalies surface is the key determining factor for the success of the loss prevention program. IT, the key enabler for this business objective, plays a central role in empowering the business with innovative solutions and reduced time to market. Automation, innovation, and re-use are the three pillars of effective loss prevention.<|endoftext|>External Document © 2018 Infosys Limited
---
Page: 8 / 8
---
© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
***
|
# Infosys Whitepaper
Title: Big data testing for a leading global brewer
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 8
---
WHITE PAPER MOVING FRAGMENTED TEST DATA MANAGEMENT TOWARDS A CENTRALIZED APPROACH Abstract Test Data Management (TDM) handles test data requests in an automated way to ensure a high degree of test coverage by providing the right data, in the right quantity, and at the right time, in non-production environments. An automated TDM service facilitates test data management across test environments through a structured approach of data subsetting, cleansing, gold copy creation, data refresh, and sensitive data masking. Typically, a centralized TDM system with well-defined processes is more effective than the traditional manual or decentralized approach, but in some cases, a decentralized approach is adopted. This paper takes a deeper dive into the considerations for the centralization of TDM processes within enterprise IT.<|endoftext|>
---
Page: 2 / 8
---
Introduction In most organizations where TDM is in its infancy, test data-related activities are done by the individual project teams themselves. There is often no dedicated team or defined process to handle test data requests. Such projects with a primitive TDM approach face several drawbacks: • Lack of defined ownership for the test environment and test data setup: Results in unintentionally losing the test data setup or data overstepping • Unavailability of data setup for testing end-to-end scenarios: Lack of data setup between inter-dependent and third-party applications • Lack of referential integrity defined in the databases: Absence of primary and foreign key relationships defined in the database makes it difficult to identify related tables and generate the correct test data set • Insufficient data available for performance load testing: Manually generating bulk data is a tedious and barely feasible task (a minimal sketch of synthetic bulk data generation follows this section) • Increased number of defects due to incorrect test data: Leads to rework and time lost unnecessarily analyzing issues caused by incorrect test data used for testing • Outdated test data in the QA database: Periodic refresh of test data does not happen from production • Inability to provision data since data is unavailable: Lack of a mechanism for generating synthetic data • Risk of exposing sensitive data to testing teams: Sensitive fields need to be masked before provisioning for testing • Multiple copies of data: Storage costs can be reduced by maintaining only the required gold copies and refreshing and reusing them after major releases Having a well-defined practice for handling all the test data-related requirements across all non-production environments in an organization is the essence of TDM. Aimed at addressing all the above-stated issues, it brings in more control and makes TDM more effective.<|endoftext|>Based on the TDM requirement type, organizations can opt for either a decentralized or a centralized approach. This paper gives a detailed view of both approaches and highlights how the centralized approach is more efficient and beneficial.<|endoftext|>External Document © 2018 Infosys Limited
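As a minimal illustration of the synthetic bulk data point above (not a prescribed tool or format), the sketch below generates a reproducible CSV of fabricated customers for load testing; the column names, row count, and file name are assumptions.

```python
import csv
import random
import string

def synthesize_customers(n: int, path: str) -> None:
    """Generate n synthetic customer rows for load testing, so no production
    data needs to be copied or masked for this purpose."""
    random.seed(42)  # reproducible test data set
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["customer_id", "name", "email", "segment"])
        for i in range(1, n + 1):
            name = "".join(random.choices(string.ascii_lowercase, k=8))
            writer.writerow([i, name.title(),
                             f"{name}@example.test",
                             random.choice(["retail", "wholesale", "online"])])

# Hypothetical volume and file name for a performance test data set
synthesize_customers(100_000, "customers_load_test.csv")
```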
---
Page: 3 / 8
---
Centralized TDM deals with consolidating the test data provisioning for all non-production environments across the organization. It provides a systematic approach to analyze and provision test data.<|endoftext|>Pros • Well-established TDM team with a workflow-based mechanism for managing test data requests • Reduced latency in provisioning test data with quick turnaround time • Automated end-to-end approach with tools and processes • Reduced infrastructure cost by storing only the required data for provisioning in the gold copy database • Reduced risk of incorrect test data, resulting in fewer defects • Resolution of data overstepping issues by the TDM team • Periodic refresh of the gold copy makes the latest data available for testing by the QA team • Reusable masking configurations and test data generation scripts provide quick turnaround time • Easy handling of complex end-to-end test scenarios that require data setup across heterogeneous data sources with federated relationships, through centralized test data management • Creation of bulk data which is relationally intact for non-functional testing requirements, achieved using automated solutions • Varied techniques available for creating synthetic data in scenarios where source data is not available for provisioning Cons • Considerable time and effort is required to consolidate TDM across various portfolios • High knowledge acquisition effort required to understand the different application data models • Sporadic bottlenecks and dependency on the TDM team in case of high workload from all LOBs External Document © 2018 Infosys Limited
---
Page: 4 / 8
---
Decentralized TDM It is not necessary for all applications in different portfolios of the organization’s landscape to be covered under the consolidated TDM umbrella. There are instances where some applications can follow the decentralized TDM approach. This is mostly determined by the level of integration between the applications, technologies supported, data sensitivity, environment constraints, etc. For example, data in HR and infrastructure applications may be independent and not related to marketing, sales, inventory, or corporate data. These systems can hence adopt a decentralized TDM approach and be handled outside the centralized umbrella.<|endoftext|>Pros • Minimal effort required to set up TDM for individual applications • Good understanding of the respective application data models, which makes the team capable of addressing test data requests quickly Cons • Multiple copies of data without ownership, because individual teams store separate copies of production data • Unmasked sensitive data in non-production environments can lead to a security breach • Less uniformity in standards and processes • Increase in data overstepping issues • Minimal automation may be present, with a lack of coordinated processes • Limitations in setting up data across multiple data sources due to decentralized systems; data set up in one application may not be in sync with other inter-dependent applications (Figure 1 depicts the TDM landscape: Marketing, Sales, Inventory, and Corporate under consolidated TDM; HR and Infrastructure under independent TDM.) Centralized TDM implementation approaches Primarily, there are two approaches for implementing centralized test data management within an organization: • Big Bang approach: In this approach, all major applications under the TDM scope in the organization are identified, test data requirements across applications are analyzed, and gold copies for these applications are created in one go. A TDM team is set up to address test data needs for all the applications. This approach takes considerable time for the initial setup, and knowledge of the application stack across the organization’s portfolio is a must. Another key challenge with this approach is keeping up with the database (DB) changes happening in production during the initial setup • Incremental approach: In this approach, based on the business requirements, TDM is established for an application or a prioritized set of applications. Separate test data management implementations are carried out, which can be progressively integrated. The TDM team addresses the test data needs as soon as the gold copies for the applications are set up.<|endoftext|> In this approach, TDM is more manageable and can reap early benefits. TDM set up for a smaller set of applications takes less time compared to the Big Bang approach.<|endoftext|>A phased approach for TDM implementation Centralized and automated test data management implementations can follow a phased approach. Each stage has a defined set of activities to achieve its goals, and these stages are: • Analysis • Design • Implementation • Steady State From the initial assessment phase, the program moves to a stabilized stage, expanding TDM services to other domains and portfolios in the organization, and working on continuous improvements along the way.<|endoftext|>The timelines proposed in the diagram are highly indicative.
Time duration for each phase will depend on factors like: • TDM requirement complexity • Number of portfolios or applications involved • Knowledge or understanding about the application landscape or database models Figure 1: TDM Landscape External Document © 2018 Infosys Limited
---
Page: 5 / 8
---
Figure 2: Phased TDM Approach When to go for centralized TDM? • Applications or technologies used are mostly compatible with tools in the market • Scope of TDM for applications across various portfolios continues throughout the life cycle of the applications • Incoming TDM requests for applications or application clusters are fairly high
• Technologies are widely supported and not disparate • A high number of inter-dependent systems require data set up across systems for end-to-end testing When to go for decentralized TDM? • The nature of portfolios or departments within the organization is highly decentralized • A specific TDM process is required for a prioritized set of applications within a short span of time • The scope of TDM is limited to the project and does not continue after the project is complete • Disparate or obsolete technologies used in the project are not supported by common TDM tools • Limited number of dependent / external applications • The need for test data provisioning is very low and the request flow is manageable Common TDM challenges and resolutions 1. Inconsistent data relationships A well-defined data relationship between database objects is a key factor for data subsetting, masking, and data provisioning. It is often observed that in the case of legacy applications, relationships are not present in the database layer. The business rules and logical constraints may be applied at the application level but poorly defined at the database level. Logical database model architectures may not be available in most cases.<|endoftext|>Impact • Data subsetting, data masking, and data provisioning get affected • Data integrity will not be maintained Resolution • Understand the application and database structure, the relevance of tables, and how they are related, with the help of an SME / DBA • Analyze and understand the database structure using data model artifacts • Validate the logically-related entities and confirm with the business analyst 2. Unclear test data requirements Teams requesting data sometimes lack information about which data sources hold the related data that needs to be set up. In some scenarios, test data requirements can be very complex, such as testing an end-to-end scenario with data spread across multiple databases or across tables.<|endoftext|>Impact • Inaccurate test data Resolution • Understand the requirement from the QA perspective • Understand the database entities involved and the relationships 3. Lack of application knowledge System or application knowledge, especially of the data sources under the TDM scope, is a prerequisite for the TDM team. If teams possess limited knowledge about the application, External Document © 2018 Infosys Limited
---
Page: 6 / 8
---
it will result in writing incorrect test cases, raising ambiguous test data requirements, and finally, provisioning inaccurate data.<|endoftext|>Impact • Inaccurate test data • Increased defects due to incorrect test data Resolution • Understand the application with the help of SMEs • Understand the database entities involved and the relationships 4. Corrupted gold copy Most projects have a gold copy database available from which data is provisioned to the lower environments. If the gold copy is not refreshed periodically, or the data in the gold copy has been tampered with, it can cause issues while provisioning data.<|endoftext|>Impact • Inaccurate test data Resolution • Periodically refresh the gold copy database • Restrict access to the gold copy database 5. Data overstepping If the same set of test data is used by multiple teams for testing, it can lead to conflicts, and the test results will not be as expected.<|endoftext|>Impact • Affects test execution • Incorrect test results • Rework in test data provisioning and test execution Resolution • Data has to be reserved • A centralized TDM team can handle the test data requirements 6. Identifying correct sensitive fields and masking techniques While masking any application database, it is important that the correct sensitive fields are identified for masking, and that relevant masking techniques are applied to these fields. For example, an email ID should be masked in a way that retains the email format; otherwise, the masked value might break the application. Another point to consider is that when masking primary key columns, the masking has to be applied consistently to the child tables where those columns are referenced (a minimal sketch of both techniques follows the best practices list below).<|endoftext|>Impact • Data inconsistency across tables • Unnecessarily masked data Resolution • Identify sensitive fields belonging to categories such as PII, PHI, PCI, and financial data<|endoftext|>• Apply relevant masking techniques that preserve the format of the data Best practices Some of the best practices that can be adopted while implementing test data management processes in projects are listed below: • Automate TDM processes with reusable templates and checklists • Improve test data coverage and test data reuse by provisioning and preserving the right data • Analyze TDM metrics and take corrective actions • Data refresh to the gold copy can be automated using scripts • A batch mode of data masking can be implemented to improve performance without exposing sensitive data to testing teams • Test data can be used for configuring the masking rules, which can be replaced with production data for actual execution; thus, production data is not exposed to the execution team • Reusable configuration scripts for masking similar data (for example, similar data for a different region) • Develop automation scripts to automate any manual TDM-related activities • Develop data relationship architecture diagrams for the most commonly used tables for provisioning, which can be used as a reference External Document © 2018 Infosys Limited
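As a minimal, tool-agnostic sketch (not any specific TDM product) of the two masking points above, the snippet below shows format-preserving email masking and deterministic key masking that keeps parent and child tables consistent; the table and column names are hypothetical.

```python
import hashlib

def mask_email(email):
    """Mask the local part but keep the e-mail format, so the application
    still accepts the value (format-preserving masking)."""
    local, _, domain = email.partition("@")
    digest = hashlib.sha256(local.encode()).hexdigest()[:10]
    return f"user_{digest}@{domain}"

def mask_key(value, salt="tdm-demo-salt"):
    """Deterministically mask a key so the same input always yields the same
    output, keeping parent and child tables consistent after masking."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

# Hypothetical parent and child records sharing the cust_id key.
customers = [{"cust_id": "C1001", "email": "jane.doe@example.com"}]
orders = [{"order_id": "O1", "cust_id": "C1001"}]

masked_customers = [{"cust_id": mask_key(c["cust_id"]),
                     "email": mask_email(c["email"])} for c in customers]
masked_orders = [{"order_id": o["order_id"],
                  "cust_id": mask_key(o["cust_id"])} for o in orders]

# The foreign key still joins after masking because mask_key is deterministic.
assert masked_orders[0]["cust_id"] == masked_customers[0]["cust_id"]
print(masked_customers, masked_orders)
```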
---
Page: 7 / 8
---
Summary Reduced cost and improved management with faster time-to-market are the key points for any successful program. Centralized and automated test data management provides an organized approach to managing test data requirements across the organization in a better and more efficient way. Only the required masked, subsetted, and reusable data sets are stored centrally as gold copies, which are used for provisioning by the testing teams. Most of the TDM tools available in the market offer web-based solutions, which act as a single interface for both the testing and provisioning teams. Testing teams can place the test data request, and provisioning teams can address the request from a single portal. All test data requests are tracked using a single solution. A centralized, automated TDM system with streamlined processes introduces increased accuracy and predictability to the entire testing process. Implementing centralized test data management is certainly beneficial over the decentralized approach.<|endoftext|>Glossary: TDM – Test Data Management; PII – Personally Identifiable Information; PHI – Personal Health Information; PCI – Payment Card Information; SME – Subject Matter Expert; PoC – Proof of Concept Case study – centralized TDM for a leading pharmacy client Overview The client has a complex IT landscape with data spread across multiple portfolios including marketing, sales, corporate, pharmacy, and supply chain. Some of the applications across the portfolios have a federated relationship with related data. The TDM service engagement requirement was to establish a well-defined TDM process and governance that would address all the test data-related requests for projects under different portfolios, gradually expand the TDM services to newer portfolios, and finally consolidate them under the same umbrella.<|endoftext|>Problem statement • Identify the test data and data masking requirements in different portfolios and application databases • Perform gap analysis for the existing TDM processes • Establish a defined test data management process and governance • Implement an automated TDM process using the right tools • Provision test data for functional, automation, and performance testing teams • Adopt a metrics-based approach for the evaluation of the test data management implementation Challenges • Complex IT landscape with heterogeneous data source types • Lack of defined test data management processes / strategy • Manual TDM activities for data subsetting and masking • Lack of integrated data across systems • Sensitive data being moved to non-production environments without masking • Huge cycle time for generating test data, impacting test execution schedules Solution approach • Established a centralized TDM team to provision test data for functional and non-functional testing • Deployed a web-based, self-service tool for the testing teams to place data requests and for provisioning • Provisioned masked data to testing teams, ensuring compliance with PIPEDA (Personal Information Protection and Electronic Documents Act) • Established automated TDM processes and capabilities across portfolios • Made end-to-end testing easy by syncing test data across interdependent applications Benefits / value-adds • 20% reduction in test data provisioning cycle time
• Production data not exposed to testing teams • Repository of reusable masking and test data generation scripts • Automated TDM services reduced test data-related defects to zero, resulting in quality deliverables (The TDM service rollout map shows the Supply Chain, Pharmacy, Corporate, Sales, and Marketing portfolios, with their interdependent data, rolled out over a period of roughly 2 to 20 months.) External Document © 2018 Infosys Limited
---
Page: 8 / 8
---
Seema Varghese is a Technical Test Lead with Infosys, having 11 years of IT experience, including leading teams and developing testing expertise in different domains (retail, pharmacy, and telecom). She has worked in data migration and data warehouse testing projects. She also has experience handling TDM and data masking projects.<|endoftext|>© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
***
|
# Infosys Whitepaper
Title: A framework to increase ROI through quality data
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 8
---
WHITE PAPER A FRAMEWORK TO INCREASE ROI THROUGH QUALITY DATA Kuriakose K. K., Senior Project Manager
---
Page: 2 / 8
---
External Document © 2018 Infosys Limited
---
Page: 3 / 8
---
The perception of data across organizations is changing. Data is no longer just one of the components of business; it has turned out to be ‘the business.’ Today, information is viewed as a lifeline by senior management and many other departments for decision making, customer interactions, and optimal operations. An organization’s success depends heavily on how well it understands and leverages its data. Unfortunately, a large amount of inaccurate data persists within enterprises today, despite the multiple solutions and systems developed to counter it. Nowadays, a major portion of the data used for decision making is collected from external sources through a variety of channels. This often results in poor data quality, which has a negative effect on an organization’s ability to take decisions, run the business, budget, market, and satisfy customers. Organizations that fail to control the quality of their data are unable to sustain themselves in today’s data-centric world. Any data-driven effort needs a strong focus on data quality, which implies that organizations looking for success must prioritize data accuracy and accessibility. It is essential for them to interact with consumers, vendors, suppliers, and third parties in countless ways, exploring diverse new methods of communication. Information is key for areas like inventory management, shipment, and marketing. The objective of this paper is to analyze the principal challenges with data across a few key business functions and discuss a framework that can bring down the erroneous data that gets pumped in and out of an enterprise system.<|endoftext|>Marketing: If we have accurate information on who our customers are and what their needs are, we have hit gold in marketing terms. This is more easily said than done, as today we have neither accurate nor sufficient information about customers. We can gather information about customers from various sources like the website, physical store, mobile application, call center, face-to-face interactions, catalogues, etc. But one can never be sure whether these are the same or different sets of people consuming the services, and there is no surety that the information is accurate, as most of these channels accept data directly with limited or no validation. Now let us assume that we have done all possible validations and identified the target group of customers; there is still no defined method of reaching them: should it be through email, telephone, social media, or the physical address? Drilling down into one of these mediums, the physical address, the catch is that a customer has many addresses, such as those registered for the credit card, savings bank account, driving license, and office.<|endoftext|>Shipping The current status of shipments is constantly added to enterprise systems through shipping vendors like DHL Express, DHL Parcel, United States Postal Service (USPS), United Parcel Service (UPS), FedEx, Canada Post, LaserShip, OnTrac, and Hermes. Most of these vendors do not even share shipment history, so organizations are forced to store and link this continuous flow of information. Many times, incorrect data gets fed into the system through these external sources, for example, order shipment dates preceding order booking dates (a minimal check for this pattern is sketched after the list below). This results in: • Order lost in transit • Incorrect shipping address • Order sent to wrong address • Shipping wrong items • Late shipments External Document © 2018 Infosys Limited
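As referenced above, this is a minimal sketch, assuming hypothetical order records and field names, of a rule that flags shipment dates earlier than order booking dates.

```python
from datetime import date

def check_shipment_dates(orders):
    """Flag records where the shipment date precedes the order booking date,
    one of the impossible-data patterns described above."""
    return [o for o in orders if o["ship_date"] < o["order_date"]]

# Hypothetical order records; field names are assumptions for illustration.
orders = [
    {"order_id": "SO-1", "order_date": date(2018, 3, 2), "ship_date": date(2018, 3, 4)},
    {"order_id": "SO-2", "order_date": date(2018, 3, 5), "ship_date": date(2018, 3, 1)},
]
print(check_shipment_dates(orders))  # only SO-2 is flagged
```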
---
Page: 4 / 8
---
Inventory Inventory management can help a manufacturer / supplier improve accuracy, cost savings, and speed. This in turn helps organizations gain better control over operations and reduce the cost of goods. Today, most manufacturers face challenges in their inventory management systems. A few of these challenges are listed below: • Limited standardization in management systems, business users, inventory integration, and movement checkpoints • Limited inventory reconciliation at regular intervals • Data discrepancies between demand planning and inventory planning systems • Improper logging of inventory information • Inaccurate data fed into forecasting systems Banking Financial organizations are required to meet regulatory compliance requirements according to the law of the land to avoid instances such as the housing crisis. At the same time, data quality issues lead to transparency and accountability problems. Hence, the quality of data for banking needs to be measured along the dimensions of completeness, accuracy, consistency, duplication, and integrity. There is also a need to ensure that the information being shared complies with information privacy and protection laws and regulations.<|endoftext|>Pharma The pharmaceutical industry receives warnings at regular intervals for falsifying, altering, or failing to protect essential quality data on drug manufacturing processes and their validation, resulting in huge business risks. According to US Food and Drug Administration (USFDA) regulations, pharma companies are mandated to maintain manufacturing and drug testing data. Many times, issues occur due to human data entry errors and machine errors like data recording failures. These regulations have even resulted in the shutdown of plants, causing huge losses.<|endoftext|>Today’s state Many organizations are constantly investing in data quality to improve their efficiency and customer interactions through data insights. A majority of companies suffer from common data errors like incomplete or missing data, outdated information, and inaccurate data. This level of inaccurate data jeopardizes businesses that rely on business intelligence for key decisions. An organization’s current level of maturity can be assessed from the data quality maturity model given below. Insurance When it comes to the insurance industry, data not only helps run operations, but also helps ensure that claims carry the required and correct information. Generally, the following issues are found in claims data: • Invalid diagnosis codes • Incorrect pin codes of hospitals • Incomplete forms with missing crucial data like gender, patients’ stay in hospital, etc.<|endoftext|>• Inaccurate age data External Document © 2018 Infosys Limited
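Purely as an illustration (not from the paper), the sketch below scores two of the data quality dimensions mentioned above, completeness and duplication, against a tiny hypothetical claims extract; the column names and values are assumptions.

```python
import pandas as pd

def quality_profile(claims, required_cols):
    """Score two data quality dimensions for a claims extract: completeness of
    required fields (share of non-null values) and duplication of claim ids."""
    completeness = {col: 1.0 - claims[col].isna().mean() for col in required_cols}
    duplication = claims["claim_id"].duplicated().mean()
    return {"completeness": completeness, "duplication_rate": duplication}

# Hypothetical claims rows reflecting the issues listed above
# (missing gender, missing diagnosis code, repeated claim id).
claims = pd.DataFrame({
    "claim_id": ["CL1", "CL2", "CL2", "CL3"],
    "gender": ["F", None, None, "M"],
    "diagnosis_code": ["J10", "E11", "E11", None],
})
print(quality_profile(claims, ["gender", "diagnosis_code"]))
```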
---
Page: 5 / 8
---
Automated data quality framework: This calls for a strong quality framework that can validate standard business rules against the processed data coming from external sources into enterprise systems. The framework should be able to report incorrect data and related information, with easily configurable rules and threshold values that business users can set as simple text through a user interface directly in the framework (a minimal sketch of such a rule-driven check appears after the maturity model below). The framework can connect to almost all kinds of data sources: mainframes, file systems, relational database management system (RDBMS) systems, analytical databases such as columnar, massively parallel processing (MPP), and in-memory databases, NoSQL databases, Hadoop, web services, packaged enterprise applications, OLAP applications, software as a service, and cloud-based applications.<|endoftext|>The details of common business rules are also collected by our subject matter experts (SMEs) in retail, consumer packaged goods (CPG), logistics, manufacturing, banking, healthcare, insurance, life sciences, capital markets, financial services, cards and payments, energy, utilities, communications, and services. This helped create a backbone for our standard quality framework, where one can add or remove rules according to the specific business need.<|endoftext|>The user can pick a set of business rules and schedule them as needed. An automated report is generated and emailed to the concerned parties. It is recommended to go with an open source solution to bring down the cost of development and maintenance of the tool. We used a combination of tools, Talend and Python scripts, for the development. This framework can also be based on other open source solutions like KETL, Pentaho Data Integrator (Kettle), Talend Open Source Data Integrator, Scriptella, Jaspersoft ETL, GeoKettle, CloverETL, HPCC Systems, Jedox, Python, and Apatar. The framework can be enhanced further to carry out data profiling and data cleansing on an ‘as-needed’ basis.<|endoftext|>The Infosys Automated data quality framework consists of a configurable data quality (DQ) framework with built-in, reusable rules across domains, with the following: • Standard business rules which can validate the processed data and information from third parties
• Framework reports incorrect data and related information crossing the thresholds • Capability to easily configure rules and threshold values independently • Daily automated report generation, post job completion, enabling independent operations for business Data quality maturity model • Think & Act Local: limited awareness of data quality; certain defined rules for data quality and integration for a specific module, based on production issues encountered; duplication of data across systems with no established system for data quality validation • Think Global & Act Local: data inconsistency across systems; organizations / programs accepting the impact of inconsistent, inaccurate, or unreliable data; steps initiated to identify corrupt data; gains defined more at the project level • Think Global & Act Collectively: data inconsistency across systems; organizations / programs accepting the impact of inconsistent, inaccurate, or unreliable data; steps initiated to identify corrupt data; gains defined more at the project level • Think & Act Global: establishment of a well-defined and unified data governance model; regular checks and reporting of established business rules and data quality at a defined frequency; organization-wide shift toward management of data as a business-critical asset • Matured Data Governance Model: system information and data is trusted; key metrics for data quality are tracked against the defined variance percentage; action items are tracked to closure for any variances beyond the agreed limit; ROI for data quality projects is tracked External Document © 2018 Infosys Limited
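As referenced earlier, here is a minimal, illustrative sketch of a configurable, rule-driven data quality check of the kind the framework automates; the rule names, thresholds, and columns are assumptions, and the framework described above is built on Talend and Python rather than this standalone snippet.

```python
import pandas as pd

# Business rules expressed as data, so non-developers could maintain them in a
# UI or config file; the rule names, thresholds, and columns are illustrative.
RULES = [
    {"name": "no_negative_order_amounts",
     "check": lambda df: (df["amount"] >= 0).mean(), "min_pass_rate": 1.00},
    {"name": "unique_order_ids",
     "check": lambda df: 1.0 - df["order_id"].duplicated().mean(),
     "min_pass_rate": 1.00},
]

def run_rules(df):
    """Evaluate every configured rule and build the report that would be
    scheduled and e-mailed to the concerned parties."""
    rows = []
    for rule in RULES:
        rate = float(rule["check"](df))
        rows.append({"rule": rule["name"], "pass_rate": round(rate, 3),
                     "status": "PASS" if rate >= rule["min_pass_rate"] else "FAIL"})
    return pd.DataFrame(rows)

# Hypothetical order data containing a negative amount and a duplicate id
orders = pd.DataFrame({
    "order_id": ["SO-1", "SO-2", "SO-2"],
    "amount": [120.0, -5.0, 80.0],
})
print(run_rules(orders))
```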
---
Page: 6 / 8
---
Realizing the return on investment (ROI) for data quality Today, businesses need relevant data to make informed decisions. Decisions and communications based on bad data carry substantial risks to business performance. For any data-driven organization, it is important to ensure that the utmost standards of data quality are met and that scheduled processes are in place to validate the quality of the data being pumped in and out of the organization. We also need to ensure that a structured methodology is followed for data quality metric definition and its validation at regular intervals. A few possible outcomes of a successful implementation of a strong data quality framework are: • Marketing: Accurate data helps drive more effective campaigns for the intended target audience • Shipping: Cost savings and operational efficiencies achieved with basic address validation and order-related data quality checks • Inventory: Faster turnover of stock • Insurance: Complete information about the client’s risk exposure, enabling more accurate decisions on policy costs • Banking: Ability to detect fraud patterns and improved customer service • Pharma: Greater compliance with FDA regulations External Document © 2018 Infosys Limited
---
Page: 7 / 8
---
External Document © 2018 Infosys Limited
---
Page: 8 / 8
---
© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
***
|
# Infosys Whitepaper
Title: The future of enterprise test automation
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 8
---
WHITE PAPER RPA: THE FUTURE OF ENTERPRISE TEST AUTOMATION
---
Page: 2 / 8
---
RPA: The future of enterprise test automation If one word defines the 21st century, it would be speed. Never before has progress happened at such a breakneck pace. We have moved from discrete jumps of innovation to continuous improvement and versioning. The cycle time to production is at an all-time low. In this era of constant innovation, where the supreme need is to stay ahead of the competition and drive exceptional user experiences, the quality of the product deployed in production is paramount to ensure that speed does not derail the product as a whole. A robust testing mechanism ensures quality while allowing faster releases and shorter time to market, so essential for that competitive edge. The state of testing and validation in the enterprise Today, USD 550 billion is spent on testing and validation annually, and testing is the second largest IT community in the world. That is a significant investment and effort being put into this space already, but is it delivering results? In the past five or so years, there has been a push from CXOs, based on recommendations from industry experts and analysts, to go for extreme automation. Companies have been adopting multiple tools and open source technologies, and building enterprise automation frameworks. This attempt to achieve end-to-end automation has created a mammoth network of tool sets in the organization that may or may not work well with each other. This is how test automation was done conventionally; it still requires elaborate effort to build test scripts, significant recurring investment in subscriptions and licensing, and training and know-how for multiple tools. By some estimates, traditional testing can take up to 40% of total development time – that is untenable in the agile and DevOps modes companies operate in today. What if this ongoing effort could be eliminated? What if the need for multiple tools could be done away with? Enter Robotic Process Automation (RPA) in testing. While not originally built for testing, RPA tools show great potential to make testing more productive and more efficient, and to help get more features to the market faster, giving them an edge over conventional tools (see Fig 1). Fig. 1: Traditional automation tools vs. RPA • Coding knowledge. Traditional tools: coding knowledge is essential to develop automated scripts; programming knowledge and effort are needed to build the framework and generic reusable utilities and libraries. RPA tools: offer codeless automation; developing automated scripts requires some effort for configuration and workflow design, but coding is minimal compared to traditional tools, and generic reusable utilities are available as plug-and-play components • Maintenance. Traditional tools: extensive maintenance effort required. RPA tools: minimal test maintenance effort required • Cognitive automation. Traditional tools: no support for cognitive automation. RPA tools: popular for supporting cognitive automation by leveraging AI • Plugin support. Traditional tools: limited plugins available for different technologies. RPA tools: plugins available for all leading technologies • Orchestration and load distribution. Traditional tools: load distribution during execution requires additional effort to develop the utilities and set up the infrastructure. RPA tools: this feature is available in most RPA tools; for example, a feature of a popular RPA tool helps in load distribution during execution without any additional effort aside from configuration • Automation development productivity. Traditional tools: test development productivity is low since custom coding is required most of the time. RPA tools: test development productivity is high as most generic activities are available as plug-and-play • OCR for text recognition. Traditional tools: not available. RPA tools: available in all RPA tools • Advanced image recognition. Traditional tools: not available; additional scripting or a third-party tool is needed. RPA tools: available in all RPA tools • In-built screen and data scraping wizards. Traditional tools: not available; requires integration with other tools. RPA tools: available in all RPA tools External Document © 2020 Infosys Limited
---
Page: 3 / 8
---
RPA – the next natural evolution of testing automation In the last decade, automation has evolved and matured along with changing technologies. As discussed, automation in testing is not new, but its effectiveness has been a challenge – especially the associated expense and lack of skill sets. RPA can cut through the maze of tool sets within an enterprise, replacing them with a single tool that can talk to heterogeneous technology environments. From writing stubs, to record and playback, to modular and scriptless testing, and now to bots, we are witnessing a natural evolution of test automation. In this 6th Gen testing brought about by RPA orchestration, an army of bots will drastically change the time, effort, and energy required for testing and validation. We are heading towards test automation that requires no script and no touch, works across heterogeneous platforms, creates extreme automation, and allows integration with open source and other tools.<|endoftext|>According to Forrester¹, "RPA brings production environment strengths to the table." This translates into production-level governance, a wide variety of use cases, and orchestration of complex processes via layers of automation. RPA allows companies to democratize automation very rapidly within the testing organization. RPA has an advantage over traditional tools in that it can be deployed where they fail to deliver results (see Fig 2). For instance, when: • the testing landscape is heterogeneous with complex data flows • there is a need for attended and unattended process validation • there is a need to validate digital system needs An RPA solution can bring a tremendous amount of simplicity to building out bots quickly and deploying them with minimal technical know-how and skills, such that even business stakeholders can understand them.<|endoftext|>External Document © 2020 Infosys Limited
---
Page: 4 / 8
---
However, the challenges in testing are not limited to writing and executing test cases. The automation also needs to handle the periphery of testing activities – validating that all components of the environment are up and running and that test data is available on time. This dependency on peripheral activities, and the teams running them, could cost valuable time. For instance, for a large banking client, this dependency put a lot of pressure on the testing team to finish a sprint in 5 days. Using RPA, we were able to automate the batch monitoring and batch rendering process. We also automated the synthetic test data creation and data mining processes, reducing time to market by 40%. To really provide value for testing and validation, RPA needs to provide some very testing-specific capabilities, such as: A Cohesive Automation Platform: Enabling enterprises to leverage the full potential of automation with process discovery and process automation (attended, unattended, and UI based), combined with process orchestration capabilities. This should include a test automation interface that can bridge the gap between test management tools and automated test cases. A workflow-driven test automation approach can make the solution business-centric. Native AI Capabilities: A cognitive engine can leverage various data sources to deliver pervasive intelligence across process design, management, and execution.<|endoftext|>Security and Scalability: The solution should allow running multiple bots on a single virtual machine, have robust access management with a credential vault built into the product, and offer out-of-the-box technology-specific adaptors Fig 2. How RPA tools address traditional testing challenges • Test data management: Data-driven testing is supported by many traditional tools. RPA can manage data from files like Excel/JSON/XML/DB and use these for testing • Testing in different environments: End-to-end business processes navigate through various environments like mainframe/web/DB/client-server applications. RPA tools can easily integrate the process across multiple systems, simplifying business orchestration and end-to-end testing compared to other testing tools • Traceability: While RPA tools do not directly provide test script traceability, there are methods to enable this functionality. For instance, user stories/requirements stored in JIRA can be integrated with RPA automation scripts using Excel mappings to create a wrapper that triggers execution (a sketch of such a wrapper follows the figure) • Script versioning: A batch process can be implemented in the RPA tool to address this • CI-CD integration: This is available in most RPA tools • Reporting and defect logging: RPA tools have comprehensive dashboards that showcase defects, which can be logged in Excel or JIRA through a suitable wrapper • Error handling: This feature is available in all RPA tools External Document © 2020 Infosys Limited
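A minimal, hedged sketch of the traceability wrapper idea from Fig 2 (it is not any specific RPA tool's API): a CSV file stands in for the Excel mapping, and a plain Python script stands in for the mapped automation workflow; all file names and IDs are hypothetical.

```python
import csv
import subprocess
import sys

def load_mapping(path):
    """Read the story-to-script mapping (a CSV standing in for the Excel sheet
    mentioned in Fig 2) into a {story_id: script_path} dictionary."""
    with open(path, newline="") as f:
        return {row["story_id"]: row["script_path"] for row in csv.DictReader(f)}

def run_for_stories(story_ids, mapping):
    """Trigger the mapped automation script for each selected user story and
    collect a simple traceability record of what ran and how it ended."""
    results = []
    for sid in story_ids:
        script = mapping.get(sid)
        if script is None:
            results.append({"story": sid, "status": "NO_SCRIPT_MAPPED"})
            continue
        # Hypothetical: in practice this line would launch the RPA tool's
        # runner for the mapped workflow instead of a plain Python script.
        rc = subprocess.run([sys.executable, script]).returncode
        results.append({"story": sid, "status": "PASS" if rc == 0 else "FAIL"})
    return results

# Minimal self-contained demo: write a dummy script and mapping, then run.
with open("dummy_check.py", "w") as f:
    f.write("print('regression check ok')\n")
with open("story_map.csv", "w", newline="") as f:
    csv.writer(f).writerows([["story_id", "script_path"],
                             ["JIRA-101", "dummy_check.py"]])
print(run_for_stories(["JIRA-101", "JIRA-999"], load_mapping("story_map.csv")))
```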
---
Page: 5 / 8
---
AssistEdge for Testing in Action One of our clients, a large investment company based in Singapore, realized the benefits of RPA-based testing when it helped them save 60% of their testing effort. They were running legacy modernization as a program using mainframe systems, which are notoriously difficult to automate using traditional automation tools. RPA, with its AI and OCR capabilities and its ability to traverse and handle any technology, was easily able to automate 800+ test cases on the mainframe.<|endoftext|>In another instance, a large banking client was using package-based applications that relied on multiple technologies to build different screens. It is difficult to integrate multiple tools in such a scenario. With RPA, we were able to automate the end-to-end workflow for each application using just one tool. This helped reduce the overall maintenance effort by over 30%. Another one of our clients was facing a quality assurance (QA) challenge where bots were being deployed without testing. We developed specific QA bots with added exception handlers to check whether a bot actually handles exceptions and, if it fails, whether it returns to its original state. By validating the bots, we improved the overall efficiency by 30%.<|endoftext|>External Document © 2020 Infosys Limited
---
Page: 6 / 8
---
Take advantage of the Edge The COVID-19 pandemic has accelerated organizations’ need to be hyper-productive. Companies are realizing that they have to transform to build the capabilities that will prepare them for the future. Companies are thinking of ways to drive efficiency and effectiveness to a level not seen before. There is a strong push for automation to play a central role in making that happen. This is also reflected in the testing domain, where any opportunity for improvement is welcome.<|endoftext|>Realizing the need to drive testing efficiencies and reduce manual effort, organizations want to adopt RPA in testing. We are at a tipping point where the benefits of RPA adoption are clear; what is needed is that first step towards replacing existing frameworks. Recognizing the potential of RPA in testing, EdgeVerve and Infosys Validation Solutions (IVS) have been helping clients simplify and scale up test automation with AssistEdge for testing. AssistEdge brings the experience of handling tens of thousands of processes across different technologies and environment systems to test automation, helping navigate heterogeneous environments with ease. By building an army of bots for functional, regression, and user acceptance testing, it can help achieve 100% test automation with incredible accuracy. In addition to being faster to build and deploy, AssistEdge reduces the overall time to value as well as the investment needed for deploying and managing RPA infrastructure. Infosys Validation Solutions’ (IVS) engineering-led QA capabilities enable enterprises to effortlessly scale up testing in real time, delivering unprecedented accuracy, flexibility, and speed to market. With its vast experience, IVS enables clients across industry verticals to successfully implement an automated testing strategy, allowing them to move away from tedious and error-prone manual testing, thereby improving performance and software quality while simultaneously delivering effort and cost savings.<|endoftext|>The journey to RPA-based test automation has to start somewhere, and those that adopt faster will hold a competitive advantage through faster realization of benefits. The question is, are you willing to take the leap? Want to know more about the potential of RPA in testing? Write to us at askus@infosys.com External Document © 2020 Infosys Limited
---
Page: 7 / 8
---
About AssistEdge AssistEdge offers a cohesive automation platform that enables enterprises to scale in their automation journey. It offers enterprises a comprehensive suite of products that enables them to drive initiatives around process discovery, intelligent automation and digital workforce orchestration. AssistEdge has helped enterprises unlock value in the form of reduced service time, faster sales cycles, better resource allocation, accelerated revenue recognition and improved efficiency, among others.<|endoftext|>About EdgeVerve EdgeVerve Systems Limited, a wholly owned subsidiary of Infosys, is a global leader in AI and Automation, helping clients thrive in their digital transformation journeys. Our mission is to create a world where our technology augments human intelligence and creates possibilities for enterprises to thrive. Our comprehensive product portfolio across AI (Infosys Nia), Automation (AssistEdge) and AI-enabled Business Applications (TradeEdge, FinXEdge, ProcureEdge) helps businesses develop deeper connections with stakeholders, power continuous innovation and accelerate growth in the digital world. Today EdgeVerve's products are used by global corporations across financial services, insurance, retail, consumer & packaged goods, life sciences, manufacturing, telecom and utilities. Visit us to know how enterprises across the world are thriving with the help of our technology. https://www.edgeverve.com/ About Infosys Infosys is a global leader in next-generation digital services and consulting. We enable clients in 46 countries to navigate their digital transformation. With over three decades of experience in managing the systems and workings of global enterprises, we expertly steer our clients through their digital journey. We do it by enabling the enterprise with an AI-powered core that helps prioritize the execution of change. We also empower the business with agile digital at scale to deliver unprecedented levels of performance and customer delight. Our always-on learning agenda drives continuous improvement through building and transferring digital skills, expertise, and ideas from our innovation ecosystem.<|endoftext|>
---
Page: 8 / 8
---
© 2020 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected About the
Authors Vasudeva Naidu AVP – Delivery Head Infosys Validation Solutions Sateesh Seetharamiah VP – Global Product Head – AssistEdge EdgeVerve References: ¹"RPA And Test Automation Are More Friends Than Foes", Forrester Research, Inc., May 15, 2020 https://www.infosys.com/services/it-services/validation-solution/white-papers/documents/rpa-tool-testing.pdf https://www.infosys.com/services/it-services/validation-solution/documents/automation-testing-assistedge.pdf https://www.ibeta.com/risks-of-not-testing-software-properly/#:~:text=The%20cost%20to%20fix%20bugs,profit%20loss%20during%20software%20downtime.<|endoftext|>
***
|
# Infosys POV
Title: Energy Transition: Hydrogen for Net Zero
Author: Infosys Consulting
Format: PDF 1.7
---
Page: 1 / 11
---
An Infosys Consulting Perspective By Sundara Sambasivam & Shivank Saxena Consulting@Infosys.com | InfosysConsultingInsights.com Energy Transition Hydrogen for Net Zero
---
Page: 2 / 11
---
Energy transition: Hydrogen for Net Zero | © 2022 Infosys Consulting 2 Energy transition: Hydrogen for Net Zero The pressure to reduce carbon emissions and achieve the target of net zero emissions by 2050 is ever-increasing. There is no silver bullet, no 'one-size-fits-all' solution to address this challenge. Many different energy sources, with varying levels of investment, are currently being explored and tested to enable our transition towards net zero.<|endoftext|>Hydrogen (H2) is one of the most abundant elements found in nature. It is considered a key component in the decarbonization of industry, opening new frontiers and complementing existing solutions. This series of papers aims to share some interesting perspectives on this sector, the associated challenges, and why hydrogen could play a significant role in the decarbonization agenda. Current technology limitations, scaling challenges, and feasibility concerns are just some of the reasons it has not yet been fully harnessed. However, hydrogen has significant potential to support this challenging journey towards net zero.<|endoftext|>
---
Page: 3 / 11
---
Types of Hydrogen Both the production source and the process used define the hydrogen type. The diverse hydrogen types produced today are classified by production method and source, as summarized in the hydrogen colour chart (The hydrogen colour chart, 2022).<|endoftext|>Energy transition: Hydrogen for Net Zero | © 2022 Infosys Consulting 3
---
Page: 4 / 11
---
Energy transition: Hydrogen for Net Zero | © 2022 Infosys Consulting 4 MARKET OUTLOOK - Production & Economies Production and demand outlook According to the International Energy Agency's 2021 report on hydrogen, only 0.49 Mt of hydrogen was produced via electrolysis. Although this was only about 0.5% of overall global production, the outlook for green and blue hydrogen is promising. Hydrogen has become an essential element of any national energy-transition policy aimed at net zero. By 2050, more than 80% of production is estimated to be green or blue hydrogen. Demand will primarily be driven by power, transport, and industry, where demand for green hydrogen has the potential to grow by 200% by 2050.<|endoftext|>Figure 2: Global hydrogen production and demand outlook (Harnessing Green Hydrogen: Opportunities for Deep Decarbonisation in India, 2022)
---
Page: 5 / 11
---
Economic outlook The current hydrogen production costs for different methods are listed in Figure 3 (Hydrogen Strategy: Enabling a Low-Carbon Economy, 2020). Coal and other fossil fuel-based production is inexpensive at around 2 USD/kg. Prices increase by 10 to 20% when carbon capture and storage (CCS) is added. Electrolysis powered by renewable energy (RE) is the most expensive at 5 to 10 USD/kg and is not currently price-competitive. This needs to fall to around 2 USD/kg or lower over the next decade to compete directly with fossil fuels as an energy source (see the indicative comparison below).<|endoftext|>Several elements will play a critical role in driving down the cost of the end-to-end supply chain of production and distribution. These include higher levels of innovation through research and development (R&D) and the right investment in disruptive digital technologies like artificial intelligence, the Internet of Things, blockchain-based smart contracts and certificates, and digital twins.<|endoftext|>Energy transition: Hydrogen for Net Zero | © 2022 Infosys Consulting 5 Figure 3: Hydrogen production costs by source and method (Hydrogen Strategy: Enabling a Low-Carbon Economy, 2020)
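Put together, the source's round numbers give a sense of the gap to close. This is an indicative back-of-the-envelope comparison only, not a forecast:

```latex
\[
\text{Fossil + CCS: } 2\,\tfrac{\text{USD}}{\text{kg}} \times (1.10\text{--}1.20) \approx 2.2\text{--}2.4\,\tfrac{\text{USD}}{\text{kg}},
\qquad
\text{RE electrolysis: } 5\text{--}10\,\tfrac{\text{USD}}{\text{kg}} \;\Rightarrow\; \text{a roughly } 60\text{--}80\%\ \text{cost reduction is needed to reach} \sim 2\,\tfrac{\text{USD}}{\text{kg}}.
\]
```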
---
Page: 6 / 11
---
Economic outlook Renewables and electrolyser costs drive green hydrogen prices, and both are showing declining trends. Electrolyser costs are expected to fall by 30% in the next ten years (Harnessing Green Hydrogen, 2022). Industrial manufacturers like Siemens Energy and Linde have already started setting up some of the world's biggest electrolyser production facilities in line with the European Union's (EU) strategy (REPowerEU plan, May 2022) for fuel diversification, which will need a direct investment of 27 billion EUR in domestic electrolysers and hydrogen distribution in the EU, excluding the investment in solar and wind electricity (REPowerEU Plan, 2022).<|endoftext|>The US, on the other hand, has announced future investments of up to 9 billion USD from 2022 to 2026 through its 'Infrastructure Investment and Jobs Act' (García-Herrero et al., 2021). The key difference is that US policy plans to use both blue and green hydrogen in the fuel mix, while the EU views blue hydrogen as a temporary solution only. Based on policy support and market conditions, the industry will decide on a future roadmap. Green credits and green hydrogen trading can turn many fossil fuel-dependent countries into future energy suppliers. Various states and corporates are funding greenfield and brownfield projects, which has created financing opportunities for venture capital firms, underwriters, and insurers.<|endoftext|>Energy transition: Hydrogen for Net Zero | © 2022 Infosys Consulting 6 Figure 4: Renewables and electrolyser cost outlook (Harnessing Green Hydrogen, 2022)
---
Page: 7 / 11
---
Economic outlook Energy transition: Hydrogen for Net Zero | © 2022 Infosys Consulting 7 Figure 5: Renewables and electrolyser cost outlook (Harnessing Green Hydrogen, 2022)
---
Page: 8 / 11
---
Figure 6: Hydrogen value chain opportunities Hydrogen Value Chain Opportunities Figure 6 outlines the end-to-end value chain from production and electrolyser plant setup, operations in conjunction with RE parks, storage (long- and short-term), distribution (liquefied or gaseous), and consumption applications (power, transportation, and industries). It gives an overview of the current usage of hydrogen in industry applications. The emerging areas where significant growth opportunities exist are primarily transportation (heavy-duty vehicles and shipping), long-term energy storage (sub-surface), and green ammonia (production and energy carrier). Hydrogen can contribute directly to decarbonizing the biggest polluters like steel, refineries, and ammonia production. Although hydrogen burns cleanly, its production is not clean: hydrogen production from fossil fuels resulted in 900 Mt of CO2 emissions in 2020 (Global Hydrogen Review 2021, 2021). High demand for green and blue hydrogen and hydrogen-based fuels could reduce up to 60 Gt of CO2 emissions between 2021 and 2050, accounting for a reduction of 6% of total cumulative emissions (Hydrogen, 2022).<|endoftext|>Some of the biggest polluters in the transportation sector include long-haul freight, heavy-duty vehicles, maritime, and jet fuel, and decarbonizing them is not easy. By 2050, green ammonia could meet 25% of shipping fuel demand, helping meet the International Maritime Organization's goal of reducing CO2 emissions by 50% from 2008 levels. Hydrogen fuel cells can power short-distance services such as ferry journeys (Harnessing Green Hydrogen, 2022). With the growth of air travel, a significant increase in carbon footprint is expected in aviation, which already has the highest carbon emission intensity. Options like hydrogen fuel cells, hydrogen turbines, and hydrogen-based electrolytic synthetic fuel exist to decarbonize aviation, but each option has its merits and demerits. Big corporations like Airbus and start-ups like ZeroAvia have already presented their roadmaps for hydrogen-powered aircraft in the next decade.<|endoftext|>For buildings, hydrogen can be blended into existing gas networks for both residential and commercial complexes. It can also be used by boilers and fuel cells. Its biggest promise is in long-term energy storage, which will impart stability to renewables-based generation and grid operations. Today, new gas turbines can also use hydrogen as a fuel component.<|endoftext|>Energy transition: Hydrogen for Net Zero | © 2022 Infosys Consulting 8 Opportunities for the industry
---
Page: 9 / 11
---
What's next? In our next articles, we will discuss the challenges of this emerging sector, some exciting industry projects underway around hydrogen, and the support and digital solutions needed to help pave the way to net zero. Infosys Consulting |
Continue # Infosys POV
achieved its net zero goals 30 years ahead of time and is working to help our partners in their energy transition journey towards their own net zero goals.<|endoftext|>Energy transition: Hydrogen for Net Zero | © 2022 Infosys Consulting 9
---
Page: 10 / 11
---
MEET THE EXPERTS Sundara Sambasivam Associate Partner - Services, Utilities, Resources and Energy Practice Sundara.Sambasivam@infosys.com "The lines between digital and physical retail will continue to blur" Sources • García-Herrero, A., Tagliapietra, S. & Vorsatz, V. (2021), 'Hydrogen development strategies: a global perspective', Bruegel, August, [Online], Link [Accessed: 21 Nov 2022].<|endoftext|>• 'Global Hydrogen Review 2021', (2021), International Energy Agency: IEA, [Online], Link [Accessed: 16 Nov 2022].<|endoftext|>• 'Harnessing Green Hydrogen: Opportunities for Deep Decarbonisation in India', (2022), Niti Aayog & Rocky Mountain Institute (RMI), June, [Online], Link [Accessed: 16 Nov 2022].<|endoftext|>• 'Hydrogen', (2021), International Energy Agency: IEA, [Online], Link [Accessed: 16 Nov 2022].<|endoftext|>• 'Hydrogen', (2022), International Energy Agency: IEA, [Online], Link [Accessed: 16 Nov 2022].<|endoftext|>• 'Hydrogen Strategy: Enabling A Low-Carbon Economy', (2020), U.S. Department of Energy, July, [Online], Link [Accessed: 16 Nov 2022].<|endoftext|>• 'REPowerEU Plan', (2022), European Commission, [Online], Link [Accessed: 16 Nov 2022].<|endoftext|>• 'The hydrogen colour chart', (2022), National Grid, [Online], Link [Accessed: 16 Nov 2022].<|endoftext|>Shivank Saxena Senior Consultant - Services, Utilities, Resources and Energy Practice Shivank01@infosys.com Energy transition: Hydrogen for Net Zero | © 2022 Infosys Consulting 10 With over 22 years of global experience, Sundar has led a number of business and digital transformation and outcome-based efficiency turnaround programmes across Energy and Utilities (Transmission & Distribution). Sundar is excited to collaborate with and help our clients navigate the journey of energy transition towards their net zero ambitions.<|endoftext|>With over 11 years of experience, Shivank has led digital transformation projects, enabling end-to-end systems delivery for clients across industries and sectors. He has ensured sustained value delivery on multiple engagements by building roadmaps and driving planning-to-execution for various business-led initiatives. He is passionate about supporting the industry in meeting its net zero goals, and currently helps clients innovate to drive energy transition initiatives.<|endoftext|>
---
Page: 11 / 11
---
consulting@Infosys.com InfosysConsultingInsights.com LinkedIn: /company/infosysconsulting Twitter: @infosysconsltng About Infosys Consulting Infosys Consulting is a global management consulting firm helping some of the world's most recognizable brands transform and innovate. Our consultants are industry experts who lead complex change agendas driven by disruptive technology. With offices in 20 countries and backed by the power of the global Infosys brand, our teams help the C-suite navigate today's digital landscape to win market share and create shareholder value for lasting competitive advantage. To see our ideas in action, or to join a new type of consulting firm, visit us at www.InfosysConsultingInsights.com. For more information, contact consulting@infosys.com © 2022 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names, and other such intellectual property rights mentioned in this document. Except as expressly permitted, neither this document nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printed, photocopied, recorded or otherwise, without the prior permission of Infosys Limited and/or any named intellectual property rights holders under this document.
***
|
# Infosys Whitepaper
Title: Leveraging advanced validation techniques for retail size optimization
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 8
---
WHITE PAPER LEVERAGING ADVANCED VALIDATION TECHNIQUES FOR RETAIL SIZE OPTIMIZATION - Divya Mohan C, Chitra Sylaja
---
Page: 2 / 8
---
Introduction In today's highly volatile business environment, retailers that want to remain profitable must be able to predict customer demand and ensure availability of the right products in the right store at the right time. This is a challenging task when merchandise such as apparel and footwear is offered in a range of sizes. To maximize revenue and profitability, retailers need a strategy that allows them to sell goods at full price while reducing markdowns.<|endoftext|>External Document © 2018 Infosys Limited
---
Page: 3 / 8
---
How does size optimization help retailers? Size optimization transforms historical sales and inventory data into size-demand intelligence. This enables smart buying and allocation at the size level to match customer needs at each store. The size profile optimization (SPO) application provides optimized size curves for products at the store level (a simple worked sketch of a size curve appears at the end of this section). SPO uses past sales history to deliver the right sizes to the right stores and reduces the imbalance between consumer demand and inventory. By leveraging a combination of back-end data crunching technologies and front-end size profiling tools, SPO enables allocators, buying coordinators and buyers to leverage sales and inventory data and create precise style/color size scales for each store. This degree of specificity can help buyers and allocators create size profiles that recover lost sales, improve the size balance across the retail chain and launch new products and categories based on previous successes.<|endoftext|>Key challenges in size optimization Retailers find it challenging to procure and maintain the right mix of merchandise in the right size at the right store. The store-specific size profile plays an important role in helping retailers make assortment and allocation decisions. As the competition evolves, new challenges arise and retailers need suitable approaches to counter them.<|endoftext|>Ability to understand consumer demand Customer demand varies across stores. Understanding the size-level demand at each store for different products is critical to meeting each unique customer demand. This requirement can be accurately captured and analyzed through granular data that represents size profiles for every store/product.<|endoftext|>Historical performance is not true customer demand As size profiling is based mainly on sales history, a cleansed history is necessary to generate an accurate size profile. Typically, historical data comprises profitable sales, lost sales and markdowns. To get relevant size profiling data, it becomes necessary to filter out inventory data related to stock-outs, markdowns, margins, etc., from the consolidated data. This ensures that the right data set is used to design better analytical models and prevent biases arising from extreme data points.<|endoftext|>Analyze and process large volumes of data Gathering information at the granular level of store and size can generate large data volumes that are difficult to handle and analyze. Retailers may find it challenging to derive impactful business decisions from such large data sets.<|endoftext|>Key influencers in size optimization: Predicting customer decisions Consumers are always looking for specific merchandise; say, for instance, a shoe bearing the name of a basketball legend. If the store does not have this particular product, it should be able to offer the customer a suitable alternative through smart product grouping. Size profile optimization can help retailers predict similar products and position them appropriately within the store.<|endoftext|>External Document © 2018 Infosys Limited
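To make the idea of a size curve concrete, the sketch below shows one simple way a store-level size profile could be derived from cleansed sales history. It is a minimal illustration, not the SPO engine's actual algorithm; the column names (store, style_color, size, units_sold) are assumptions made for this example.

```python
import pandas as pd

# Illustrative cleansed sales history (stock-outs and markdowns already filtered out).
sales = pd.DataFrame({
    "store":       ["S01", "S01", "S01", "S02", "S02", "S02"],
    "style_color": ["123-143"] * 6,
    "size":        ["S", "M", "L", "S", "M", "L"],
    "units_sold":  [40, 110, 50, 20, 60, 120],
})

# A size profile (size curve) is each size's share of demand for a store/style-color,
# so the shares within one store/style-color always total 100%.
totals = sales.groupby(["store", "style_color"])["units_sold"].transform("sum")
sales["size_profile_pct"] = 100 * sales["units_sold"] / totals

print(sales.pivot_table(index=["store", "style_color"],
                        columns="size", values="size_profile_pct").round(1))
```

Note how the two stores end up with very different curves for the same style-color, which is exactly the store-level specificity the paper describes.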
---
Page: 4 / 8
---
Region-specific demand for products Certain products have high customer demand in specific regions. Size profile optimization can analyze historical data to ensure adequate stock of high-demand products in these stores.<|endoftext|>Reducing imbalance between consumer demand and inventory Customer demand for merchandise depends on factors such as holiday seasons, upcoming special events such as a marathon, etc. Size optimization can predict the type of merchandise that needs to be in stock during these seasons. This ensures that the inventory is never out of stock for such products.<|endoftext|>Recovering lost sales On the other hand, owing to inaccurate or excessive merchandise allocation, some stores are forced to conduct clearance sales at the end of a season to reduce their inventory. SPO can assist retailers in allocating accurate size profiles, thereby ensuring only the required inventory is stocked to meet existing business demand.<|endoftext|>Allocating new products/new stores based on similar products/stores Size profile optimization goes beyond allocating size profiles based on historical sales and inventory data. Retail merchandising mandates that the allocation of a new article to any store is profitable to the company. The SPO engine can map a new article to similar existing articles and create profiles based on these. Similarly, SPO can map a new store to similar existing stores and create relevant profiles.<|endoftext|>{{ img-description: demand transfer walk for men's apparel tops, showing switch points from style-color/size (short-sleeve style-color 123-143 and long-sleeve 132-145, each in sizes S, M, L, XL, XXL) up through style and style-color, navigating at most up to the class level }} External Document © 2018 Infosys Limited
---
Page: 5 / 8
---
Functional testing Functional testing is performed to ensure that size profiling is done according to business expectations. There are four key ways to conduct functional testing: 1. Validate pre-load data 2. Analyze data in SPO 3. Validate the size profile engine based on business rules 4. Validate business intelligence reports Validation of pre-load data Data from various sources such as point-of-sale (POS) terminals, digital stores, etc., is loaded into the database. The raw data can exist in any form (flat file or XML). To verify that the right data is fed into the database, different validation techniques can be used. These include: • Comparing data received from different input source systems through XML/flat file format with data available in the intermediate database • Ensuring that data is loaded according to business rules defined in the system • Ensuring the data loaded from the source to intermediate databases conforms to the mapping sheet specified in the requirement (a minimal sketch of such a comparison appears at the end of this page) Analysis of data in SPO Retail merchants possess large amounts of historical data accumulated over several years that can be fed into the size profile engine for profile generation.<|endoftext|>Testing teams should ensure that the correct data and possible scenarios are sampled and transferred to the size engine. {{ img-description: data flow from POS and digital stores into an intermediate database and on to the SPO engine }} Why is validation critical for size optimization? Previously, retailers were unaware of the importance of size optimization. They would randomly determine an average size profile and apply it across all stores and, occasionally, across geographic locations. Retailers lacked the right approach to leverage the large amount of historical data readily available. This often resulted in early-season stock-outs in some stores and markdowns for the same merchandise in other stores. Additionally, retailers struggled to make intelligent data-driven decisions owing to the lack of automated approaches and techniques to validate data quality and integrity issues.<|endoftext|>Validation strategy Validation involves processing a large amount of data from various sources according to the format specified by the size profile engine. A validation strategy can help retailers meet customer demand by accurately projecting each store's future sales and inventory needs. It can support the business goals of the retailer, i.e., reduce markdowns, increase full-price sales and drive higher revenue. Besides validation, size profiling also includes functional and performance testing.<|endoftext|>External Document © 2018 Infosys Limited
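As a concrete illustration of pre-load validation, the sketch below compares a source flat-file extract with the rows landed in the intermediate database on record counts and key values. It is a minimal, assumed setup (a CSV keyed by item_id and an SQLite connection standing in for the intermediate database), not any specific client implementation.

```python
import csv
import sqlite3

def validate_preload(csv_path: str, conn: sqlite3.Connection, table: str) -> list[str]:
    """Compare a source flat file against the intermediate DB table it feeds."""
    issues = []
    with open(csv_path, newline="") as f:
        source_rows = list(csv.DictReader(f))
    source_keys = {row["item_id"] for row in source_rows}

    cur = conn.execute(f"SELECT item_id FROM {table}")
    loaded_keys = {str(r[0]) for r in cur.fetchall()}

    if len(source_rows) != len(loaded_keys):
        issues.append(f"row count mismatch: {len(source_rows)} in file, "
                      f"{len(loaded_keys)} loaded")
    missing = source_keys - loaded_keys
    if missing:
        issues.append(f"{len(missing)} source keys missing from {table}, "
                      f"e.g. {sorted(missing)[:3]}")
    return issues
```

Checks against the mapping sheet (column-to-column rules) would follow the same pattern, comparing transformed source values with the corresponding database columns.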
---
Page: 6 / 8
---
Non-functional testing Any size profile optimization project involves processing a large volume of structured data. Performance testing ensures that size profile engines perform optimally and that size profiles are generated within the stipulated time limits to support business needs. For best results, the test environment for performance testing should be similar to the production environment. Further, if the performance service level agreement (SLA) is not met, then the advantages of size profile optimization are lost. Performance can be monitored using various tools available in the market. The typical performance testing checkpoints are: • Ensure data movement in each stage is completed according to the SLA • Monitor system performance at maximum data load Validation of reports Once the size profiles are generated, business users can compare the profiles for different |
Continue # Infosys Whitepaper
products and allocate them based on analytical reports drawn using business intelligence report-generation mechanisms.<|endoftext|>Analytical reports are generated based on the business rule set. The testing team validates the accuracy of the report data against data from the data warehouse and verifies the usefulness of the information displayed to the business.<|endoftext|>The reports generated by the size profile engine provide the following key details: • Allocation by store – How many articles of a particular size have been allocated to a particular store • Allocation percentage at various levels such as class, style, style-color, concept, etc. • Effectiveness of size profile – The business can measure the effectiveness of size profiles in improving allocation to stores Validation of the size profile engine based on business rules Once data is fed into the size profile engine, it needs to be processed according to business rules specified within the system. Business rules are set to analyze the accuracy of size profiling. The size profile engine can be checked using these validation techniques: • In cases where the business rule should exclude stock-out data and sales data with a margin filter greater than 10% for a particular set of merchandise, the validation team verifies that the size profile engine has not considered such data for profile generation • The validation team has to ensure that relevant data is used to determine the appropriate profile for the introduction of a new article/store. Often, the data used may be incorrect owing to non-availability of relevant data for the new article/store To execute high-level validation of business rules, the following validation techniques can be used by validation teams: • Compare data on new products with data on existing/similar products to verify that a similar size profile is generated • Ensure that the correct sample of data is selected for verifying all the business rules • Monitor and verify that size profiles are generated for every size of a particular style/color of a product • Ensure that the size profile generated for a particular style/color of an article totals 100% (a minimal sketch automating these checks follows below) External Document © 2018 Infosys Limited
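The following is a minimal sketch, under assumed column names, of how a validation team might automate two of the rules above: confirming that stock-out and high-margin-filtered rows did not reach profile generation, and checking that each style-color's size profile sums to 100%.

```python
import pandas as pd

def check_profile_rules(engine_input: pd.DataFrame,
                        profiles: pd.DataFrame,
                        tol: float = 0.01) -> list[str]:
    """engine_input: rows fed to the engine, with 'stock_out' (bool) and
    'margin_filter' (fraction). profiles: engine output with 'style_color',
    'size' and 'profile_pct'. Returns a list of rule violations."""
    findings = []

    # Rule 1: stock-out rows and rows above the 10% margin filter must be excluded.
    excluded = engine_input[(engine_input["stock_out"]) |
                            (engine_input["margin_filter"] > 0.10)]
    if not excluded.empty:
        findings.append(f"{len(excluded)} rows violate the exclusion rule "
                        "but reached the engine input")

    # Rule 2: each style/color's size profile must total 100%.
    totals = profiles.groupby("style_color")["profile_pct"].sum()
    bad = totals[(totals - 100).abs() > tol]
    for style_color, total in bad.items():
        findings.append(f"size profile for {style_color} sums to "
                        f"{total:.2f}% instead of 100%")
    return findings
```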
---
Page: 7 / 8
---
Conclusion Size profile optimization helps retailers effectively stock the right sizes in stores based on various parameters, thereby enabling them to maximize profit, reduce markdowns and recover lost sales. Historical sales and inventory data is analyzed and transformed to drive critical business decisions. Here, data quality and data analysis play a vital role. By leveraging the right validation strategy with appropriate validation techniques, retailers can ensure that all possible business scenarios are considered and accurate data is chosen for size optimization decisions.<|endoftext|>References http://www.sas.com/industry/retail/sas-size-optimization http://www.oracle.com/us/industries/retail/retail-size-profile-optimize-ds-078546.pdf External Document © 2018 Infosys Limited
---
Page: 8 / 8
---
© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
***
|
# Infosys Whitepaper
Title: An Insight into Microservices Testing Strategies
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 8
---
WHITE PAPER AN INSIGHT INTO MICROSERVICES TESTING STRATEGIES Arvind Sundar, Technical Test Lead Abstract The ever-changing business needs of the industry necessitate that technologies adapt and align themselves to meet demands and, in the process of doing so, give rise to newer techniques and fundamental methods of architecture in software design. In the context of software design, the evolution of "microservices" is the result of such an activity, and its impact percolates down to the teams working on building and testing software in the newer schemes of architecture. This white paper illustrates the challenges that the testing world has to deal with and the effective strategies that can be envisaged to overcome them while testing applications designed with a microservices architecture. The paper can serve as a guide to anyone who wants an insight into microservices and would like to know more about testing methodologies that can be developed and successfully applied while working within such a landscape.<|endoftext|>
---
Page: 2 / 8
---
Microservices attempt to streamline the software architecture of an application by breaking it down into smaller units surrounding the business needs of the application. The expected benefits include systems that are more resilient, easily scalable and flexible, and that can be developed quickly and independently by small, individual teams.<|endoftext|>Formulating an effective testing strategy for such a system is a daunting task. A combination of testing methods, along with tools and frameworks that can provide support at every layer of testing, is key, as is a good knowledge of how to go about testing at each stage of the test life cycle. More often than not, the traditional methods of testing have proven to be ineffective in an agile world where changes are dynamic. The inclusion of independent micro-units that have to be thoroughly tested before their integration into the larger application only increases the complexity of testing. The risk of failure and the cost of correction after the services are integrated are immense. Hence, there is a compelling need to have a successful test strategy in place for testing applications designed with such an architecture.<|endoftext|>Introduction External Document © 2018 Infosys Limited
---
Page: 3 / 8
---
The definition of what qualifies as a microservice is quite varied and debatable, with some SOA (service-oriented architecture) purists arguing that the principles of microservices are the same as those of SOA and hence, fundamentally, the two are one and the same. However, others disagree and view microservices as a new addition to software architectural styles, although there are similarities with SOA in the concepts of design. Thus, a simpler and easier approach to understanding what the microservices architecture is about would be to understand its key features: • Self-contained and componentized • Decentralized data management • Resilient to failures • Built around a single business need • Reasonably small (micro) The points above are not essentially must-haves for a service to be called a microservice, but rather are 'good-to-haves.' The list is not a closed one either, as it can also include other features that are common among implementations of a microservices architecture. However, the points provide a perspective of what can be termed a microservice. Now that we know what defines a microservice, let us look at the challenges it poses to testers.<|endoftext|>The distributed and independent nature of microservices development poses a plethora of challenges to the testing team. Since microservices are typically developed by small teams working on multiple technologies and frameworks, and are integrated over lightweight protocols (usually REST over HTTPS, though this is not mandatory), testing teams would be inclined to use the Web API testing tools built around SOA testing. This, however, could prove to be a costly mistake, as the timely availability of all services for testing is not guaranteed, given that they are developed by different teams. Furthermore, the individual services are expected to be independent of each other although they are interconnected with one another. In such an environment, a key factor in defining a good test strategy is understanding the right amount of testing required at each point in the test life cycle.<|endoftext|>Additionally, if these services integrate with another service or API that is exposed externally, or is built to be exposed to the outside world as a service to consumers, then a simple API testing tool would prove to be ineffective. With microservices, unlike SOA, there is no need for a service-level aggregator like an ESB (enterprise service bus), and data storage is expected to be managed by the individual unit. This complicates the extraction of logs during testing and data verification, which is extremely important in ensuring there are no surprises during integration. The availability of a dedicated test environment is also not guaranteed, as the development would be agile and not integrated.<|endoftext|>Microservices architecture Challenges in testing microservices External Document © 2018 Infosys Limited
---
Page: 4 / 8
---
In order to overcome the challenges outlined above, it is imperative that the test manager or lead in charge of defining the test strategy appreciates the importance of Mike Cohn's Test Pyramid [i] and is able to draw an inference of the amount of testing required. The pictorial view emphasizes the need for a bottom-up approach to testing. It also draws attention to the number of tests and, in turn, the automation effort that needs to be factored in at each stage. The representation of the pyramid has been slightly altered for the various phases in microservice testing. These are: i. Unit testing The scope of unit testing is internal to the service and, in terms of volume, unit tests are the largest in number. Unit tests should ideally be automated, depending on the development language and the framework used within the service.<|endoftext|>ii. Contract testing Contract testing is integral to microservices testing and can be of two types, as explained below. The right method can be decided based on the end purpose that the microservice would cater to and how the interfaces with the consumers would be defined. a) Integration contract testing: Testing is carried out using a test double (mock or stub) that replicates a service that is to be consumed. The testing with the test double is documented, and this set needs to be periodically verified with the real service to ensure that there are no changes to the service exposed by the provider. b) Consumer-driven contract testing: In this case, consumers define the way in which they would consume the service via consumer contracts that can be in a mutually agreed schema and language. Here, the provider of the service is entrusted with copies of the individual contracts from all the consumers. The provider can then test the service against these contracts to ensure that there is no confusion in the expectations, in case changes are made to the service.<|endoftext|>iii. Integration testing Integration testing is possible in case there is an available test or staging environment where the individual microservices can be integrated before they are deployed. Another type of integration testing can be envisaged if there is an interface to an externally exposed service and the developer of the service provides a testing or sandbox version. The reliance on integration tests for verification is generally low in case a consumer-driven contract approach is followed.<|endoftext|>iv. End-to-end testing It is usually advised that the top layer of testing be a minimal set, since a failure is not expected at this point. Locating a point of failure from end-to-end testing of a microservices architecture can be very difficult and expensive to debug.<|endoftext|>{{ img-description: Mike Cohn's testing pyramid adapted for microservices, with unit testing at the base, followed by contract testing, integration testing and E2E UI testing at the top; the number of tests decreases while the scope of testing and execution time increase towards the top }} Approach to testing microservices and testing phases External Document © 2018 Infosys Limited
---
Page: 5 / 8
---
• For unit testing, it would be ideal to use a framework like xUnit (NUnit or JUnit). The change in data internal to the application needs to be verified, apart from checking the functional logic. For example, if reserving an item provides a reservation ID on success in the response to a REST call, the same needs to be verified within the service for persistence during unit testing.<|endoftext|>• The next phase of testing is contract testing. In case there are several dissimilar consumers of the |
Continue # Infosys Whitepaper
service within the application, it is recommended to use a tool that can enable consumer-driven contract testing. Open-source tools like Pact, Pacto, or Janus can be used. This is discussed in further detail in the last example; in the context of this example, we assume that there is only a single consumer of the service. For such a condition, a test stub or a mock can be used for testing by way of integration contract testing. {{ img-description: scenario 1, an application with 'select an item' and 'reserve an item' services communicating over REST over HTTPS, exchanging item ID, date and reservation ID, with the unit, integration and integration contract testing scopes marked against a mocked service }} Data being passed between the services needs to be verified and validated using tools like SoapUI; for example, an item number passed from the service that selects an item to the one that reserves it. • E2E tests should ensure that the dependency between microservices is tested in at least one flow, though extensive testing is not necessary. For example, an item being purchased should trigger both the 'select' and 'reserve' microservices.<|endoftext|>In order to get a clear understanding of how testing can be carried out in different scenarios, let us look at a few examples that elucidate the context of testing and provide deeper insight into the test strategies used in these cases.<|endoftext|>• Scenario 1: Testing between microservices internal to an application or residing within the same application This is the most commonly encountered scenario, where small teams work on redesigning an application by breaking it down from a monolithic architecture into microservices. In this example, we consider an e-commerce application with two services, a) selecting an item and b) reserving an item, modelled as individual services. We also assume there is close interaction between these two services and that the parameters are defined using agreed schemas and standards. A minimal sketch of an integration contract test for this scenario follows below.<|endoftext|>Testing scenarios and test strategy External Document © 2018 Infosys Limited
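Below is a minimal sketch of what such an integration contract test could look like in Python. The 'reserve' service is replaced by a hand-rolled test double; the endpoint path, the payload fields (item_id, date, reservation_id) and the use of the standard-library HTTP server are assumptions made for illustration, not the paper's prescribed tooling.

```python
import json
import threading
import unittest
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib import request

class ReserveServiceDouble(BaseHTTPRequestHandler):
    """Test double standing in for the real 'reserve an item' service."""
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        # Contract: a reservation request carries an item_id and a date,
        # and a successful response returns a reservation_id.
        assert {"item_id", "date"} <= body.keys()
        payload = json.dumps({"reservation_id": "R-1001"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep test output quiet
        pass

class IntegrationContractTest(unittest.TestCase):
    def test_reserve_contract(self):
        server = HTTPServer(("localhost", 0), ReserveServiceDouble)
        threading.Thread(target=server.serve_forever, daemon=True).start()
        url = f"http://localhost:{server.server_port}/reservations"

        req = request.Request(
            url,
            data=json.dumps({"item_id": "123-143", "date": "2018-06-01"}).encode(),
            headers={"Content-Type": "application/json"})
        with request.urlopen(req) as resp:
            reply = json.loads(resp.read())

        # The consumer's expectation of the provider: a reservation_id comes back.
        self.assertIn("reservation_id", reply)
        server.shutdown()

if __name__ == "__main__":
    unittest.main()
```

As the paper notes, a test like this must still be re-verified periodically against the real provider so the double does not drift from the actual service.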
---
Page: 6 / 8
---
• Scenario 2: Testing between internal microservices and a third-party service Here, we look at a scenario where a service within an application consumes or interacts with an external API. In this example, we consider a retail application where paying for an item is modelled as a microservice that interacts with the PayPal API exposed for authenticating the purchase. Let us look at the testing strategy in each phase of the test cycle in this case:<|endoftext|>• Unit tests should ensure that the service model caters to the requirements defined for interacting with the external service, while also ensuring that internal logic is maintained. Since there is an external dependency, requirements must be clearly defined and documenting them remains key. A TDD approach is suggested where possible, and any of the popular frameworks discussed in the previous example can be chosen for this.<|endoftext|>• Contract testing can be used in this case to test the expectations of the consumer microservice, that is, the application's internal service, decoupling it from the dependency on the external web service being available. In this context, test doubles created using tools like Mockito or Mountebank can stand in for the PayPal API's implementation and be tested against. This is essentially integration contract testing and again needs to be verified periodically against a live instance of the external service, to ensure that there is no change to the external service that has been published and consumed. A minimal sketch of such a test double appears below.<|endoftext|>• Integration tests can be executed if the third-party application developer provides a sandbox (e.g. PayPal's Sandbox API [ii]) for testing. Live testing for integration is not recommended. If a sandbox is not available, integration contract testing needs to be exercised thoroughly to verify the integration.<|endoftext|>• E2E tests should ensure that there are no failures in other workflows that integrate with the internal service. Also, a few monitoring tests can be set up to ensure that there are no surprises. In this example, selecting and purchasing an item (including payment) can be considered an E2E test that runs at regular, pre-defined intervals to spot any changes or breaks.<|endoftext|>{{ img-description: scenario 2, the application's 'pay for an item' service calling the external PayPal API over REST over HTTPS through a firewall, with a test double/virtual provider and the PayPal sandbox API shown against the unit, contract and integration testing scopes }} External Document © 2018 Infosys Limited
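As an illustration of decoupling the internal 'pay for an item' service from the external provider's availability, the sketch below stubs the outbound HTTP call with Python's unittest.mock. The pay_for_item function, the endpoint URL and the response fields are hypothetical placeholders; a real contract would mirror the payment provider's published API and still be re-verified periodically against its sandbox.

```python
import json
import unittest
import urllib.request
from unittest import mock

def pay_for_item(item_id: str, amount: float) -> dict:
    """Hypothetical internal 'pay for an item' logic calling an external payment API."""
    req = urllib.request.Request(
        "https://api.example-payments.test/v1/authorize",  # placeholder, not PayPal's real endpoint
        data=json.dumps({"item_id": item_id, "amount": amount}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

class PaymentContractTest(unittest.TestCase):
    @mock.patch("urllib.request.urlopen")
    def test_authorization_contract(self, mocked_urlopen):
        # Test double standing in for the external provider's success response.
        fake_resp = mock.MagicMock()
        fake_resp.read.return_value = b'{"status": "AUTHORIZED", "auth_id": "A-42"}'
        fake_resp.__enter__.return_value = fake_resp
        mocked_urlopen.return_value = fake_resp

        result = pay_for_item("123-143", 49.99)

        # Consumer-side expectations of the response shape (the 'contract').
        self.assertEqual(result["status"], "AUTHORIZED")
        self.assertIn("auth_id", result)

if __name__ == "__main__":
    unittest.main()
```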
---
Page: 7 / 8
---
• Scenario 3: Testing for a microservice that is to be exposed to the public domain Consider an e-commerce application where retailers can check the availability of an item by invoking a Web API.<|endoftext|>• Unit tests should cover the various functions that the service defines. A TDD approach can help here to ensure that the requirements are clearly validated during unit testing. Unit tests should also ensure that data persistence within the service is taken care of and that data is passed on to the other services it interacts with.<|endoftext|>• Contract testing – In this example, consumers need to be set up using tools that help define contracts, and the expectations from a consumer's perspective need to be understood. The consumer should be well-defined and in line with the expectations of the live situation, and contracts should be collated and agreed upon. Once the consumer contracts are validated, a consumer-driven contract approach to testing can be followed. It is assumed that in this scenario there would be multiple consumers and hence individual consumer contracts for each of them. For example, in the above context, a local retailer and an international retailer can have different methods and parameters of invocation; both need to be tested by setting up contracts accordingly. It is also assumed that consumers subscribe to the contract method of notifying the provider, via consumer contracts, of the way they would consume the service and the expectations they have from it. A minimal sketch of validating a provider against such contracts follows below.<|endoftext|>• E2E tests – A minimal set of E2E tests would be expected in this case, since interactions with external third parties are key here. {{ img-description: scenario 3, the application exposing a Web API over REST over HTTPS to two virtual consumers, each with its own customer contract, with the unit testing and consumer-driven contract testing scopes marked }} External Document © 2018 Infosys Limited
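The sketch below shows, in plain Python, the essence of consumer-driven contract testing: each consumer's expectations are captured as data, and the provider's response is checked against every contract before a change is released. It is a deliberately library-free illustration; in practice a tool such as Pact would manage contract exchange and verification, and the field names used here (item_id, available, quantity) are assumptions.

```python
# Each consumer hands the provider a contract describing the response fields
# it relies on; the provider verifies every contract before releasing a change.

LOCAL_RETAILER_CONTRACT = {
    "endpoint": "/availability",
    "required_fields": {"item_id": str, "available": bool},
}
INTERNATIONAL_RETAILER_CONTRACT = {
    "endpoint": "/availability",
    "required_fields": {"item_id": str, "available": bool, "quantity": int},
}

def provider_response(item_id: str) -> dict:
    """Stand-in for the real availability service's response."""
    return {"item_id": item_id, "available": True, "quantity": 12}

def verify_contract(contract: dict, response: dict) -> list[str]:
    """Return the violations of one consumer's contract, if any."""
    violations = []
    for field, expected_type in contract["required_fields"].items():
        if field not in response:
            violations.append(f"missing field '{field}'")
        elif not isinstance(response[field], expected_type):
            violations.append(f"field '{field}' is not {expected_type.__name__}")
    return violations

if __name__ == "__main__":
    response = provider_response("123-143")
    for name, contract in [("local retailer", LOCAL_RETAILER_CONTRACT),
                           ("international retailer", INTERNATIONAL_RETAILER_CONTRACT)]:
        problems = verify_contract(contract, response)
        print(f"{name}: {'OK' if not problems else problems}")
```

The design point the paper makes carries through: because the provider holds every consumer's contract, it can detect which consumer a proposed change would break before the change ships.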
---
Page: 8 / 8
---
Improvements in software architecture have led to fundamental changes in the way applications are designed and tested. Teams working on testing applications that are developed in the microservices architecture need to educate themselves on the behavior of such services, as well as stay informed of the latest tools and strategies that can help deal with the challenges they could potentially encounter. Furthermore, there should be a clear consensus on the test strategy and approach to testing. A consumer-driven contract approach is suggested, as it is a better way to mitigate risk when services are exposed to an assorted and disparate set of consumers, and it further helps the provider deal with changes without impacting the consumer. Ensuring that the required amount of testing is focused at the correct time, with the most suitable tools, would ensure that organizations are able to deal with testing in such an environment and meet the demands of the customer.<|endoftext|>References: [i] https://www.mountaingoatsoftware.com/blog/the-forgotten-layer-of-the-test-automation-pyramid [ii] https://www.sandbox.paypal.com In conclusion © 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
***
|
# Infosys Whitepaper
Title: Modernizing enterprise systems in healthcare
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 8
---
WHITE PAPER MODERNIZING ENTERPRISE SYSTEMS IN HEALTHCARE An end-to-end testing approach for greater predictability and quality Abstract As digital technologies, smart wearables and remote monitoring capabilities penetrate healthcare, traditional healthcare companies are unable to keep up with end-user expectations. Under pressure to adopt rapid transformation, these organizations are looking for robust and end-to-end testing procedures. This paper explains various end-to-end testing approaches within the four main modernization techniques for healthcare companies. The analysis presented here acts as a guideline for healthcare leaders to make strategic and informed decisions on how to modernize their systems based on the needs of their end-users.
---
Page: 2 / 8
---
Introduction Sustainability in healthcare is a looming challenge, particularly as the fusion of disruptive innovations such as digitization, Internet-of-Things and smart wearables enables remote and real-time health tracking, diagnosis and management. To succeed in such an environment, healthcare organizations rely heavily on IT. Thus, using the latest end-to-end testing approaches becomes essential to: • Ensure that all applications operate as a single entity with multi-module interactions • Maintain performance/non-functional scenarios within the desired limit • Identify bottlenecks and dependencies ahead of time so that the business can take appropriate actions Testing challenges in healthcare modernization Business transformation in healthcare is complex because of the challenges in maintaining integrity between different types of customer needs and health-related plans. Modernizing healthcare software applications mandates enabling multi-directional flow of information across multiple systems, which can complicate the entire healthcare workflow application. Further, failures or errors in systems outside the enterprise environment can adversely affect the performance of applications with which they are integrated. To address such challenges, it is important to determine the right method and types of end-to-end testing. This will optimize application performance by testing it across all layers, from the front-end to the back-end, along with its interfaces and endpoints.<|endoftext|>Typically, most healthcare organizations use multi-tier structures with multiple end-users, making end-to-end testing very complex. Launching a new product in such a multi-directional business scenario requires extensive user testing. Thus, to enable end-to-end (E2E) testing, health insurance companies must first understand what customers expect from their healthcare providers and identify how they can meet these expectations in shorter timelines.<|endoftext|>Fig 1: A typical multi-tier healthcare business {{ img-description: pictorial diagram of a multi-tier healthcare business, with customer profile, member and provider web pages, find-doctor, hospital, claim and payment applications connected via XML, SOA and web-service calls through firewalls and web servers to cloud, Oracle and big data back-ends, plus API calls, database updates and service calls to consumers and the business team, tested with SoapUI }} External Document © 2018 Infosys Limited
---
Page: 3 / 8
---
Infosys solution End-to-end testing approaches for different modernization techniques Infosys leverages four modernization techniques to help healthcare organizations enable end-to-end testing. These techniques are: 1. Re-engineering technique Use case: Best-suited in cases where companies need to digitize healthcare product marketing to different constituents of a state through online retail. This modernization technique is useful when venturing into new markets or when retiring obsolete technologies due to high maintenance costs. It leverages the following end-to-end testing approaches: • Simulation testing: User-centric interaction testing is performed based on different behavior events like usability testing, cross-browser compatibility and mobile testing • Compliance testing: This testing is needed for security protocols and financial boundary testing as per mandates defined by state and central governments • Blend testing: This combines functional and structural testing into a single approach and is essential for any healthcare digitization transformation strategy • Universal automation: This is a new approach that automates the acceptance of changed features in applications through browser recognition, payment gateways, etc.<|endoftext|>• Risk-based testing: This focuses on testing a few components or critical defects that are identified as high-risk functions, have significant complexity in business operations and can impact key features • Continuous automation testing: Based on continuous integration and continuous delivery, this testing is for real as well as virtual features that are proposed for projects in minor transition • Recognition accuracy testing: This tests non-textual data like images, pictorial figures, feelings, fingerprinting, etc., using a virtual augmented framework Benefits – This testing approach provides higher returns on investment, by nearly 80-90%, in short spans of 5-7 iterations. It also improves coordination, accuracy and reusability of data in ensuing runs, thus providing a robust and reusable testing option through cutting-edge technology.<|endoftext|>Risks – Diversified technology exposure is critical to support such big bang transformation, and limited technical knowledge may result in uncovering fewer quality issues. Further, rebuilding the enterprise framework can be costly.<|endoftext|>2. Replacing or Retiring technique Use case: Best-suited when one needs to move contract and legal documentation from healthcare insurers and hospitals to a separate online portal.<|endoftext|>This modernization technique is used when there is a need for more control and accuracy. Migratory functions are clustered as units and can be renovated easily without disturbing other applications.
Here, end-to-end testing focuses on components that undergo gradual replacement or are retired, as described below: • Plug-and-play testing: This is usually executed when testing teams employ different types of tools for automation scripting or when different types of technologies are involved in testing • Web service-based testing: This is a mechanism or medium of communication by which two or more applications exchange data, irrespective of the underlying architecture and technology • Neutrality testing: This is typically used when the existing platform is replaced with a new one without altering the final business outcomes or end-user experiences (a minimal sketch of such a check appears after this section) • Parallel testing: This analyzes several applications or sub-elements of one application simultaneously and in the same instance using agile or waterfall models in order to reduce test time • Assembly testing: This reveals precise interactions among modules as per user requirements. It is used when functions are grouped into a logical entity and alliances are needed • Usability testing: Usability testing covers learnability, memorability, adeptness, and customer satisfaction indices to determine how easy the application is for end-users to use Benefits – This modernization approach provides more structure and control to end-to-end testing, with 15-20% effort reduction. It ensures effective application testing with the option of reverting to the native state on demand when needed. Further, it requires only 5-7% effort for automation changes during build. Risks – Project overrun can occur without proper supervision. Additionally, it requires repeated testing of the same regression suite even for small deployments. External Document © 2018 Infosys Limited
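To illustrate the web service-based, neutrality and parallel testing ideas above, the sketch below sends the same request to a legacy endpoint and its replacement and compares the business-level outcome fields. The endpoints and the JSON fields compared are assumptions for illustration only; they would be replaced with the organization's real services and contract fields.

```python
import json
import urllib.request

def fetch(url: str, payload: dict) -> dict:
    """POST the same request to a service endpoint and return its JSON body."""
    req = urllib.request.Request(
        url, data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def neutrality_check(legacy_url: str, new_url: str, payload: dict,
                     fields: list[str]) -> list[str]:
    """Compare business outcomes of the legacy and replacement services."""
    legacy, replacement = fetch(legacy_url, payload), fetch(new_url, payload)
    return [f"{f}: legacy={legacy.get(f)!r} new={replacement.get(f)!r}"
            for f in fields if legacy.get(f) != replacement.get(f)]

if __name__ == "__main__":
    # Hypothetical endpoints and claim payload; substitute real services to run.
    diffs = neutrality_check(
        "https://legacy.example.test/claims/validate",
        "https://new.example.test/claims/validate",
        {"member_id": "M-001", "claim_amount": 250.0},
        fields=["status", "approved_amount"],
    )
    print("outcomes match" if not diffs else diffs)
```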
---
Page: 4 / 8
---
3. Re-fronting technique Use case: Best-suited when adding an encryption protocol is required for sensitive claim-related information passing through a web service.<|endoftext|>This approach is used when end-users want to use the same data efficiently and quickly without investing in expensive infrastructure set-up. It covers virtualization, non-functional testing and regression testing, as described below: • Virtualization testing: This simulates multiple users to check the performance of the new technology while it interacts with existing applications • Non-functional testing: Certain features like technology compatibility, platform integrity, exception handling, help analysis, impact exploration, and application availability fall under the purview of non-functional testing • Regression testing: The regression re-run approach is used when there is a slight change in functionality but the overall system behavior has not changed Benefits – This approach simplifies localized defect resolution. Here, end-to-end testing is more stable as changes are limited and specific. Further, the cost of running E2E test cases is lower as the regression suite can be easily automated.<|endoftext|>Risks – Frequent patch changes can lower productivity and increase maintenance cost. Further, repeated testing of the same emergency bug fix can reduce long-term RoI. 4. Re-platforming technique Use case: Best-suited when upgrading billing/payments databases to recent |
Continue # Infosys Whitepaper
versions is needed due to license renewals.<|endoftext|>Re-platforming of application modernization is primarily done in areas where businesses aim to minimize maintenance costs with cost-effective technology. This modernization technique uses migration, acceptance, intrusive, and volume testing approaches, as described below: • Migration testing: This is used when ensuring data integrity is the most important factor during technology upgrades (a minimal data-integrity check is sketched after this section) • Acceptance testing: Acceptance testing ensures that applications being moved to a new platform have the same recognition intensities as before • Intrusive testing: Also known as negative testing, this approach determines the effect of introducing unexpected variables into the system or overall application • Volume testing: This evaluates the stability of applications by ingesting a huge number of records Benefits – This approach simplifies end-to-end testing as predicted business outcomes are achieved. It lowers testing cost, thus reducing total cost of operations and time-to-market, and does not require additional infrastructure or specialized licensing tools. Further, it increases testing penetration by reusing scenarios, data and execution strategies.<|endoftext|>Risks – Re-platforming may warrant additional testing of critical business flows to ensure functional defects are caught early to avoid cost impact. Also, conducting testing on the new platform requires proper training.<|endoftext|>External Document © 2018 Infosys Limited
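As a small illustration of migration testing, the sketch below compares row counts and per-row checksums between a source and a target table. SQLite stands in for the actual source and target databases, and the table and key names are assumptions; it also assumes the two tables share the same column order and types.

```python
import hashlib
import sqlite3

def row_checksums(conn: sqlite3.Connection, table: str, key: str) -> dict:
    """Map each primary key to a checksum of its full row (columns in table order)."""
    cur = conn.execute(f"SELECT * FROM {table} ORDER BY {key}")
    cols = [d[0] for d in cur.description]
    key_idx = cols.index(key)
    return {row[key_idx]: hashlib.sha256(repr(row).encode()).hexdigest()
            for row in cur.fetchall()}

def verify_migration(source: sqlite3.Connection, target: sqlite3.Connection,
                     table: str, key: str) -> list[str]:
    """Return data-integrity issues found after migrating 'table' to the target DB."""
    src, tgt = row_checksums(source, table, key), row_checksums(target, table, key)
    issues = []
    if len(src) != len(tgt):
        issues.append(f"row count mismatch: source={len(src)}, target={len(tgt)}")
    issues += [f"row {k} missing in target" for k in src.keys() - tgt.keys()]
    issues += [f"row {k} differs after migration" for k in src.keys() & tgt.keys()
               if src[k] != tgt[k]]
    return issues
```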
---
Page: 5 / 8
---
Comparative analysis of various testing approaches The following table depicts a matrix of end-to-end test approaches along with modernization techniques in healthcare. The matrix illustrates which E2E testing method is best-suited to the four different modernization techniques. While ‘yes’ and ‘no’ represent absolute outcomes, it is important to note that ‘maybe’ results depend on how critical the business needs are and whether the approach is actually cost-effective when considering the overall business operations.<|endoftext|>

Modernization techniques for end-to-end testing approaches

| Testing approach | Re-engineer | Remediate or replace | Re-front | Re-platform |
|---|---|---|---|---|
| Simulation testing | Yes | Yes | No | No |
| Compliance testing | Yes | Yes | No | No |
| Blend testing | Yes | Yes | No | No |
| Universal testing | Yes | Yes | Yes | No |
| Risk-based testing | Yes | Yes | No | No |
| Plug-and-play testing | No | Yes | Yes | Yes |
| Web service-based testing | Yes | Yes | Yes | Yes |
| Agile testing | No | Yes | Maybe | No |
| Parallel testing | No | Yes | Yes | No |
| Virtualization testing | No | No | Yes | Yes |
| Usability testing | Yes | Yes | No | No |
| Recognition testing | Yes | No | Yes | Maybe |
| Regression testing | No | No | Yes | Yes |
| Migration testing | Yes | No | Maybe | Yes |
| Assembly testing | Yes | Yes | Yes | No |
| Volume testing | Yes | No | No | Yes |
| Intrusive testing | Yes | No | No | No |
| Acceptance testing | Yes | Maybe | No | Yes |

Table 1: Comparison of different end-to-end test approaches and modernization techniques External Document © 2018 Infosys Limited
---
Page: 6 / 8
---
Case study
Business need – A healthcare company with three verticals for policyholders, doctors, and claims wanted to remodel its business portfolio to adopt new technologies and meet customer demand. While the policy procuring system and the consumer premium collection system were digitized, the company decided to re-engineer the claims processing system from DB2 to a big data-based system. As the claims vertical interacted with doctor and hospital web portals, the company also wanted to gradually transform the portals component-wise in order to give doctors sufficient time to acquaint themselves with the new digitized system.<|endoftext|>Solution approach – To support the company's hybrid transformation project, we used an end-to-end testing strategy that leveraged complementary test approaches from different modernization techniques across the three verticals, as described below:
• For policyholders: The customer procurement system was treated with a combination of re-front modernization and big bang transformation. Blend testing was used with continuous automation, followed by web service testing and assembly testing
• For providers (doctors/hospitals): Here, we used a combination of assembly, regression re-run and agile testing to ensure gradual changes, since the agile testing methodology is best-suited for scenarios where constituents are deployed slowly over a period of time
• For claims: Claims is a crucial vertical. Thus, skeleton scripts, virtualization and migration testing methods were used for their stability and lower risk when migrating from DB2 to big data
As each vertical of the company has different business needs, different types of modernization were needed to suit various end-users.<|endoftext|>Fig 2: End-to-end testing approach for the three verticals External Document © 2018 Infosys Limited
---
Page: 7 / 8
---
The road ahead In the future, more consumers will embrace digitization and the uber-connectedness of wearables and mobile devices that can track the user's health through in-built monitoring systems. Thus, as a higher number of service operators orchestrate multiple domains, we can expect to see greater challenges ahead for end-to-end testing. This makes it imperative to leverage DevOps and analytics-based testing capabilities along with modernization approaches.<|endoftext|>Conclusion Disruptive technologies are creating avenues for healthcare providers to deliver virtual treatment, advice and services. However, this requires some degree of IT modernization, for which end-to-end testing is crucial. There are various approaches that can be used to enable re-engineering, replacing, re-fronting, and re-platforming modernization techniques. Each testing approach has its benefits and risks and must be chosen based on end-user expectations. Thus, it is important for business leaders to be aware of these in order to make the right decision for their IT modernization journey. The right approach can offer significant cost advantages, accelerate time-to-market and ensure a seamless end-user experience.
Authors Dipayan Bhattacharya Project Manager Infosys Validation Solutions Amit Kumar Nanda Group Project Manager Infosys Validation Solutions External Document © 2018 Infosys Limited
---
Page: 8 / 8
---
Continue # Infosys Whitepaper
© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
***
|
# Infosys POV
Title: Next-gen Process Mining powers Oil & Gas transformation
Author: Infosys Consulting
Format: PDF 1.7
---
Page: 1 / 13
---
An Infosys Consulting Perspective By Sachin Padhye, Naveen Kamakoti, Shruti Jayaraman and Sohini De Consulting@Infosys.com | InfosysConsultingInsights.com Next-gen Process Mining powers Oil & Gas transformation
---
Page: 2 / 13
---
Oil & Gas transformation | © 2023 Infosys Consulting 2 Process Mining technology: A key enabler to transform Oil & Gas Technology continues to be a reliable and indispensable enabler for transforming the operations of Oil and Gas companies. As a growing trend, the broader intent of incorporating technology is to use operational data to support analytics and fact-based decision-making. However, the increasing complexity of core and supplementary processes, coupled with the limited agility of legacy and monolithic IT systems, adds a constant challenge to continuous process improvement.<|endoftext|>The agility of business processes and operations depends on the ability to capture real-time data and perform large-scale analyses to generate actionable insights on demand and help steer and nudge key metrics and key performance indicators (KPIs). One such technology framework with ever-growing adoption is Process Mining, especially given its evolution from limited, process-discovery-based applications to centralized platforms for integrated process automation.<|endoftext|>
---
Page: 3 / 13
---
3 Process Mining uses detailed data from business processes Process Mining is the practice of using data from various sources to analyze, baseline and improve business processes. The concept of Process Mining is built on the pillars of analysis techniques using artificial intelligence (AI) and machine learning (ML). It is an approach to analyze, optimize, and improve complex operational processes. Powered by event data logs and data science tools, Process Mining helps identify process variations and bottlenecks and gathers quantitative insights into process flows. It also helps address performance and compliance-related issues in processes. The following high-level steps are involved in a typical Process Mining lifecycle journey:<|endoftext|>

| Step | Description | Tools used |
|---|---|---|
| 1. Data collection | Collect data from various sources, such as event logs, databases, and operational data stores | Data extraction tools, such as ETL tools, log parsers, or database connectors |
| 2. Data pre-processing | Clean, filter, and normalize data to ensure consistency and accuracy | Data cleaning and preparation tools, such as Python or R scripts |
| 3. Process discovery | Create a process model based on the collected data | Process Mining tools, such as Disco, ProM, or Celonis |
| 4. Conformance checking | Compare the process model with the collected data to identify deviations, errors, or inefficiencies in the process | Conformance checking tools, such as Disco, ProM, or Celonis |
| 5. Process enhancement | Optimize the process model to improve efficiency, reduce costs, and enhance quality | Process simulation and optimization tools, such as Arena, Simul8, or ProModel |
| 6. Process monitoring | Continuously track and analyze process data to identify potential issues, bottlenecks, or opportunities for improvement | Process monitoring tools, such as Celonis, Splunk, ELK, or Graylog |
| 7. Process visualization | Create graphical representations of the process model and process data to help stakeholders understand the process and identify areas for improvement | Data visualization tools, such as Celonis, Tableau, Power BI, or QlikView |

Oil & Gas transformation | © 2023 Infosys Consulting
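For readers who want to try the first four lifecycle steps hands-on, the sketch below uses the open-source pm4py library as a stand-in for the commercial tools listed in the table. The CSV file name and column names are assumptions, and the simplified pm4py function names shown may differ slightly between library versions.

```python
# Hedged sketch of lifecycle steps 1-4 using pm4py (open-source); file and column
# names are hypothetical, and the simplified API shown assumes a recent pm4py 2.x.
import pandas as pd
import pm4py

# 1. Data collection: an exported event table with one row per activity execution.
df = pd.read_csv("goods_receipt_events.csv")  # hypothetical extract

# 2. Data pre-processing: normalize timestamps and map columns to pm4py's expected keys.
df["timestamp"] = pd.to_datetime(df["timestamp"])
log = pm4py.format_dataframe(df, case_id="po_number",
                             activity_key="activity", timestamp_key="timestamp")

# 3. Process discovery: derive a Petri net model from the observed behaviour.
net, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log)

# 4. Conformance checking: replay the log on the model to quantify deviations.
fitness = pm4py.fitness_token_based_replay(log, net, initial_marking, final_marking)
print("token-replay fitness summary:", fitness)
```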
---
Page: 4 / 13
---
4 Evolution of Process Mining From being a niche technology used in research-oriented projects to a completely integrated cross-functional collaboration platform, Process Mining has evolved and matured for broad adoption. Here is a summary of this evolution:<|endoftext|>

| | First Generation: Process discovery (1988 – 2004) | Second Generation: Process conformance (2004 – 2011) | Third Generation: User-centered process (2011 – 2016) | Fourth Generation: Integrated process automation (2016 – Present) |
|---|---|---|---|---|
| Summary | Focus on discovery of process models from event logs; use of process discovery algorithms and process tree visualizations; limited support for large and complex processes; challenges in handling noise, concurrency, and infrequent behavior | Introduction of conformance checking and process enhancements; integration of multiple perspectives and data sources; focus on quality control, compliance, and audit trails; use of data mining and machine learning techniques | Shift towards more user-centered and interactive approaches; focus on business process management and improvement; integration of social, organizational, and environmental factors; increased emphasis on big data, cloud computing, and distributed systems | Expansion of process mining beyond event logs to include other types of data and processes; integration of process mining with other technologies such as AI, IoT, and blockchain; increased focus on automation, robotics, and digital transformation; development of new techniques such as predictive process monitoring and prescriptive analytics |
| People | Primarily academic researchers and process experts; minimal involvement of business stakeholders and end-users | Involvement of business stakeholders and end-users in process mining projects; increasing emphasis on collaboration and communication | Involvement of a wider range of stakeholders including end-users, IT staff, and top management; greater emphasis on user needs and user experience | Involvement of a wide range of stakeholders including business users, IT staff, data scientists, and process experts; greater emphasis on cross-functional collaboration and co-creation |
| Process | Emphasis on process modeling and analysis; limited focus on process improvement and optimization | Shift towards process improvement and optimization; greater attention to business objectives and value creation | Increased integration of process mining with business strategy and management practices; greater emphasis on continuous improvement and innovation | Integration of process mining with digital transformation and innovation initiatives |
| Systems | Basic computing tools and algorithms; primarily desktop-based software; use of event logs and basic data mining techniques | Development of more sophisticated algorithms and methods; increased use of enterprise-level software systems; integration of multiple data sources and formats | Greater use of cloud computing, big data, and advanced analytics; integration with other digital technologies such as social media and mobile devices; greater use of process automation and robotic process automation (RPA) | Integration of AI, IoT, blockchain, and advanced analytics |

Oil & Gas transformation | © 2023 Infosys Consulting
---
Page: 5 / 13
---
5 Process Mining impacts multiple areas in Oil & Gas The Oil and Gas industry is complex and dynamic, with significant data generated across all areas. For Oil and Gas companies, Process Mining can be particularly important because of the complex and highly regulated nature of their operations. Here are some specific ways in which Process Mining can benefit Oil and Gas companies:<|endoftext|>

| Value levers | Impact of Process Mining |
|---|---|
| Operational efficiency | Process Mining can help identify inefficiencies in processes, such as bottlenecks or unnecessary steps, and suggest ways to streamline them. This can lead to cost savings and better use of resources. |
| Regulatory compliance | Oil and Gas companies are subject to numerous regulations and standards, such as those related to environmental protection and worker safety. Process Mining can help ensure that these regulations are being followed and identify areas where improvements are needed. |
| Operational safety | Safety is a top priority for Oil and Gas companies, and Process Mining can help identify potential hazards and risks. By analyzing data from sensors, equipment, and other sources, companies can identify patterns that may indicate an increased risk of accidents or equipment failure. |
| Optimized maintenance | Process Mining can help companies optimize maintenance schedules by analyzing data from equipment and other sources to identify when maintenance is needed. This can help prevent unplanned downtime and reduce maintenance costs. |
| Customer satisfaction | Oil and Gas companies may interact with customers in various ways, such as through fuel delivery or service stations. Process Mining can help companies understand how customers interact with their services and identify ways to improve the customer experience. |

Oil & Gas transformation | © 2023 Infosys Consulting
---
Page: 6 / 13
---
6 Pre-requisites for Process Mining Certain conditions need to be met before leveraging Process Mining effectively. These can broadly be grouped under People, Process and Technology.<|endoftext|>Oil & Gas transformation | © 2023 Infosys Consulting
People
• Business analysts – with domain knowledge of the processes to be analyzed
• IT professionals – with expertise in data management and system integration
• Data scientists – with expertise in data analysis and statistical modelling
• Process owners – or subject matter experts who can provide feedback on the accuracy and relevance of Process Mining results
• Change management experts – who can help manage the organizational changes that may result from Process Mining initiatives
• Leadership support |
Continue # Infosys POV
Process
• Processes – well-defined processes with documented workflows and procedures
• Data capture – access to event logs or other data sources that capture process data
• Tech infrastructure – availability of the required hardware or software infrastructure to support Process Mining activities
• Strategic objectives – alignment with the organization's strategic objectives and goals
• Compliance – compliance with legal and regulatory requirements, such as data privacy laws
• Agility – the organization's ability and culture to adopt new frameworks for continuous improvement
---
Page: 7 / 13
---
7 Oil & Gas transformation | © 2023 Infosys Consulting
System
• Access to data – access to digitized processes and/or processes with event/case data and relevant data sources, including event logs, databases, and other data repositories
• Data tools – data cleaning, transformation, and normalization tools to prepare data for analysis
• Process data – Process Mining software to extract and analyze process data
• Process model – process modelling software to create process models
• Visualization – visualization software to create dashboards and reports
• Investment – continued investment in technology platforms and relevant features
---
Page: 8 / 13
---
8 Case studies The following case studies cite instances where Process Mining helped a US-based Oil and Gas major realize efficiencies and optimize resources using Celonis.<|endoftext|>Oil & Gas transformation | © 2023 Infosys Consulting
Process Mining in upstream logistics
Approach
• Key AS-IS process flows for the in-scope processes were modeled in ARIS to begin with. This provided an understanding of the current pain points and areas of improvement.
• This process model was leveraged to identify the data available in applications across each of the steps.<|endoftext|>
• The journey of a case was traced through all its states, and the data captured in the previous step was mapped against it to ensure data consistency.<|endoftext|>
• This data was imported into the Celonis Execution Management system to create a data model.
• Based on this data model, multiple process analysis dashboards and components were created to track various metrics and KPIs across key dimensions such as time, vendors, and locations.
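Before data lands in an execution management system, it usually has to be shaped into a case/activity/timestamp event table; the hedged sketch below shows that data-model step and a simple cycle-time KPI by vendor, the kind of metric the Celonis dashboards later track. The file name and column names are hypothetical, and pandas stands in for the actual extraction pipeline used on the engagement.

```python
# Hedged sketch of the data-model step: shape raw extracts into an event table
# (case_id, activity, event_time, vendor) and compute average cycle time per vendor.
# File and column names are illustrative assumptions.
import pandas as pd

events = pd.read_csv("upstream_logistics_events.csv",
                     parse_dates=["event_time"])  # hypothetical extract

# Cycle time per case: first to last recorded state change.
cycle_time = (events.groupby("case_id")["event_time"]
              .agg(start="min", end="max")
              .assign(days=lambda d: (d["end"] - d["start"]).dt.total_seconds() / 86400)
              .reset_index())

# Average cycle time per vendor, worst first - a candidate dashboard KPI.
per_vendor = (events[["case_id", "vendor"]].drop_duplicates()
              .merge(cycle_time, on="case_id")
              .groupby("vendor")["days"].mean()
              .sort_values(ascending=False))

print(per_vendor.head(10))
```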
---
Page: 9 / 13
---
9 Oil & Gas transformation | © 2023 Infosys Consulting

| Business/process area | Common challenges | Potential Process Mining gains | Value levers impacted |
|---|---|---|---|
| Standard enterprise processes (order-to-cash, procure-to-pay) | Manual interventions; form corrections; rate changes and data mismatches | Reduction of TAT; improved no-touch processing; automation | Operational efficiency; regulatory compliance; customer satisfaction |
| Supply chain management | High complexity of supply chain; limited visibility among stakeholders; best practices not well-defined | Reduction of process lead time; cost reduction by removing bottlenecks; full transparency of process | Operational efficiency; optimized maintenance; customer satisfaction |
| Vessel schedule optimization | Multiple rigs covered by the same vessel; route planning done at the last minute | Effective route planning to reduce fuel costs and optimize time | Operational efficiency; operational safety; optimized maintenance |
| Helicopter schedule optimization | High cost due to over-utilization; unnoticed maintenance risks | Incorporate best practices for utilization; monitoring of risks | Operational efficiency; regulatory compliance; operational safety; optimized maintenance |
| Warehouse management | Inefficient warehouse layout; lack of process automation; warehouse inventory inaccuracy; warehouse utilization inaccuracy | Root cause analysis for layout; forecasting data for inventory utilization and avoiding stock-outs; enhanced customer management | Operational efficiency; customer satisfaction |
| Fleet management | High fuel cost; under-utilized assets | Improved fleet efficiency and routing; KPI monitoring to improve utilization | Operational efficiency; customer satisfaction |
| Vendor management | Manual processes and poor automation; high rental costs and under-utilized equipment; end-to-end system integration not available | SLA improvement; contract visibility and optimization | Operational efficiency; customer satisfaction |
---
Page: 10 / 13
---
10 Oil & Gas transformation | © 2023 Infosys Consulting Reference industry use cases Large integrated Oil & Gas major One of the largest Oil and Gas companies in the world has been using Process Mining to improve the efficiency of its drilling operations. By analyzing data from drilling rigs, this company was able to identify inefficiencies and areas for improvement, such as reducing idle time and optimizing drilling parameters. As a result, the firm was able to reduce drilling time and costs while improving safety and environmental performance.<|endoftext|>A European Oil & Gas company This company used Process Mining to optimize its maintenance processes for offshore platforms. By analyzing maintenance data, the firm was able to identify patterns and trends which improved the reliability of its equipment, reduced downtime, and lowered maintenance costs. The company also used Process Mining to identify opportunities for process standardization and optimization, resulting in further improvements in efficiency and cost savings.<|endoftext|>A large National Oil Corporation (NOC) This NOC used Process Mining to improve its customer service processes. By analyzing customer service data, the NOC was able to identify areas where it could improve its service levels, such as reducing response times and increasing the accuracy of billing. The company also used Process Mining to optimize its meter reading processes, resulting in significant cost savings.<|endoftext|>
---
Page: 11 / 13
---
Process Mining encourages sustainable growth Oil and Gas companies operate in a complex environment with multiple interconnected processes, making it challenging to identify inefficiencies and areas for improvement. Process Mining provides a valuable tool for these companies to gain insights into their operational processes by analyzing data from various sources. By applying Process Mining techniques, Oil and Gas companies can identify bottlenecks, reduce costs, improve efficiency, and enhance the quality of their products and services. The benefits of Process Mining include improved compliance, enhanced decision-making, and increased operational efficiency. Therefore, implementing this technology can help Oil and Gas companies stay competitive and achieve sustainable growth in an ever-changing industry.<|endoftext|>11 Oil & Gas transformation | © 2023 Infosys Consulting
---
Page: 12 / 13
---
MEET THE EXPERTS SACHIN PADHYE Associate Partner, SURE Sachin.Padhye@infosys.com 12 Sachin works with large Oil and Gas companies in the upstream, midstream, and downstream areas to frame their digital strategy across customer and employee experiences. He helps clients quantify value, beginning with industry opportunities and ending with decisions built with big data, analytical tools and visualizations and narratives. His current focus is digital data monetization, where he helps companies put a monetary value to the data that is used to execute their digital strategy. NAVEEN KAMAKOTI Principal, SURE Venkata_Kamakoti@infosys.com Naveen has over 18 years’ experience in digital business transformation initiatives, focusing on process consulting, re-engineering and mining, as well as business architecture and consulting across information and professional services, plus Oil & Gas (upstream) domains. He leads the process consulting and transformation community of practice for Infosys Consulting.<|endoftext|>SHRUTI JAYARAMAN Senior Consultant, SURE Shruti.Jayaraman@infosys.com Shruti has four years’ experience in business process improvement and digital transformation initiatives with a focus on process modelling, analysis, and mining. She’s worked with upstream Oil & Gas clientele, across financial planning, process design and optimization, third-party hiring and government reporting areas for the last two years. She has administered trainings in process modelling using ARIS and has worked in Agile methodologies. SOHINI DE Consultant, SURE Sohini.De@infosys.com Sohini has over four years’ experience in process transformation initiatives focusing on business process improvement, process design, modeling and mining. She has two years’ experience in the upstream energy industry in marine logistics, and integrity inspection. She has conducted trainings in ARIS Designer platform for process modeling and has hands-on experience working in Agile methodologies. Oil & Gas transformation | © 2023 Infosys Consulting
---
Page: 13 / 13
---
consulting@Infosys.com InfosysConsultingInsights.com LinkedIn: /company/infosysconsulting Twitter: @infosysconsltng About Infosys Consulting Infosys Consulting is a global management consulting firm helping some of the world’s most recognizable brands transform and innovate. Our consultants are industry experts that lead complex change agendas driven by disruptive technology. With offices in 20 countries and backed by the power of the global Infosys brand, our teams help the C- suite navigate today’s digital landscape to win market share and create shareholder value for lasting competitive advantage. To see our ideas in action, or to join a new type of consulting |
Continue # Infosys POV
firm, visit us at www.InfosysConsultingInsights.com. For more information, contact consulting@infosys.com © 2022 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names, and other such intellectual property rights mentioned in this document. Except as expressly permitted, neither this document nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printed, photocopied, recorded or otherwise, without the prior permission of Infosys Limited and/or any named intellectual property rights holders under this document.
***
|
# Infosys Whitepaper
Title: Trends in Performance Testing and Engineering – Perform or Perish
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 4
---
WHITE PAPER TRENDS IN PERFORMANCE TESTING AND ENGINEERING – PERFORM OR PERISH Hemalatha Murugesan
---
Page: 2 / 4
---
One is spoilt for choice in today's high-consumerism, materialistic world, leaving end users highly excited, vulnerable, and extremely demanding. The array and diversity of choices is not limited to technology, gadgets, smartphones, wearable devices, sports vehicles, FMCG goods, white goods, tourism, food, etc., but extends into every single aspect of one's day-to-day life. In today's world, no business can survive if a product, service, or training is taken to market without being online – aka digitization and mobilization. Stepping two decades back, one wonders how business was done and reached various parts of the globe! With intense competition and aggressive moves to extend market footprint, every organization is launching multiple products or services catering to different user groups, age segments, and geographies, with customization, personalization or rather "mood-based" offerings coupled with analytics, user preferences, predictions, etc. Businesses and IT organizations are moving at a rapid pace to roll out launches using the latest cutting-edge technology and migrating to newer technologies, with the sole objective of not only retaining existing customers but also adding to their base and becoming the market-dominant leader.<|endoftext|>As such, every application catering to diverse populations 24x7, 365 days a year must be Available, Scalable for future growth, Predictable and Reliable, lest end users lose their tolerance. Where 8 seconds was earlier the norm for a page to render, the expectation has now reduced to less than 2 seconds, and with almost all launches going the "app" way, response times are expected in milliseconds.<|endoftext|>DevOps, Agile, SMAC, migration to cloud, VPN, Big Data, MongoDB, Cassandra – phew – the list is endless, with newer technologies and tools being launched by the day to address the ever-expanding technology landscape. The rush to absorb these technologies is also increasing, leading to high vulnerability in application performance. There is a significant change in the way Performance Testing and Engineering, including monitoring, is being performed, and it is continuously evolving and becoming more complex.<|endoftext|>With increased digitization and mobilization being the norm, data analytics and testing at scale will play a major role in application performance to ensure a better customer experience. DevOps and Agile development will "shift left", forcing Performance Testing and Engineering teams to make early assumptions about customer behaviors and needs, viable experiences, and growth spikes, so they can run tests quickly and validate those assumptions. Early predictability, leveraging analytics on application performance, will change gears in the way we approach performance testing and engineering.<|endoftext|>External Document © 2018 Infosys Limited
---
Page: 3 / 4
---
Performance Driven Development (PDD), i.e., a performance engineering (PE) focus right from requirements through production rollout and post-production monitoring, is one of the key trends observed, as it helps identify bottlenecks early and tune them. With DevOps adoption creating a close handshake between development and operations teams to address real production volumetrics, PDD helps achieve this to a large extent. This in turn demands the adoption of APM tools and methodologies.<|endoftext|>The focus has now shifted to early involvement of Performance Engineering in the application lifecycle – validating PE proactively rather than leaving it to be addressed just prior to rollout, which was the reactive approach followed earlier. Due to this, the industry is seeing an increasing number of tools and processes supporting a proactive PE approach.<|endoftext|>Businesses are demanding launches at a faster pace with high Availability and Resiliency, yet no compromise on quality and security. All this at less TCO! Automation is the key across the SDLC and widely prevalent in testing. As such, every activity, be it NFR gathering, scripting, modeling, test environment setup, releases, configuration management, etc., is getting automated, and this includes performance testing and engineering activities across these phases as well. A Performance Engineering framework adapting to agile/CI-CD methodologies for web-based, thick-client, batch job-based apps, etc. needs to be developed, suited to the ever-changing technology landscape.<|endoftext|>Financial institutions have been among the front runners in IT adoption, where they need to meet regulatory compliance requirements inclusive of performance. With multiple banking packages/products to choose from – trading platforms and investment management products like Appian Way, middle and back-office products like Alteryx, Analytical Workbenches, etc. – clients are looking for standard benchmarks/baselines of these products and their impact on PE before rolling out a full-blown implementation. With almost all apps offering mobile channels to interact with their systems, there is an intense need to do PE at every stage, at every component and layer, and across all stacks. A few impacting trends are: Omni-channel retail customer experience – Performance testing and engineering to ensure consistent user experience across various touch points for application rewrite or new application development projects.<|endoftext|>Technology and infrastructure rationalization – Mostly driven by cost optimization and compliance requirements, PT&E is done to ensure zero disruption in service and user experience during data center migration/consolidation, technology stack upgrades, or movement from on-premise to cloud.<|endoftext|>External Document © 2018 Infosys Limited
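To make the shift-left idea above tangible, here is a hedged sketch of a lightweight response-time gate that a CI/CD pipeline could run on every build, well before a full load test. The endpoint, sample size, and SLA threshold are illustrative assumptions, and a probe like this complements rather than replaces APM tooling and proper load testing.

```python
# Hedged sketch of a shift-left performance gate: sample an endpoint's latency and fail
# the build if the 95th percentile breaches an agreed SLA. URL, sample count and the
# 2-second threshold are illustrative assumptions.
import statistics
import time

import requests

URL = "https://staging.example.org/api/quote"  # hypothetical staging endpoint
SAMPLES = 30
SLA_P95_SECONDS = 2.0


def measure_latencies():
    latencies = []
    for _ in range(SAMPLES):
        start = time.perf_counter()
        requests.get(URL, timeout=10)
        latencies.append(time.perf_counter() - start)
    return sorted(latencies)


if __name__ == "__main__":
    lat = measure_latencies()
    p95 = lat[int(0.95 * (len(lat) - 1))]
    print(f"median={statistics.median(lat):.3f}s  p95={p95:.3f}s")
    if p95 > SLA_P95_SECONDS:
        raise SystemExit("performance gate failed: p95 latency above SLA")
```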
---
Page: 4 / 4
---
Conclusion For any organization to survive in today's competitive world, it is important that its products/applications are Scalable, Predictable, and Available, exciting the user and thereby ensuring loyalty as well as converting interest into business. However rich the application features are, and however well they are functionally tested, if the application does not respond to user expectations it is only natural to lose the customer. With changing and demanding trends, it is important that Performance Testing and Engineering are considered at all layers and components for the successful use of the products launched.<|endoftext|>Hemalatha Murugesan currently heads Performance Testing and Engineering in IVS at Infosys. She has been involved in setting up, pioneering, incubating and evolving emerging testing services like cloud testing, test data management, infrastructure testing, virtualization, TEMS and other upcoming specialized services at Infosys. Hemalatha has been instrumental in developing the Enterprise Performance Testing Solutions offering, which provides performance testing solutions and services, and has also set up the state-of-the-art performance testing lab, both at Infosys.<|endoftext|>Bulk customer data handling – Retail and institutional customers are given more control to deal with their data. As a result, interfaces such as dashboards, search, profiles, and homepages are becoming more interactive and data-heavy. PT&E ensures the performance SLAs are within acceptable limits for all user interactions.<|endoftext|>Tackle integration challenges – PT&E has been carried out to deal with scalability and performance issues arising due to enterprise and partner integration, middleware upgrades, etc.<|endoftext|>With intense pressure to reduce cost, banks are looking at embracing cloud, DC consolidation and solutions around them. Consolidation of their LOBs and tools, by encouraging a COE/NFT factory setup, is pursued to reduce cost. Banks are also moving to deploying software like SAP, PEGA, and Siebel due to their lower maintenance cost and more predictable quality compared to home-grown solutions. Besides, PE for apps hosted in cloud and virtualized environments is also picking up due to the on-demand resource provisioning and sharable hardware infrastructure that minimize TCO. Performance simulation and engineering of day-in-the-life scenarios, for example in a line of business, is carried out through end-to-end PT of disparate systems. An example workflow of a mortgage loan, mutual fund, or credit rating process is assessed for performance simulation.<|endoftext|>While the ask has always been to have an exact replica of the production environment with the right volumes for performance testing, the move is to test right with the next best environment whose performance on production can be predictably replayed. Hardware capacity planning and seamless integration into the PT/PE framework, especially |
Continue # Infosys Whitepaper
for new automated infrastructure spawned through Chef recipes, automated platform build-outs, and other large infrastructure-related programs owing to mergers and acquisitions, data center migrations, etc., are also gaining importance.<|endoftext|>DB virtualization for PT&E also seems to be another emerging trend, though it is not implemented on a large scale today compared to service virtualization. Service virtualization and agile or component-level (layered) performance testing and engineering are also gaining prevalence, as there are so many components and interfaces in financial products, with production monitoring and capacity prediction modeling based on them. Another trend we are seeing in the retail space is applications built using microservices, Docker containers, etc., which require tweaks to the monitoring and analysis approach. An interesting emerging trend is the hybrid data center approach, i.e., part of the system is hosted in the cloud while part of it is hosted in a permanent data center. This requires expanding the list of performance KPIs to cover all aspects of both DCs.<|endoftext|>An upcoming hot trend is front-end performance testing and engineering, owing to the popularity of RIA/Web 2.0 and the need to provide the same personalized user experience across various media.<|endoftext|>© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
***
|
# Infosys Whitepaper
Title: Performance testing Internet of Things (IoT)
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 4
---
VIEW POINT PERFORMANCE TESTING INTERNET OF THINGS (IOT) - Yakub Reddy Gurijala Senior Technology Architect
---
Page: 2 / 4
---
External Document © 2018 Infosys Limited The Internet of Things (IoT) is a network of connected systems, devices, and sensors, and this connectivity enables these objects to share data. It is a platform that allows users to manage the data and control the devices remotely, based on the requirement.<|endoftext|>IoT has gained momentum in recent years due to Internet availability and the evolution of cloud and microservices. According to Gartner, IoT-connected devices are growing at 30 percent year-on-year and there will be 20 billion connected devices by 2020 (more than the human population). The IoT business is growing at 22 percent year-on-year and will reach US$3,010 billion. People, governments, and businesses will be hugely affected by IoT in the coming years, resulting in smart cities, smart homes, smart hospitals, and so on. IoT devices produce data continuously. This data needs to be saved and analyzed for future decisions, and these decisions may be immediate or may be taken later using business intelligence (BI) analytics. IoT helps improve operational performance and cost optimization. To achieve this, IoT systems must be built for high performance and scalability. To measure these two key attributes of an IoT application, it is important to understand the business value for which it is built. In addition, to measure performance, it is necessary to simulate real-world workload models, which can be created using business requirements, historic data and future growth requirements, types of devices, network conditions, usage patterns, and geographic spread. Application usage patterns are arrived at by analyzing the IoT application logs for peak hours and normal hours. Using these data points, different workload conditions (real-world load tests / simulations) can be created for peak usage, normal usage, future growth, and daylong / multiday simulations.<|endoftext|>IoT performance testing (PT) is a little different from traditional performance testing. The following table illustrates some of the key differences between traditional PT and IoT PT.<|endoftext|>Because of these differences, IoT PT poses a lot of challenges to performance engineers. The sections below describe the different challenges posed by IoT applications and the Infosys solution elements for each of them.<|endoftext|>

| Key differences | Traditional PT | IoT PT |
|---|---|---|
| Simulation | Simulation of users | Simulation of devices / sensors |
| Scale | A few hundred to a few thousand users | A few thousand to a few million devices |
| Amount of data | Sends and receives a large amount of data per request | Sends and receives minimal data per request, but data is shared continuously at a time interval |
| Protocols | Uses standard protocols to communicate | Uses non-standard and new protocols to communicate |
| Requests / responses | In most cases, users create the requests and receive the response | Generally, IoT devices create requests and receive responses, and also receive requests and provide responses |
| BI | Only a few applications have BI as part of testing | BI is a part of IoT; performance needs to be measured by applying load on the IoT application |
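Returning to the workload models mentioned above, the hedged sketch below converts assumed device counts, publish intervals, a peak-hour factor, and a growth rate into the message rates a load test must sustain. Every number in it is an illustrative assumption, not data from a real programme.

```python
# Hedged sketch of an IoT workload model: device counts and publish intervals are turned
# into steady-state, peak and next-year target message rates. All figures are illustrative.
device_classes = [
    # (name, number of devices, seconds between messages)
    ("smart meters",       200_000, 900),  # one reading every 15 minutes
    ("fleet trackers",      25_000,  30),
    ("industrial sensors",   5_000,   5),
]

PEAK_FACTOR = 3.0      # assumed peak-hour multiplier from (hypothetical) historic logs
ANNUAL_GROWTH = 0.30   # assumed 30% year-on-year device growth

steady = sum(count / interval for _, count, interval in device_classes)
peak = steady * PEAK_FACTOR
next_year_peak = peak * (1 + ANNUAL_GROWTH)

print(f"steady-state load : {steady:,.0f} messages/sec")
print(f"peak-hour load    : {peak:,.0f} messages/sec")
print(f"peak after growth : {next_year_peak:,.0f} messages/sec")
```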
---
Page: 3 / 4
---
External Document © 2018 Infosys Limited Performance testing challenges Protocols and performance testing tool IoT does not have a standard protocol set to establish connectivity between the IoT application and devices. The IoT protocols used range from HTTP, AllJoyn, IoTivity, MQTT, CoAP, AMQP, and more. These protocols are still in the early phases of development, and different IoT solution vendors come up with specific protocol standards (sets). These protocols are continuously evolving with IoT applications. Since these are new technologies / protocols, current performance testing tools may or may not support them. Geographical spread and network conditions IoT devices / sensors are spread across the world and use different networks to connect to the IoT servers to send and receive data. As part of performance testing, there is a need to simulate devices from different locations (to simulate latency) with the required network technologies like 2G, 3G, 4G, Bluetooth, etc.<|endoftext|>Load conditions It is necessary to load test applications by simulating real-world conditions. These patterns are complex in nature, and it is extremely difficult to collect and predict the data. To recreate real-world load conditions, we may end up simulating millions of devices.<|endoftext|>Real-time decision making Some IoT implementations may require the data from a device to be processed at runtime, with the corresponding decision taken based on the data received. These decisions are generally notifications / requests to different devices / sensors or different systems which perform a particular action. As part of testing, these notifications / requests need to be monitored for performance (the time taken to generate the notification / request from the data received by the IoT application). IoT application monitoring and BI processing Monitoring is essential for any application. It helps understand system behavior under real-world conditions. For IoT applications, both the application and the backend BI systems need to be monitored. This helps understand data processing, both in terms of volume and accuracy. Infosys IoT PT solution Infosys created a comprehensive framework using JMeter to support all the needs of IoT PT. Protocols and performance testing tool Infosys selected JMeter as the performance test tool to conduct PT. JMeter already supports most of the IoT protocols like HTTP, CoAP, AMQP, MQTT, and Kafka. As IoT is an emerging area, new protocols are being developed over time. To on-board new protocols, Infosys has come up with a protocol framework using the protocol SDK and extending JMeter. Using these JMeter extensions, scripts can be prepared to simulate new protocol requests and devices.<|endoftext|>Geographical spread and network conditions To simulate geographical spread, JMeter is integrated with cloud solutions like Amazon Web Services (AWS) to set up load generators across different geographies. Using AWS integration, JMeter is able to generate traffic from different locations of the world to the IoT application to mimic geographical spread and network latency. Infosys has an in-house IP-based solution, the Infosys Network Simulation tool (iNITS), to simulate the different network conditions required for any requests which use the transmission control protocol (TCP). 
We have integrated the iNITS solution with JMeter to simulate the different network conditions required by IoT PT.<|endoftext|>Load conditions To collect accurate real-world scenarios, Infosys developed different tools / frameworks like a non-functional requirements (NFR) questionnaire, workload modeling tools, and others. These tools / frameworks reduce the requirement-gathering effort and collect the information more accurately. To simulate millions of devices, JMeter is integrated with the cloud using automated scripts. These scripts create the required number of load generators in the cloud, set up JMeter, copy the scripts and test data, execute the tests, collect the results, shut down the load generators which were created, and process the results.<|endoftext|>Real-time decision making Notifications, which are sent to other devices / sensors / systems, need to be monitored using stubs / service virtualization technologies. IoT application logs are collected and analyzed for the processing time and response time of the real-time processing and decision-making scenarios under different load conditions.<|endoftext|>IoT application monitoring and BI processing Infosys created a predefined process / performance metrics collection to monitor the systems (web / app / database layers) deployed in the cloud and the data center. These metrics are analyzed to uncover possible performance bottlenecks. If BI systems are built using batch jobs, then enough test data needs to be created using performance test scripts and the batch jobs executed to monitor the BI system. If real-time BI systems are implemented using hot channels, then the BI systems need to be monitored as part of different performance tests by generating different amounts of data per second / minute / hour. Using this approach, IoT applications are comprehensively monitored and performance results are benchmarked against different load conditions.<|endoftext|>IoT PT resources Infosys presently has 1200+ performance testing resources with experience in testing different types of applications, technologies, and tools, and more than 500 employees have working experience with JMeter. Infosys has dedicated resources who are trained on IoT performance test frameworks (JMeter, new protocols, network simulation, and IoT monitoring). These resources continuously explore the |
Continue # Infosys Whitepaper
opportunities to improve the framework, tool, and protocols supported.<|endoftext|>
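To show what "simulating devices" looks like at the protocol level, the hedged sketch below publishes telemetry for a handful of virtual devices with the open-source paho-mqtt client. It is only an illustration of the concept, not the JMeter-based Infosys framework described above; the broker address, topic layout, payload fields, and 1.x-style client constructor are all assumptions, and a real test would distribute far more simulated devices across cloud load generators.

```python
# Hedged illustration of MQTT device simulation (not the JMeter framework described above).
# Broker, topics and payload fields are assumptions; scale and distribution are far smaller
# than a real IoT performance test would use.
import json
import random
import time

import paho.mqtt.client as mqtt

BROKER = "test.mosquitto.org"   # public test broker; a real test targets the IoT platform
DEVICES = 50
INTERVAL_SECONDS = 5

clients = []
for i in range(DEVICES):
    c = mqtt.Client(client_id=f"sim-device-{i}")  # paho-mqtt 1.x-style constructor
    c.connect(BROKER, 1883, keepalive=60)
    c.loop_start()
    clients.append(c)

for _cycle in range(3):  # a few publish cycles per simulated device
    for i, c in enumerate(clients):
        payload = json.dumps({"deviceId": f"sim-device-{i}",
                              "temperature": round(random.uniform(20, 90), 1),
                              "ts": time.time()})
        c.publish(f"plant/sensors/{i}/telemetry", payload, qos=1)
    time.sleep(INTERVAL_SECONDS)

for c in clients:
    c.loop_stop()
    c.disconnect()
```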
---
Page: 4 / 4
---
Infosys IoT PT – Key features and benefits
Features:
• Support for different communication protocols such as HTTP, REST over HTTP, MQTT, AMQP, CoAP, Kafka, and WebSockets
• Supports different network simulations for all types of protocols
• Framework available to onboard new protocols
• Supports cloud-based load generation, with automated scripts available to generate the load from the cloud
Benefits:
• As the solution is based on an open-source tool, there is no license cost for the performance test tool; cost applies to network simulation only
• No need for dedicated hardware, as device simulation can be done from the cloud
• Faster time to market
• Quick onboarding of new protocols
Conclusion Infosys created a comprehensive solution for IoT performance testing, which covers the specific needs / demands of IoT. Currently, the solution supports all leading IoT protocols and network simulations. The Infosys IoT performance solution is very cost-effective when compared to any standard performance test tool.<|endoftext|>We have a dedicated workforce trained on IoT performance testing to support the growing demands of IoT PT. Using the Infosys IoT PT solution, clients can save 80 to 90 percent of tool cost and reduce go-to-market time by 20 percent.<|endoftext|>References
http://www.gartner.com/newsroom/id/3165317
https://www.infosys.com/IT-services/validation-solutions/white-papers/Documents/successful-network-impact-testing.pdf
© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
***
|
# Infosys Whitepaper
Title: QA Strategy to Succeed in the Digital Age
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 4
---
VIEW POINT QA STRATEGY TO SUCCEED IN THE DIGITAL AGE
---
Page: 2 / 4
---
“Digital” is the new buzzword for organizations. Across industries, organizations are at various stages of their digital transformation journeys. For some, this might mean reimagining their entire businesses around digital technologies; for others, it means incorporating aspects of digital into their existing ways of working.<|endoftext|>Examples of this are abundant across industries. In retail, the margin between online and offline is blurring. Retailers are incorporating technologies such as augmented reality and beacons to provide interactive in-store experiences to users. Targeted campaigns are being delivered to customers based on their proximity to a store. On one hand, customers have the option to order online and pick up in-store; on the other, they can pay in-store and have the purchase delivered at home. In the insurance industry, companies are enabling customers to purchase insurance anytime, anywhere, using pre-populated information from their Facebook profiles.<|endoftext|>What this means is that IT is now at the forefront of business transformation. Technology is the driver for business improvements and hence IT departments have a greater role to play in business success. Consequently, the stakes are higher than ever for IT to deliver better value, faster and more efficiently.<|endoftext|>However, the path to a successful digital transformation is fraught with multiple challenges, of which a key one is inherent to the most important aspect of digital transformations – the changing nature of customer interactions. While the digital revolution brings organizations newer models and channels of interaction with customers, the success of businesses is also becoming increasingly dependent on the quality of these interactions. The end customer experience is now the single most important factor in a business’ success. Customers today are more demanding, and much more likely to switch loyalties if the customer experience is not up to their expectations. Thus, the most important factor for the success of digital transformations is ensuring a superlative end customer experience through the quality assurance function.<|endoftext|>However, can traditional testing organizations that follow age-old ways provide quality assurance in the new scheme of things? To answer this, let us look at some of the imperatives of digital assurance.<|endoftext|>Focus on Customer Experience As discussed already, the nature of customer interactions has undergone a great transformation in the recent past. Businesses are increasingly engaging with customers through a multitude of channels such as web, mobile, and social media, in addition to the existing traditional channels. A single customer transaction can now span multiple online and offline channels. Hence, customer experience across each of these channels is important; but so is providing similar and seamless experiences across all channels, as well as maintaining consistency in messaging throughout. This also requires a change in the approach to quality assurance. QA needs to shift focus from traditional functional validation to customer experience validation across the digital landscape. This requires a 360° view of quality, encompassing functional and non-functional aspects, and cutting across channels and technologies. The anytime-anywhere nature of customer transactions poses challenges in all aspects of testing. 
With the increase in online transactions, usage of cloud infrastructure, the multitude of interconnected applications and devices, and the advent of big data analytics, there are newer challenges to application security and data privacy. Ensuring the security of applications from any breaches, along with adherence to security and data privacy guidelines is essential for ensuring a good customer experience, and business continuity. Comprehensive security assurance is thus a key component of digital assurance. Application performance is another key determinant of success. Users are much more likely to uninstall an app or abandon an online transaction with the slightest of reductions in application performance. Unlike traditional QA, performance evaluation needs to be incorporated at all stages of the application development lifecycle. Performance evaluation needs to be augmented with performance monitoring in production to ensure availability of business critical applications. Strategies for compatibility, usability, and accessibility testing should also be optimized to cover multiple customer touch points and technologies like desktop, mobile, and other connected devices. There is also an increasing focus on providing personalized experiences to customers. In addition to functional validation, personalized content validation across channels, and validation of digital content and assets also needs to be incorporated. Another aspect of the digital world is the constant customer feedback and inputs, which have become important drivers for business decisions. Companies are co-creating products with customers, or using customer inputs to improve existing products. This is also now extending to using customer inputs to improve IT platforms and services. In this constantly evolving landscape, a continuous feedback mechanism is also important for QA organizations to understand the end customer requirements and preempt customer issues. End customer feedback, learnings from production, and findings from previous testing cycles can all serve as inputs to continuously improve testing effectiveness and efficiency, and provide a truly 360° view of application quality. External Document © 2018 Infosys Limited
---
Page: 3 / 4
---
Manage Complexity
An important challenge that digital transformation brings is the increasing complexity of the application landscape. The IT landscape now needs to support multiple newer applications built on disparate technologies. The interconnectedness of applications, as well as the requirement to test them on different device configurations, poses additional challenges for QA teams. On one hand, assurance needs to be provided for all application layers, from the database to the UI, to isolate issues; on the other hand, end-to-end business process assurance encompassing multiple applications is equally crucial. The testing strategy should be able to balance these requirements and provide optimal test coverage, ensuring early isolation of issues. A well-planned approach involving service virtualization, a judicious mix of automation tools, test data management, and an optimized testing scope should be implemented. Thus, the need of the hour is a holistic assurance strategy encompassing all aspects of validation, which is also optimized for the changing application landscape.

{{ img-description : a group of people sitting around a table working with a laptop, in the style of light teal and crimson, uniformly staged images, light gray and navy, iso 200, yankeecore, focus on joints/connections, gravure printing }}

Increase Agility
With the digital revolution, newer technologies are being adopted at a much faster pace. Organizations are now trying to pilot newer and sometimes unproven technologies in a bid to enhance their business. For QA teams to support this effectively, they have to be extremely nimble and quick to learn. Teams should be tuned in to technological changes, be able to innovate quickly, and come up with optimal solutions for new testing challenges. In general, development cycles are getting progressively shorter, with businesses vying to provide better features, faster. Development methodologies are moving to Agile and DevOps. Consequently, there is increasing pressure on QA teams to reduce the turnaround time to deliver code to production. This also has to be balanced with the requirement to support more and more devices and platforms, which needs a two-pronged approach: optimize testing requirements and increase the speed of testing. With limited time to test, it is crucial to adopt methods to optimize testing requirements so that the time is well spent on validating critical functionalities. While automation has been the key enabler to increase testing effectiveness, it should not be limited to test execution alone. It should also encompass the entire testing lifecycle, from requirements analysis to reporting. Efficiencies need to be built into the testing process by a combination of tools, accelerators, and reusable test artifacts. Early automation strategies can be deployed to ensure the availability of automated test scripts for system testing.

To conclude, an assurance strategy in the digital world has to address the following:
• Focus on customer experience, rather than functional validation
• Provide 360° assurance, encompassing different aspects of testing as well as end-to-end validation
• Focus on continuous learning and innovation
• Continuously optimize and accelerate testing

External Document © 2018 Infosys Limited
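The "service virtualization" element mentioned above can be pictured with a small stub: when a dependent downstream system is unavailable or expensive to hit, tests run against a virtualized response instead. The sketch below uses the open-source `responses` library to simulate a dependent pricing service; the URL and payload are invented for illustration and are not from this paper.

```python
# Minimal service-virtualization sketch using the `responses` library to stub
# a downstream dependency during a functional test. URL/payload are illustrative.
import requests
import responses

PRICING_URL = "https://partner.example.com/api/price"  # hypothetical dependency

@responses.activate
def test_checkout_uses_virtualized_pricing_service():
    # Register a canned reply so the test never calls the real partner system.
    responses.add(
        responses.GET,
        PRICING_URL,
        json={"sku": "ABC-123", "price": 49.99, "currency": "USD"},
        status=200,
    )

    reply = requests.get(PRICING_URL, timeout=5)

    assert reply.status_code == 200
    assert reply.json()["price"] == 49.99

if __name__ == "__main__":
    test_checkout_uses_virtualized_pricing_service()
    print("virtualized pricing check passed")
```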
---
Page: 4 / 4
---
© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
***
# Infosys Whitepaper
Title: Quantifying Customer Experience for Quality Assurance in the Digital Era
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 8
---
WHITE PAPER QUANTIFYING CUSTOMER EXPERIENCE FOR QUALITY ASSURANCE IN THE DIGITAL ERA Abstract Post the pandemic, the new normal situation demands an increased digitalization across all industry sectors. Ensuring top class customer experience became crucial for all digital customer interactions through multiple channels like web, mobile, chatbot, etc. Customer experience is an area in which neither the aesthetics nor the content can be compromised as that will lead to severe negative business impact. This paper explains various automation strategies that can enable QA teams to provide a unified experience to the end customers across multiple channels. The focus is to identify the key attributes of customer experience and suggest metrics that can be used to measure its effectiveness.<|endoftext|>{{ img-description : rateyourexperience with a palm and pen with four stars, in the style of interactive, smartphone footage, digital }}
---
Page: 2 / 8
---
External Document © 2022 Infosys Limited

Introduction
Customer experience has always been a dynamic topic: it is becoming more personalized day by day and varies according to individual preferences. Customer experience is hard to measure, which makes the work even more difficult for quality assurance teams. The factors that amplify customer experience include not only functional and visual factors like front-end aesthetics, user interface, and user experience, but also non-functional and social aspects like omnichannel engagement, social media presence, customer sentiment, accessibility, security, and performance.

Enterprises encounter various challenges in providing a unified experience to their end customers across multiple channels, such as:
• Lack of information or mismatch in information
• Quality of content not up to the standard
• Lack of usability in cross-navigation to make it intuitive and self-guided
• Maintaining a consistent look and feel and functional flow across various channels
• Improper content placement
• Inappropriate format and alignment
• Performance issues across local and global regions
• Violation of security guidelines
• Nonconformance to accessibility as per the Web Content Accessibility Guidelines (WCAG)
• Lack of social media integration

Why do we need to measure the Customer Experience?
Quality assurance is required across all of these functional, non-functional, and social aspects of customer experience. Since customer experience is hyper-personalized in the digital era, a persona-based experience measurement is required. Conventional quality assurance practices need to change to evaluate all aspects of the customer's journey across multiple channels, comprehensively.

Figure 1: Challenges in Quality Assurance of Customer Experience
• Lack of a single view of factors affecting customer experience
• Traditional testing fails to adapt to real-time learning and lacks a feedback loop
• Lack of a persona-based test strategy
• Quantifiable CX measurements not available
• Adapting the experience uniquely to each customer
• Testing is inward-focused rather than customer-focused
• Testing based on business/technical requirements, resulting in gaps against customers' expectations
• Vast sea of social messages and user feedback data from social media platforms
---
Page: 3 / 8
---
External Document © 2022 Infosys Limited

Experience Validation Needs to Cover Multiple Areas of a Customer Journey
While organizations try to focus on enhancing the customer experience, there are various areas that need to be validated and remediated independently for functional, non-functional, and social aspects. The current testing trend covers the basic functional and statistical aspects, while emerging testing areas will cover behavioral aspects and focus more on a customer-centric approach, such as using AI to enhance the quality of the digital impression with personalized customizations. The table below provides information on the areas where quality assurance is required, along with popular tools for automation.

1. Visual Conformance
   Key aspects/metrics: webpage content alignment, font size, font color, web links, images, audio files, video files, forms, tabular content, color scheme, font scheme, navigation buttons, theme, etc.
   Current testing trend: A/B testing, style guide check, font check, color check, usability testing, readability testing
   Emerging testing trend: persona-based testing
   Tools: Siteimprove, Applitools, SortSite

2. Content
   Key aspects/metrics: checking whether images, video, audio, text, tables, forms, links, etc. are up to the standards
   Current testing trend: A/B testing, voice quality testing, streaming media testing, compatibility testing, internationalization/localization testing
   Emerging testing trend: personalized UX testing, CSS3 animation testing, 2D illustrations, AI-powered translators
   Tools: Siteimprove, SortSite

3. Performance of webpage
   Key aspects/metrics: loading speed, Time to Title, DNS lookup speed, requests per second, conversion rate, Time to First Byte, Time to Interact, error rate
   Current testing trend: performance testing, network testing, cross-browser testing, multiple device testing, multiple OS testing
   Emerging testing trend: performance engineering, AI in performance testing, chaos engineering
   Tools: GTmetrix, Pingdom, Google Lighthouse, WebPageTest, etc.

4. Security
   Key aspects/metrics: conformance with security standards across geographies; secured transactions, cyber security, biometric security, user account security
   Current testing trend: application security testing, cyber assurance, biometric testing, payment testing
   Emerging testing trend: blockchain testing, brain-computer interface (BCI) testing, penetration testing, facial recognition
   Tools: Sucuri SiteCheck, Mozilla Observatory, Acunetix, Wapiti

5. Usability
   Key aspects/metrics: navigation on website, visibility, readability, chatbot integrations, user interface
   Current testing trend: usability testing, readability testing, eye tracking, screen reader validation, chatbot testing
   Emerging testing trend: AI-led design testing, emotion tracking, movement tracking
   Tools: Hotjar, Google Analytics, Delighted, SurveyMonkey, UserZoom

6. Web Accessibility
   Key aspects/metrics: conformance to web accessibility guidelines as per geography
   Current testing trend: checking conformance to guidelines [Web Content Accessibility Guidelines (WCAG), Disability Discrimination Act (DDA), etc.]
   Emerging testing trend: persona-based accessibility testing
   Tools: Level Access, AXE, Siteimprove, SortSite

7. Customer Analytics
   Key aspects/metrics: Net Promoter Score, Customer Effort Score, customer satisfaction, customer lifetime value, customer churn rate, average resolution time, conversion rate, percentage of new sessions, pages per session
   Current testing trend: sentiment analytics, crowd testing, real-time analytics, social media analytics, IoT testing
   Emerging testing trend: AR/VR testing, immersive testing
   Tools: Sprout Social, Buffer, Google Analytics, Hootsuite

8. Social Media Integration
   Key aspects/metrics: click-through rate, measuring engagement, influence, brand awareness
   Current testing trend: measuring social media engagement, social media analytics
   Emerging testing trend: AR/VR testing, advertising playbook, streaming data validation
   Tools: Sprout Social, Buffer, Google Analytics, etc.

Table 1: Holistic Customer Experience Validation and Trends
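To show how a few of the webpage-performance metrics in Table 1 (Time to First Byte, loading speed, error rate) might be captured programmatically, here is a small sketch based on plain HTTP timing. It approximates TTFB from the client side only and is not a substitute for the tools named in the table; the page URLs are assumptions.

```python
# Rough client-side approximation of a few Table 1 performance metrics.
# URLs are illustrative assumptions; real audits would use Lighthouse-class tools.
import time
import requests

PAGES = ["https://example.com/", "https://example.com/products"]  # hypothetical pages

def measure(url: str) -> dict:
    start = time.perf_counter()
    with requests.get(url, stream=True, timeout=15) as resp:
        ttfb = resp.elapsed.total_seconds()   # time until response headers arrived
        _ = resp.content                      # force full body download
        total = time.perf_counter() - start   # rough load time for the HTML document
        return {"url": url, "status": resp.status_code, "ttfb_s": ttfb, "load_s": total}

if __name__ == "__main__":
    results = [measure(u) for u in PAGES]
    error_rate = sum(r["status"] >= 400 for r in results) / len(results)
    for r in results:
        print(f'{r["url"]}: TTFB {r["ttfb_s"]:.3f}s, load {r["load_s"]:.3f}s, HTTP {r["status"]}')
    print(f"error rate: {error_rate:.0%}")
```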
---
Page: 4 / 8
---
External Document © 2022 Infosys Limited

Emerging Trends in Customer Experience Validation
Below are a few of the emerging trends that can help enhance the customer experience. QA teams can use quantifiable attributes to understand exactly where their focus is required.

Telemetry Analysis using AI/ML in Customer Experience
Telemetry data collected from various sources can be utilized to analyze the customer experience and implement the appropriate corrective action. These sources could be social media feeds, the various testing tools mentioned in Table 1, web pages, etc. Analytics is normally done through custom-built accelerators using AI/ML techniques. Some of the common analytics are listed below:
• Sentiment Analytics: The sentiment of a message is analyzed as positive, negative, or neutral
• Intent Analytics: Identifies the intent as marketing, query, opinion, etc.
• Contextual Semantic Search (CSS): An intelligent search algorithm that filters messages into a given concept. Unlike keyword-based search, the search runs over a dump of social media messages for a concept (e.g., price, quality) using AI techniques
• Multilingual Sentiment Analytics: Analyzes sentiment across languages
• Text Analytics, Text Cleansing, Clustering: Extracting meaning out of text through language identification, sentence breaking, sentence clustering, etc.
• Response Tag Analysis: Filters pricing, performance, and support issues
• Named Entity Recognition (NER): Identifies who is saying what in social media posts and classifies it
• Feature Extraction from Text: Transforms text using bag-of-words and bag-of-n-grams representations
• Classification Algorithms: Classification algorithms assign tags and create categories according to the content, with broad applications such as sentiment analysis, topic labeling, spam detection, and intent detection (a minimal classification sketch appears at the end of this section)
• Image Analytics: Identifying the context of an image, categorizing images, and sorting them according to gender, age, facial
expression, objects, actions, scenes, topic, and sentiment.

Computer Vision
Computer vision helps derive meaningful information from images, objects, and videos. With the hyper-personalization of customer experience, we need an intelligent and integrated customer experience that can be personalized for each person. While AI plays an important role in analyzing the data and recommending corrective actions, computer vision helps capture objects, facial expressions, etc., and image processing technology can be leveraged to interpret the customer response.

Chatbot
A chatbot is artificial intelligence software that can simulate a conversation (or chat) with a user. The chatbot has become a very important mode of communication, and most enterprises use chatbots for their customer interactions, especially in the new normal scenario.

Some of the metrics to measure customer experience using a chatbot are:
1. Customer Satisfaction: This metric determines the efficiency and effectiveness of the chatbot. Questions that can be included are: Was the chatbot able to understand the customer's query? Was the response provided to the specific query? Was the query transferred to the appropriate agent in case of non-resolution?
2. Activity Volume: How frequently is the chatbot used? Is chatbot usage increasing or decreasing?
3. Completion Rates: This metric measures the time the customer took and the levels of questions asked by the customer. It also captures the instances when the customer opted to get resolution from an agent and left the chatbot. This helps identify opportunities to improve the chatbot further by improving comprehension and scripts and adding other functionalities.
4. Reuse Rates: This metric provides insight into the reuse of the chatbot by the same customer. It also lets us dive deeper into the results of the customer satisfaction metric, understand the new-user versus returning-user ratio, and draw conclusions on the reusability and adoption of the chatbot by customers.
5. Speech Analytics Feedback: Here, speech analytics can be used to examine customer interactions with service agents. Specific elements to note include the tone of the call, the customer's frustration level, the customer's knowledge level, ease of use, etc.

Measuring Tools
Even though various tools are available from startups like BotAnalytics, BotCore, Chatbase, Dashbot, etc., most QA teams measure chatbot performance parameters through AI/ML utilities.
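As a concrete illustration of the "Classification Algorithms" item in the telemetry list above, the sketch below trains a tiny sentiment classifier over labeled customer messages with scikit-learn. The training examples are invented placeholders; a real accelerator would be trained on the enterprise's own labeled social media and feedback data.

```python
# Minimal sentiment-classification sketch (illustrative data, not from the paper).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labeled sample standing in for real social/feedback telemetry.
messages = [
    "Love the new app, checkout was effortless",
    "Great support, my issue was resolved quickly",
    "The page keeps crashing and nobody responds",
    "Terrible experience, I want a refund",
    "Delivery was on time and packaging was neat",
    "Payment failed twice, very frustrating",
]
labels = ["positive", "positive", "negative", "negative", "positive", "negative"]

# Bag-of-words/TF-IDF features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
model.fit(messages, labels)

for text in ["The chatbot understood me instantly", "Refund still not processed, awful"]:
    print(text, "->", model.predict([text])[0])
```

The same pipeline shape extends naturally to the intent analytics and response-tag analysis described above by changing the label set.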
---
Page: 5 / 8
---
External Document © 2022 Infosys Limited

Alternate Reality
Alternate reality includes augmented reality (AR), virtual reality (VR), and mixed reality. AR adds value to an enterprise's customer experience in many ways by providing an interactive environment, and it helps the enterprise stay ahead of its competitors. The data points used to measure it overlap with website and app metrics, with a few new points added.

Some additional metrics to measure customer experience in alternate reality:
1. Dwell time: Total time spent on the platform. More time spent on the platform is the positive outcome.
2. Engagement: Interaction with the platform. The more the engagement, the better the outcome.
3. Recall: Ability to remember. A higher recall rate indicates proper attention and indicates the effectiveness of the platform.
4. Sentiment: Reaction, classified as positive, negative, or neutral. This assists in understanding the user's sentiment.
5. Hardware used: Desktop, laptop, tablet, mobile, etc.

Measuring Tools
There is not much automation done in AR/VR experience validation today. Custom-built utilities using the Unity framework can be explored to measure the AR/VR experience (a small log-based sketch of the dwell-time and engagement metrics follows this section).

Brain Computer Interface
A brain computer interface (BCI) is a system that measures activity of the central nervous system (CNS) and converts it into artificial output that replaces, restores, enhances, supplements, or improves natural CNS output, thereby changing the ongoing interactions between the CNS and its external or internal environment. BCI can help personalize the user experience by understanding the brain signals of a user.

Metrics to measure customer experience in BCI:
1. Speed - Speed of the user's reaction. The higher the speed, the greater the user's interest in the digital content.
2. Intensity - The intensity of the user's reaction towards a digital presence helps in understanding the user's likes and dislikes.
3. Reaction - This helps in understanding the different reactions to a digital interaction.

Measuring Tools
Open-source tools like OpenEXP, Psychtoolbox, etc. can be leveraged to build custom utilities for measuring the above metrics.

{{ img-description : a person is holding an ipad with location & shopping app in the view, in the style of photobashing, rounded shapes, light emerald and violet, blink-and-you-miss-it detail, precision, david brayne, sharp focus }}
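The dwell-time and engagement metrics listed for alternate reality can be derived from ordinary interaction-event logs. The sketch below computes per-session dwell time and an interaction count from a small invented event list; the event schema is an assumed example format, not one defined in this paper.

```python
# Illustrative computation of dwell time and engagement from AR/VR session events.
# The event schema (session_id, ts, event) is an assumed example format.
from collections import defaultdict
from datetime import datetime

events = [
    {"session_id": "s1", "ts": "2024-05-01T10:00:00", "event": "enter_scene"},
    {"session_id": "s1", "ts": "2024-05-01T10:02:30", "event": "interact_object"},
    {"session_id": "s1", "ts": "2024-05-01T10:06:10", "event": "exit_scene"},
    {"session_id": "s2", "ts": "2024-05-01T11:00:00", "event": "enter_scene"},
    {"session_id": "s2", "ts": "2024-05-01T11:01:05", "event": "exit_scene"},
]

sessions = defaultdict(list)
for e in events:
    sessions[e["session_id"]].append(e)

for sid, evs in sessions.items():
    times = [datetime.fromisoformat(e["ts"]) for e in evs]
    dwell_seconds = (max(times) - min(times)).total_seconds()         # dwell time
    interactions = sum(e["event"] == "interact_object" for e in evs)  # engagement proxy
    print(f"session {sid}: dwell {dwell_seconds:.0f}s, interactions {interactions}")
```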
---
Page: 6 / 8
---
External Document © 2022 Infosys Limited

Automation in Customer Experience Assurance
With multiple channels to interact with end customers, companies are keen to ensure digital quality assurance in a faster and continuous way. To reduce time to market, customer experience assurance should be automated with a growing infusion of AI and ML. Further, quality assurance should be carried out in an end-to-end manner, where the developer can ensure quality even before the application is passed to QA. With the adoption of DevSecOps, customer experience assurance should be an ongoing process that goes beyond the conventional QA phase.

Some of the technical challenges in automation are:
• Services offered by the company should provide a seamless experience across all distribution channels (web, mobile, documents, etc.)
• Early assurance during development
• Ensuring regulatory compliance
• A collaboration environment for developers, testers, and auditors with proper governance
• On-demand service availability
• Automating remediation and continuous integration
• Actionable insights
• A scoring mechanism to benchmark
• Integration with test and development tools

The above challenges call for a fully automated customer experience platform, as depicted below (a small scoring sketch follows this section).

Figure 2: Automation approach for evaluating holistic customer experience
[Figure: a customer experience assurance platform spanning user touch points; platform components (online experience audit services, APIs and CI/CD plugins, IDE plugins for shift-left remediation, intelligent application crawler, cognitive analysis, dashboards and reports, subscription and administration, scheduler, tool adapters); accelerators/tools (accessibility analyzer, sentiment analytics, visual consistency checker, usability analyzer, Google APIs, external IPs); external tools (pCloudy, Applitools, ALM, Jira, cloud environments with multiple browsers and devices); and manual/assistive technologies.]

An automation approach should be comprehensive enough to provide a collaboration environment between testers, developers, auditors, and customers. It needs accelerators or external tools to measure and analyze various aspects of customer experience. Cognitive analysis to ensure continuous improvement in customer experience is a key success factor for every enterprise. As shown in the figure, complete automation can never be achieved, as some assistive or manual verification is required; for example, the JAWS screen reader to test text-to-speech output. Also, the platform needs integration capabilities with external tools for end-to-end test automation.
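One of the challenges above is a "scoring mechanism to benchmark". A very simple way to picture this is a weighted aggregation of individual analyzer outputs into one customer-experience score, as sketched below. The analyzer names, weights, and 0-100 scale are illustrative assumptions, not a defined Infosys scoring model.

```python
# Illustrative weighted CX scoring across analyzer outputs (0-100 scale assumed).
# Analyzer names and weights are examples only, not a prescribed model.
ANALYZER_WEIGHTS = {
    "accessibility": 0.25,
    "visual_consistency": 0.20,
    "performance": 0.25,
    "sentiment": 0.20,
    "usability": 0.10,
}

def cx_score(analyzer_scores: dict) -> float:
    """Combine per-analyzer scores (0-100) into a single benchmark score."""
    total = 0.0
    for name, weight in ANALYZER_WEIGHTS.items():
        total += weight * analyzer_scores.get(name, 0.0)
    return round(total, 1)

if __name__ == "__main__":
    run = {"accessibility": 82, "visual_consistency": 91, "performance": 68,
           "sentiment": 74, "usability": 88}
    print("overall CX score:", cx_score(run))  # compare against an agreed benchmark target
```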
---
Page: 7 / 8
---
Conclusion
As the digital world moves towards personalization, QA teams should work on data analytics and focus on analyzing user behavior and activities, leveraging the various available testing tools. They should also focus on adopting new and emerging testing areas like AI-based testing, persona-based testing, immersive testing, 2D illustration testing, etc. These new testing areas can help identify the issues faced in providing the best customer experience, quantify the customer experience, and help improve it.

Since a considerable amount of time, money, and effort is put into QA, to ensure good ROI, QA teams should start treating customer experience as a persona-based experience and work on all the major aspects mentioned above. QA teams should look beyond the normal hygiene followed for digital platforms, dig deeper, and adopt a customer-centric approach in order to make their digital presence suitable to the user in all aspects.

{{ img-description : hands holding a phone with a star rating on it, in the style of interactive experiences, futurist claims, expert draftsmanship, meticulous attention to detail, precision and detail-oriented }} External Document © 2022 Infosys Limited
---
Page: 8 / 8
---
© 2022 Infosys Limited, Bengaluru, India. All Rights Reserved.
Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected About the
Author: Saji V.S, Principal Technology Architect
***
# Infosys POV
Title: Reinventing the CSP Product Lifecycle Management for Digital Ecosystems
Author: Infosys Consulting
Format: PDF 1.7
---
Page: 1 / 16
---
An Infosys Consulting Perspective By Sagar Roongta, Kiran Amin and Thiag Karunanithi Consulting@Infosys.com | InfosysConsultingInsights.com REINVENTING THE CSP PRODUCT LIFECYCLE MANAGEMENT FOR DIGITAL ECOSYSTEMS How to sustainably manage product portfolio complexity in the digital age?
---
Page: 2 / 16
---
CONTENTS
1. Introduction
2. Unified PLM Framework
3. Components of Unified PLM
4. Recommendations for CSPs
5. Infosys PLM Maturity Model
Reinventing the CSP Product Lifecycle Management for the Digital Ecosystems © 2023 Infosys Consulting
---
Page: 3 / 16
---
INTRODUCTION

Generating new sources of revenue and free cash flow are the top priorities for CSP CEOs in 2023. In our regular interactions with Communication Service Providers (CSPs) across Asia Pacific, increasing revenue growth is a key priority for executive leadership. Traditional revenue sources have diminished, while any price increase will prove to be an extremely sensitive subject for price-savvy customers. The COVID-19 pandemic allowed CSPs to emerge unscathed, or at least to rethink how they now commit substantial investments to expand their 5G networks and open additional revenue sources.

As per Gartner's 2023 Board of Directors Survey, 46% of boards wanted to expand into new product lines to create new growth opportunities. In this environment, CSPs have expanded to occupy the role of digital ecosystem gateways, acting as marketplace operators where consumer and enterprise customers can buy bundled service offerings within an enclosed ecosystem of the CSP and its partners. These ecosystem partners range from digital content providers to gaming, fintech, financial services, cybersecurity, insurance, and health-tech companies seeking to access the digital-ready customer portfolio of CSPs. Through these mass-personalized offerings, CSPs could increase customer stickiness and reinforce core business goals. For the end consumer, their CSP becomes not just a connectivity provider but an incumbent one-stop-shop for digital services. Consequently, as per a recent IDC report, one in three CSPs is expected to generate more than 15% of their overall revenue from new digital products and services, compared to one in six in 2020. 1,2

Did You Know? As per a recent IDC report, one in three CSPs is expected to generate more than 15% of their overall revenue from digital products and services, compared to one in six in 2020.

Reinventing the CSP Product Lifecycle Management for the Digital Ecosystems © 2023 Infosys Consulting

How should CSPs manage their product portfolios as they launch a plethora of new 5G, digital and innovative offerings?
---
Page: 4 / 16
---
INTRODUCTION 4 As an ecosystem provider, CSPs would be operating in a dramatically different business model. Newer, open, and more complex offerings must be introduced to markets quickly, which requires more investments and collaboration, while the lifecycle of each product must be better controlled from financial and technological perspectives. However, the existing CSP product portfolio is already overly complex, and the product development process is highly bureaucratic as it currently functions on legacy systems, processes, and historical ways of working. Furthermore, mistakes or shortcomings perceived in the products or product designs reach the market because the company cannot react to market changes quickly enough. The slowness of an end-to-end process means that companies are unable to bring its products to market in rhythm with customers’ wishes, market changes, and set timetables or to collect the greatest possible product margin. Subsequently, CSPs are forced to undergo expensive product rationalization exercises to cull out redundant or non-profitable offerings. Hence, to succeed in the dynamic ecosystem era, CSP would need to reimagine how they develop new products whilst innovating and managing their lifecycles. This requires a deep dive into their existing product lifecycle approach from a piecemeal activity to a next generation product lifecycle management approach. How and at what level of each company conducts its product lifecycle management implementation depends on several factors that can be explained through the Unified PLM Framework.<|endoftext|>For sustainable digital ecosystem success, CSPs need to re-imagine their PLM activities as a strategic initiative Did You Know? According to Bain’s Digital GPS Benchmark, more than half of respondents from telecom industry, said that automation of back-office operations like PLM, as their top digital priority3 Reinventing the CSP Product Lifecycle Management for the Digital Ecosystems © 2023 Infosys Consulting
---
Page: 5 / 16
---
The Unified Product Lifecycle Management (Unified PLM) is a comprehensive approach to implementing product lifecycle strategy in a telecom organization. It integrates a comprehensive PLM strategy, a modular product architecture, an efficient PLM process design and enabling data & technology architecture that saves time, reduces product complexity, and excels in a multi-party environment. It creates a framework to capture insights across lifecycle phases to rapidly create new offerings while having an automated process to right-size unused offerings.4 It starts with setting up a PLM strategy and governance structures that defines the stakeholders critical to implement a product architecture relevant for the digital age which need not be complex but simplified and modularized. Subsequently, the enabling processes and technology are implemented to execute the product portfolio and lifecycle strategy. The unified PLM framework has five key components: Unified PLM is a holistic framework for PMs to save time and reduce costs in a multi-party world UNIFIED PLM FRAMEWORK 5 • Strict process stage gates • IT change & configuration management • Process improvement initiatives • Product retirement process • Process support systems • Decision support system • Product data management and policies • Process efficiency tools • PLM maturity assessment • Product portfolio & PLM alignment • PLM governance framework • Incorporation of product variants • Organizational structure • Roles and responsibilities • Skill & Resources allocated • Multi-party collaboration • Modular marketing product structure • Product rules governance • Modular process design • Reusability of components PLM Strategy PLM Process Excellence Data & Technology Product Design Organization & People Unified PLM Framework Reinventing the CSP Product Lifecycle Management for the Digital Ecosystems © 2023 Infosys Consulting
---
Page: 6 / 16
---
The PLM strategy is the foundational rail on which the organization embarks on transforming its PLM activities. It is an optimal alignment between the need for innovation and marketing priorities with a governance mechanism that effectively addresses the PLM priorities. It allows for seamless synchronization of product development, market management, and retirement processes. It also sets the necessary governance and control mechanisms to detect and mitigate potential threats. However, CSPs should be wary of implementing a one-size- fits-all approach to all products in the portfolio; they should treat various products differently based on their operating model, product complexity and lifecycle behavior.<|endoftext|>For example, a device bundled with mobile plans have a limited shelf-life; hence the PLM Strategy should be agile to respond faster to any type of change while enterprise products have a longer shelf-life to allow for market development and sales cycles. Hence, an effective PLM strategy creates a framework for managing distinct product variants within the strategic priorities. Unified PLM begins from strategy to create the rails on which people & products are organized COMPONENTS OF UNIFIED PLM 6 PLM Strategy 1 PLM Governance 2 Alignment to Product Portfolio Strategy 3 PLM Process Variants PLM Strategy Reinventing the CSP Product Lifecycle Management for the Digital Ecosystems © 2023 Infosys Consulting
---
Page: 7 / 16
---
COMPONENTS OF UNIFIED PLM PLM activities cut across different stakeholders such as product marketing, sales, technology, customer management, business intelligence and finance teams. And in an unstructured environment, these activities are conducted on an ad-hoc basis by individual functions. And consequently, conflicts between stakeholders are quite common. Various case studies have demonstrated that more conflicts lead to a higher probability of final product failure. Hence, for a sustainable PLM success, an organizational structure must |
be present where all the relevant departments & teams can efficiently collaborate and coordinate.<|endoftext|>CSPs have traditionally implemented a divisional structure or a matrix structure where group managers from distinct functions collaborate to develop new products. However, with the number of parties spreading across multiple organizations, new organizational models must be considered. Implementing new structures include offering management squads or agile product teams to independently execute the PLM strategy within their organizations. These squads are accountable for the entire lifecycle of an offering, including validating the need in the marketplace and conducting the impact analysis on engineering, sales, support, and budgeting. In addition, these squads have the mandate to work across business units and disciplines to harness the company’s entire arsenal of talent and knowledge base. These structures could be customized based on process variants and the strategic necessities of the company. 7 Organization & People 1 PLM Organization 2 Responsibility Assignment 3 Employee Empowerment Organization & People Reinventing the CSP Product Lifecycle Management for the Digital Ecosystems © 2023 Infosys Consulting
---
Page: 8 / 16
---
What is modularization? Modularization is an activity of dividing a product or process into logical and interchangeable modules. The objective is to create a flexible system that enables creation of different configurations, while reducing the need to create unique building blocks each time.6 Benefits of a modular system are numerous: • Higher efficiency as modules can be consolidated across different products • Higher agility as changes & modifications can be isolated to specific modules, enabling others to remain unchanged • Higher flexibility as mass customization on individual module can be achieved at scale The product design component aims to enable product component reusability by defining the constraints and rules for decomposing the product functionality into meaningful modules with a coherent product data model. For CSPs, the product structure includes breaking down the product offer and reusing the individual service modules from a market, technical and operational perspective. A modular market perspective includes cross-linkage between aspects like tariff plans, fees, market segments, etc. A technical perspective includes decomposing the product offering into individual product, service, and resource modules. Finally, an operational perspective includes reusing process modules such as fulfillment, assurance, and billing processes independent of the product type. Important advantages of a modular product design include faster time to market, more efficient development, better innovation ability and lesser operational costs as complex systems become easier to manage; parallel activities can operate independently, reusability of existing components, and faster fault localization. The product model defined as per the SID framework by TM Forum is a classic example of a modular product architecture.5 COMPONENTS OF UNIFIED PLM 8 Product Design 1 Modular Marketing Product Design 2 Product Rules Governance 3 Modular Process Design Product Design LOW HIGH HIGH No. of product variants offered to customers Cost efficiency of product portfolio LOW Customization Standardization Modularization Relative cost efficiency with the increase in product variants in the market Reinventing the CSP Product Lifecycle Management for the Digital Ecosystems © 2023 Infosys Consulting
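Modularization, as described above, can be pictured in code as product offers assembled from reusable, interchangeable modules rather than built as unique monoliths. The sketch below models a CSP offer composed from shared product, service, and resource modules; the class names, example modules, and pricing rule are illustrative assumptions and are not taken from the TM Forum SID model itself.

```python
# Illustrative sketch of a modular product structure: offers are assembled from
# reusable modules rather than built as unique monoliths. Names are examples only.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Module:
    name: str
    kind: str          # "product", "service", or "resource"
    monthly_cost: float

@dataclass
class Offer:
    name: str
    modules: list = field(default_factory=list)

    def price(self, margin: float = 0.3) -> float:
        return round(sum(m.monthly_cost for m in self.modules) * (1 + margin), 2)

# Shared building blocks reused across offers.
data_5g = Module("5G data 100GB", "product", 12.0)
voice = Module("Unlimited voice", "product", 4.0)
streaming = Module("Streaming partner bundle", "service", 6.0)
billing = Module("Converged billing", "resource", 1.5)

consumer_offer = Offer("Consumer 5G Max", [data_5g, voice, streaming, billing])
enterprise_offer = Offer("Enterprise 5G Connect", [data_5g, billing])  # reuses modules

print(consumer_offer.name, consumer_offer.price())
print(enterprise_offer.name, enterprise_offer.price())
```

The benefit claimed above follows directly: a change isolated to one module (say, the billing resource) leaves every offer that reuses the other modules untouched.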
---
Page: 9 / 16
---
A typical PLM process includes six phases - ideation, design, development, go-to-market, sales, and finally, retirement. Most CSPs have a well-documented product lifecycle management process of end-to-end activities however, the presence of a process excellence framework is rarer or if one does exist, it needs to be more effectively applied.<|endoftext|>PLM process excellence embeds the continuous improvement in the PLM process, aligning with the strategic PLM goals. It includes unambiguously defining relevant activities, their sequence, data requirements and configuration. In addition, it also defines the roles and responsibilities of the product organization throughout the value chain to ensure proper execution. The critical elements within PLM process excellence includes definition of stage gates that ensure that the product satisfies the minimum criteria to move to next lifecycle phase, standardization of various process variants and a regular retirement process.7 A regular rules-based product retirement process is critical in making the product offerings more targeted and manageable and eliminates the need for product rationalization activities every few years. COMPONENTS OF UNIFIED PLM 9 PLM Process Excellence 1 Continuous Process Improvement PLM Process Excellence 2 Strict Process Stage Gates 3 IT Change & Configuration Management 4 Retirement Management Standard PLM Process Reinventing the CSP Product Lifecycle Management for the Digital Ecosystems © 2023 Infosys Consulting
---
Page: 10 / 16
---
The objective of Data & Technology component is to provide frameworks that increase the efficiency of the PLM process execution while making it more efficacious. Informed by the product design and PLM process components, it helps in implementing these components in a business environment. The product data management forms the backbone for managing and controlling the lifecycle of a product. It starts from the market research and business planning data to the subsequent product performance and eventual retirement justification. A well-constructed PDM framework enables all stakeholders to capture, communicate and disseminate all heterogeneous data throughout its lifecycle. It helps in speeding up product development, reduce errors and increase efficiency of resources. The technology component includes two key elements – process support systems and decision support systems. Process support systems include RPA or workflow management system for automated process implementation, decision gate evaluation, process compliance and automated triggers. Decision support systems include machine learning and AI-based tools that improve the decision-making capabilities to identify new product opportunities and proactively retire products that are underperforming. COMPONENTS OF UNIFIED PLM 10 Data & Technology 1 Process Support Systems 2 Decision Support Systems 3 Product Data Management Data & Technology Reinventing the CSP Product Lifecycle Management for the Digital Ecosystems © 2023 Infosys Consulting
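As a simple illustration of the decision-support idea above, a rules-based trigger can flag products whose recent performance falls below agreed thresholds so they enter the retirement workflow automatically. The thresholds, KPI record format, and three-month window below are assumptions chosen for the example, not Infosys-defined rules.

```python
# Illustrative rules-based retirement trigger over monthly product KPIs.
# Thresholds and the KPI record format are assumptions for this sketch.
MIN_ACTIVE_SUBSCRIBERS = 500
MIN_MONTHLY_MARGIN = 10_000.0
CONSECUTIVE_MONTHS = 3

def flag_for_retirement(kpi_history: list[dict]) -> bool:
    """Flag a product if it underperforms for N consecutive recent months."""
    recent = kpi_history[-CONSECUTIVE_MONTHS:]
    if len(recent) < CONSECUTIVE_MONTHS:
        return False
    return all(
        m["active_subscribers"] < MIN_ACTIVE_SUBSCRIBERS or m["margin"] < MIN_MONTHLY_MARGIN
        for m in recent
    )

portfolio = {
    "Legacy DSL Bundle": [
        {"month": "2023-07", "active_subscribers": 420, "margin": 8_200.0},
        {"month": "2023-08", "active_subscribers": 390, "margin": 7_900.0},
        {"month": "2023-09", "active_subscribers": 350, "margin": 7_100.0},
    ],
    "5G Gaming Pack": [
        {"month": "2023-07", "active_subscribers": 9_100, "margin": 64_000.0},
        {"month": "2023-08", "active_subscribers": 9_600, "margin": 70_500.0},
        {"month": "2023-09", "active_subscribers": 10_200, "margin": 78_000.0},
    ],
}

for product, history in portfolio.items():
    if flag_for_retirement(history):
        print(f"{product}: candidate for the retirement workflow")
```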
---
Page: 11 / 16
---
More than ever, customers are expecting more for less. To manage now and future needs, CSPs must understand their own core capabilities and have the expertise, agility, and flexibility to react to meet demand and ultimately remain relevant. Developing a strategy that balances internal barriers and constraints, whilst integrating organization goals and vision is by no means an easy task. This coupled with external competition has intensified the entire PLM playing field, thus essential for CSPs to understand the ‘real’ opportunity within that if managed and executed well, will lead to create value across the board.<|endoftext|>CSPs across the world have approached their PLM initiatives that vary on a broad spectrum of parameters from ad hoc manual activities to the use of AI & machine learning algorithms to recommend product lifecycle actions. The effectiveness of these initiatives naturally would be determined against internal as well as external factors. However, implementation of any PLM project requires an extensive change in intra and inter-organization processes, new types of skills and capabilities, and more than that, an organization wide cultural and strategic transformation. Hence, any PLM initiative would require strategic commitment and resources. 8 As a first step towards that initiative, a maturity model can help CSPs to assess as-is while giving a guided path to advance their PLM capabilities for the future. Evaluate your as-is PLM capabilities by determining your organization’s PLM maturity RECOMMENDATIONS FOR CSPs 11 Reinventing the CSP Product Lifecycle Management for the Digital Ecosystems © 2023 Infosys Consulting
---
Page: 12 / 16
---
PLM Maturity Model Infosys Consulting has developed the PLM Maturity Model based on industry research, best practices and external trends to assess the relative CSP performance on PLM. It is a scientific method to rate the performance of individual component against the best-in- class industry standards and identify opportunities for improvement. It considers not only existing PLM initiatives but also the relevant market trends, customer readiness and competitive profile to rate the performance of a CSP on their maturity state Infosys’s PLM Maturity Model categorizes CSPs into 5 maturity levels: 1.<|endoftext|>Ad-hoc: Ad-hoc is the preliminary maturity state where there is no evidence of a PLM strategy and vision. This stage is often characterized by inconsistent processes, monolithic product structure and absence of enabling process and technological framework. The lifecycle activities often are executed on a case-by-case basis by individual functions in the organization for a specific need. The PLM maturity model assesses relative performance to recommend next course of action INFOSYS PLM MATURITY MODEL |
12 Infosys PLM Maturity Model Reinventing the CSP Product Lifecycle Management for the Digital Ecosystems © 2023 Infosys Consulting
---
Page: 13 / 16
---
INFOSYS PLM MATURITY MODEL 13 2. Structured: In a structured state, a high- level PLM strategy and governance is present which defines PLM objectives and participating stakeholders. This structured state incorporates basic process and governance framework to increase efficiency with a singular focus on reducing time to market. 3.<|endoftext|>Integrated: An integrated maturity state increases the PLM coverage with different product variants and parties included in the PLM activities. Considering disparate product variants, integrated state CSPs are able to modularize their product architecture that is scalable across different product offerings and operating models.<|endoftext|>4. Automated: In an automated maturity state, CSPs incorporate multiple systems and tools to automate their PLM implementation. This includes use of a centralized product data management capability and use of process support systems to track and manage product lifecycle stage. 5. Adaptive: CSPs in the adaptive maturity state, increasingly use AI and Machine Learning algorithms to analyze and predict the performance of the marketed products. AI-based tools help CSPs to analyze customer behavior to recommend product offering ideas and process automation opportunities for better PLM outcomes. Common Pitfalls On their path to achieving PLM transformation, CSPs need to avoid certain common pitfalls: • Lack of executive commitment - PLM cannot be treated as a siloed initiative, its sponsorship must have the right organizational backing.<|endoftext|>• Fragmented governance - where PLM is not defined or limited with no real controls.<|endoftext|>• Limited process standardization across departments, making it more a free for all rather than centralized and cohesive • Lack of empowerment and resources allocated to the PLM organization to implement change • Visibility of product data – to support decision making and product performance • Inability to utilize new emerging technologies – use newer technologies to support PLM process e.g., advanced data analytical tools, AI, ML etc. • Proliferation of products –With the improvement in product go-to-market timelines, avoid creating new superfluous offerings Although daunting, it is not an impossible task to move away from more traditional CSP behaviors, the key to progression and moving forward is to first understand where you currently are and how incrementally you can move in the right direction.<|endoftext|>Hence, it is now very important for CSPs to consider self-maturity assessment. Reinventing the CSP Product Lifecycle Management for the Digital Ecosystems © 2023 Infosys Consulting
---
Page: 14 / 16
---
Take the Maturity Health Check NOW
The Infosys Consulting maturity health check will allow CSPs to:
• Evaluate PLM health and performance against best-in-class operationalized processes and technology implementations
• Quickly identify and pinpoint the areas with the biggest improvements and business gain
• Execute a plan for how to take the current as-is state and “upgrade” to the next maturity state
• Use a scientific approach to objectively assess performance against company-wide PLM objectives
• Implement a PLM monitoring framework to assess different components on a continual basis

INFOSYS PLM MATURITY MODEL

MEET THE EXPERTS / AUTHORS
SAGAR ROONGTA, Consultant, Singapore, +65 8264 6036, Sagar.Roongta@infosysconsulting.com
KIRAN AMIN, Senior Principal, Singapore, +65 9742 7657, Kiran.Amin@infosysconsulting.com
THIAG KARUNANITHI, Associate Partner, Australia, +61 4064 2736 0, Thiag.Karunanithi@infosys.com

Reinventing the CSP Product Lifecycle Management for the Digital Ecosystems © 2023 Infosys Consulting
---
Page: 15 / 16
---
REFERENCES
1. https://www.gartner.com/en/articles/see-the-key-findings-from-the-gartner-2023-board-of-directors-survey
2. https://www.idc.com/getdoc.jsp?containerId=prAP49619722
3. https://www.bain.com/insights/digital-transformation-what-matters-most-in-your-sector-interactive/
4. https://www.researchgate.net/publication/204100092_Next_Generation_Telco_Product_Lifecycle_Management_-_How_to_Overcome_Complexity_in_Product_Management_by_Implementing_Best-Practice_PLM
5. https://www.tmforum.org/oda/information-systems/information-framework-sid/
6. https://www.modularmanagement.com/blog/all-you-need-to-know-about-modularization
7. https://www.productfocus.com/product-management-resources/infographics/product-management-lifecycle/
8. https://link.springer.com/article/10.1007/s00170-013-5529-1

Reinventing the CSP Product Lifecycle Management for the Digital Ecosystems © 2023 Infosys Consulting
---
Page: 16 / 16
---
consulting@Infosys.com InfosysConsultingInsights.com LinkedIn: /company/infosysconsulting Twitter: @infosysconsltng About Infosys Consulting Infosys Consulting is a global management consulting firm helping some of the world’s most recognizable brands transform and innovate. Our consultants are industry experts that lead complex change agendas driven by disruptive technology. With offices in 20 countries and backed by the power of the global Infosys brand, our teams help the C-suite navigate today’s digital landscape to win market share and create shareholder value for lasting competitive advantage. To see our ideas in action, or to join a new type of consulting firm, visit us at www.InfosysConsultingInsights.com. For more information, contact consulting@infosys.com © 2023 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names, and other such intellectual property rights mentioned in this document. Except as expressly permitted, neither this document nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printed, photocopied, recorded or otherwise, without the prior permission of Infosys Limited and/or any named intellectual property rights holders under this document.
***
# Infosys Whitepaper
Title: The right approach to testing interoperability of healthcare APIs under FHIR
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 8
---
VIEW POINT THE RIGHT APPROACH TO TESTING INTEROPERABILITY OF HEALTHCARE APIs UNDER FHIR
---
Page: 2 / 8
---
External Document © 2020 Infosys Limited External Document © 2020 Infosys Limited Abstract Historically, the lack of mutual data exchange on patient health between entities in the healthcare industry has impaired the quality of patient care. This has led to poor health outcomes and increased costs for patients. The Trump administration’s MyHealthEData initiative has the stated objective of placing the patient at the center of the US healthcare system. The initiative aims to promote interoperability of patient health data between entities using latest technologies such as cloud and APIs.<|endoftext|>This paper outlines the test approach to ensure compliance with the CMS (Centers for Medicare and Medicaid Services) payor policies mandate effective July 2021. The paper explains 3 simple steps for interoperability of cloud/on- premises healthcare APIs between payors and providers as per Fast Healthcare Interoperability Resources (FHIR) guidelines using the Infosys FHIR testing solution. This solution has been built on the proven Infosys Interoperability Test Automation Framework that leverages open-source tools, cloud components, and test servers.<|endoftext|>
---
Page: 3 / 8
---
External Document © 2020 Infosys Limited External Document © 2020 Infosys Limited Introduction All healthcare payors, providers, and stakeholders need to ensure that their systems are truly interoperable based on Fast Healthcare Interoperability Resources (FHIR) guidelines. Organizations are seeking to minimize the manual effort involved in ensuring data integrity and real-world testing as per these guidelines. The need of the hour is test accelerators that can seamlessly integrate with FHIR cloud/ on-premises servers to run automated conformance and data validation tests.<|endoftext|>Fragmented data consolidation Currently, organizations have non-standard data consolidation architecture due to the nature of their source systems. To be CMS (Centers for Medicare and Medicaid Services) certified, organizations must implement FHIR-compliant microservices- based architecture for API enablement including authentication and security policies. This adds additional complexities for testers to validate data accuracy and performance at the FHIR layer (cloud/ on premise) in addition to specification conformance validation.<|endoftext|>Any test strategy to achieve FHIR compliance must be carefully structured to mitigate all known challenges. To begin with, organizations must finalize their API end-to-end operating model, cloud/ on-premises implementation strategy, consent management, and intermediate data aggregation strategy so that we understand the various stages that need FHIR testing. Key challenges Adoption and interpretation of FHIR standards FHIR standards have base rules, constraints, and other metadata that make them complex to interpret and derive conformance scenarios. FHIR regulated payors and providers will be required to implement FHIR compatible cloud/on-premises based healthcare APIs considering different market segments like Medicare, Medicaid, and any other state-specific inclusions. They also need to enable various FHIR resources such as diagnostics, medication, care provision, billing, payments, and coverage to be accurate and accessible as per the standard SLAs for patient and provider.<|endoftext|>Real world testing This is a gray area at this initial stage because of the lack of a SMART application (a user-facing application that connects to payors or providers for a patient’s health records). This limits the process of simulating and validating complete end- to-end testing for FHIR conformance. A payor or provider must qualify themselves as FHIR compliant both internally and externally through acceptance from consumers. To achieve acceptance, a payor needs to conduct multiple tests to cover data integrity, conformance, and consumer registration and consent as per FHIR guidelines. Tests need to be conducted between peer-to-peer servers, client to server, and from a standalone server to a proven FHIR-compatible test framework.<|endoftext|>Infosys 3-step process for interoperability testing Infosys recommends these 3 simple steps to achieve interoperability of cloud/on- premises healthcare APIs between payors and providers as per FHIR guidelines Step 1 – Functional/non- functional testing The primary focus areas of functional testing are: • Conformance (structure and behavior) validation based on FHIR specifications • Data validation of cloud/on premise- based healthcare APIs through the original source or from intermediate data aggregation or virtualization. 
If an organization considers any data aggregation before FHIR mapping, then additional testing around data quality should be done to ensure source data is aggregated correctly at the FHIR layer Non-functional testing such as performance and security testing are equally important given that APIs are exposed externally using OAuth2.0 or OpenID connect protocols.<|endoftext|>
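To make the Step 1 conformance checks more concrete, the sketch below exercises two standard FHIR REST operations against a server base URL: fetching the CapabilityStatement from /metadata and asking the server to validate a sample Patient resource via $validate. It is a generic illustration of specification-conformance testing, not the Infosys FHIR Testing Solution itself; the base URL is a placeholder assumption.

```python
# Generic FHIR conformance-check sketch (not the Infosys solution itself).
# FHIR_BASE is a placeholder; point it at a FHIR R4 test server you control.
import requests

FHIR_BASE = "https://fhir-server.example.com/baseR4"  # assumed placeholder

sample_patient = {
    "resourceType": "Patient",
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "gender": "female",
    "birthDate": "1980-01-01",
}

# 1. Capability check: the server's CapabilityStatement should be discoverable.
meta = requests.get(f"{FHIR_BASE}/metadata",
                    headers={"Accept": "application/fhir+json"}, timeout=30)
meta.raise_for_status()
assert meta.json().get("resourceType") == "CapabilityStatement"

# 2. Conformance check: ask the server to validate the resource ($validate
#    returns an OperationOutcome listing any structural/profile issues).
resp = requests.post(
    f"{FHIR_BASE}/Patient/$validate",
    json=sample_patient,
    headers={"Content-Type": "application/fhir+json"},
    timeout=30,
)
outcome = resp.json()
issues = [i for i in outcome.get("issue", []) if i.get("severity") in ("error", "fatal")]
print("validation errors:" if issues else "resource conforms", issues)
```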
---
Page: 4 / 8
---
External Document © 2020 Infosys Limited External Document © 2020 Infosys Limited Step 2 – Regulatory compliance testing This validation should ensure payor or provider complies with the guidelines defined by CMS and FHIR. The primary focus of regulatory compliance testing is on features mandated by HealthIT standards which includes self- discoverability, capability statement, authentication/authorization, FHIR conformance and so on. Every payor must undergo this testing so that they can pass the CMS certification within the given deadline.<|endoftext|>
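Step 2's focus on self-discoverability and authentication features can be probed with the discovery endpoints defined by the FHIR and SMART App Launch specifications. The sketch below checks that the SMART configuration document advertises the expected OAuth 2.0 endpoints; the base URL is again a placeholder, and the exact set of required capabilities should be taken from the applicable CMS/HealthIT criteria rather than this example.

```python
# Sketch of a discoverability/authorization check per the SMART App Launch spec.
# FHIR_BASE is a placeholder; required capabilities come from the applicable rules.
import requests

FHIR_BASE = "https://fhir-server.example.com/baseR4"  # assumed placeholder

config = requests.get(
    f"{FHIR_BASE}/.well-known/smart-configuration",
    headers={"Accept": "application/json"},
    timeout=30,
)
config.raise_for_status()
smart = config.json()

# The SMART configuration should expose the OAuth 2.0 endpoints used by patient apps.
for key in ("authorization_endpoint", "token_endpoint", "capabilities"):
    assert key in smart, f"missing {key} in smart-configuration"

print("authorize:", smart["authorization_endpoint"])
print("token:    ", smart["token_endpoint"])
print("advertises standalone launch:",
      "launch-standalone" in smart.get("capabilities", []))
```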
---
Page: 5 / 8
---
External Document © 2020 Infosys Limited

Step 3 – End-user beta testing
There are several third-party healthcare applications (cloud/on-premises) under development by Apple, Amazon, Google, and other payors/vendors that are registered in their respective developer portals. Our strategy involves partnering with these vendors and using their beta applications to integrate with payor FHIR servers. This gives payors early confidence in terms of usability and conformance with regulation.

Figure 1 below depicts an end-to-end automated, test-driven development flow of the Infosys Interoperability Test Automation Framework covering the 3-step testing procedure. The framework leverages the Infosys FHIR Testing Solution, open-source tools, cloud components, and test servers to achieve FHIR compliance seamlessly across various healthcare entities within a short time to market.

Figure 1: Infosys Interoperability Test Automation Framework
---
Page: 6 / 8
---
External Document © 2020 Infosys Limited External Document © 2020 Infosys Limited The road ahead Today, organizations are focused on getting their FHIR-compliant cloud/on- premises based patient access API and provider directory API up and running. However, organizations must also think of other policies specified in the CMS final rules. These include payor to payor data sharing, improving user experience for the beneficiary, and admission/discharge/ transfer event notifications. With such requirements in mind, payors and vendors need to think about developing a scalable test automation framework that not only works between payors and consumers but also across payors. Infosys is enhancing the capabilities of its FHIR testing solution so that it is future ready.<|endoftext|>
---
Page: 7 / 8
---
External Document © 2020 Infosys Limited Conclusion Establishing end-to-end testing for FHIR guidelines compliance across healthcare payors and third-party vendors comes with its own challenges and complexities. The only way forward is based on the final CMS interoperability rule. A solution that ensures fast and effective delivery of FHIR rules to consumers should focus on two key aspects: • Building configurable FHIR accelerators with the ability to support cloud/on- premises FHIR servers • Identifying FHIR-compliant consumers for continuous testing practices The end goal is to build a test approach that can be scaled rapidly based on the rate of increase in FHIR adoption by payors, providers and consumers.<|endoftext|>External Document © 2020 Infosys Limited
---
Page: 8 / 8
---
© 2020 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, |
or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected https://www.cms.gov/Regulations-and-Guidance/Guidance/Interoperability/index https://inferno.healthit.gov/inferno/ https://www.aegis.net/touchstone.html https://projectcrucible.org/ https://www.logicahealth.org/solutions/fhir-sandbox/ https://fhir.cerner.com/smart/ https://apievangelist.com/2019/09/18/creating-a-postman-collection-for-the-fast-healthcare-interoperability-resources-fhir-specification/ https://hapifhir.io/hapi-fhir/docs/validation/introduction.html About the
Author Amit Kumar Nanda, Group Project Manager, Infosys References:
***
# Infosys Whitepaper
Title: Test automation framework – how to choose the right one for digital transformation?
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 4
---
VIEW POINT TEST AUTOMATION FRAMEWORK – HOW TO CHOOSE THE RIGHT ONE FOR DIGITAL TRANSFORMATION?
---
Page: 2 / 4
---
What is a test automation framework and what are its different types? A test automation framework is a combination of guidelines, coding standards, concepts, practices, processes, project hierarchies, reporting mechanisms, and test data that together support automation testing. A tester follows these guidelines while automating applications to realize the full benefits of automation.<|endoftext|>Introduction Digitalization and the disruption caused by the adoption of digital technologies are rapidly changing the world. Speed matters in all IT operations, and this calls for a paradigm shift in quality assurance (QA). Quality at high speed is the key focus in digital assurance, and organizations want to deliver quality products much faster than ever before. This is making QA teams bank on test automation. From the initial automation of regression tests, the industry is moving towards progressive automation and day-one automation, while extreme automation and zero-touch automation are the buzzwords in the QA world today. Many advancements have evolved in the area of automation testing. However, it is critical that organizations choose the right automation framework, which is a decisive factor for success. In this document, we explore the different types of automation frameworks and how to choose the one that helps achieve an organization's digital assurance goals.<|endoftext|>There are many types of test automation frameworks available in the market, and the most popular ones are: Linear, Functional Decomposition/Modular, Data Driven, Keyword Driven, Hybrid, and BDD. Each of these frameworks has its own characteristics and features. Let us now examine some of the popular frameworks and understand their pros, cons, and usability recommendations.<|endoftext|>Keyword-driven framework In the keyword-driven framework, testers create various keywords and associate different actions or functions with each of these keywords. The function library contains the logic to read the keywords and call and perform the associated actions. Generally, test scenarios are written in Excel sheets; the driver script reads the scenario and performs the test execution (a minimal driver sketch is shown below). This framework is used in situations where the testers who create test scripts have less programming expertise, whereas framework creation is done by automation experts.<|endoftext|>Hybrid framework The hybrid automation framework is created by combining distinct features of two or more frameworks. This enhances the strengths of the different frameworks and mitigates their weaknesses. It is highly robust, flexible, and more maintainable. However, it requires strong technical expertise to design and maintain.<|endoftext|>Behavior-driven development framework The behavior-driven development (BDD) framework automates validations in a format that is easily readable and understandable to business analysts, developers, testers, and others. Such frameworks do not necessarily require the user to be acquainted with any programming language. Different tools are available for BDD, like Cucumber and JBehave, which work along with other test automation tools. This framework is more suitable for applications using agile methodology where user stories and early automation are required. It focuses on the behavior of the system rather than its implementation, traceability between requirements and scripts is maintained throughout, and test scripts are easy for business users to understand.<|endoftext|>Pillars of the right framework for the digital era Automation can improve quality and lead to higher testing efficiency. Hence, it is important to plan it well and make the right choice of tools and frameworks. When test automation uses the right framework based on the context, it yields great benefits. Hence, it is worth understanding the key requirements of the framework before choosing the right one.<|endoftext|>External Document © 2018 Infosys Limited
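The sketch below illustrates the keyword-driven idea described above: a small action library maintained by automation experts and a driver script that reads keyword rows and executes them. The CSV columns, keywords and Selenium usage are assumptions; the paper describes Excel sheets and leaves the underlying UI tool open.

```python
import csv
from selenium import webdriver  # assumed UI automation tool; any driver library could be used

# Action library maintained by automation experts: one function per keyword.
def open_url(driver, url, _=None):
    driver.get(url)

def enter_text(driver, locator, value):
    driver.find_element("id", locator).send_keys(value)

def click(driver, locator, _=None):
    driver.find_element("id", locator).click()

KEYWORDS = {"open_url": open_url, "enter_text": enter_text, "click": click}

def run_scenario(csv_path):
    """Driver script: reads keyword, target, value columns and executes each step."""
    driver = webdriver.Chrome()
    try:
        with open(csv_path, newline="") as f:
            for step in csv.DictReader(f):   # assumed columns: keyword, target, value
                KEYWORDS[step["keyword"]](driver, step.get("target"), step.get("value"))
    finally:
        driver.quit()
```

Testers only author rows such as `open_url,,https://example.com` or `enter_text,username,jdoe`, which keeps scripting skills with the framework team.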
---
Page: 3 / 4
---
Some key aspects of an automation framework to look for during the digital assurance journey are provided below: Extreme automation Digital transformation programs, big data, cloud, and mobility are changing the way testing is being done. Leaders in testing are moving towards extreme automation to achieve a faster time to market. Extreme automation is the key, and automating every part of the testing process instead of just regression is crucial now. A framework that is scalable and facilitates lifecycle automation as well as broader test coverage is needed for digital assurance programs.<|endoftext|>Technology and tool agnostic approach The landscape of tools in QA is becoming wider day by day. There are too many tools and frameworks, which poses many integration challenges. Hence, it is imperative to choose a framework that is technology and tool agnostic and supports various tools and technologies. The framework needs to address enterprise-level automation strategy and goals instead of catering to just a single project's goals.<|endoftext|>Scriptless capabilities Automate the automation, and look out for scriptless automation avenues. Most software testers and business users find it challenging to learn programming languages such as Java, Visual Basic, etc. well enough to write the scripts that test automation demands. Frameworks and accelerators are available with user-friendly graphical user interfaces (GUI) that help create automation scripts much more easily than writing code in a specific programming language. Choosing a framework that can create a test script from a recorded script or from the input in a spreadsheet will accelerate automation and reduce dependency on skilled resources.<|endoftext|>True shift left attitude Digitalization and frequent releases call for day-one automation. Gone are the days when the automation team would wait until the application was built and start automation activities thereafter. The need of the hour is to shift extreme left and start automation during the requirement gathering phase of the systems development life cycle (SDLC) itself. An automation framework with an exhaustive reusable library and support for BDD will help both business users and QA teams start automation activities early in the life cycle.<|endoftext|>Omnichannel, mobility, and cloud features Organizations today are focusing more on digital assurance, so it is important to test real user behavior across multiple devices and platforms such as mobiles and tablets. Hence, the chosen test automation framework needs to facilitate testing on multiple devices to ensure a uniform experience across them. If the framework allows scripts used for web or desktop testing to be reused for mobile testing with minimal rework, it will save significant effort. When addressing the multifaceted needs of mobile testing, conducting comprehensive testing across hundreds of different devices, brands, models, and operating system combinations is tedious. A framework that facilitates integration with cloud infrastructure will be an added advantage.<|endoftext|>External Document © 2018 Infosys Limited
---
Page: 4 / 4
---
Zero touch automation As DevOps steadily takes over the IT landscape, it is vital to reduce the distance between development and deployment. Test scripts need to be executed in an unattended manner without requiring much manual intervention. Remote execution, parallel execution, zero-touch execution, and execution from continuous integration tools like Jenkins and Hudson, when supported by the automation framework, help greatly in managing multiple sprints and shorter cycles. Seamless integration With a plethora of tools being used across the application development and testing landscape, it is important that the automation tool and framework chosen facilitate integration with these tools. Hence, it is imperative that the chosen automation framework and tool integrate with the test management, defect tracking, build, analytics, and continuous integration tools in the landscape.<|endoftext|>User-friendly reporting Agile and DevOps have brought the business, development and QA teams together. The ability to run a high volume of tests is of little use if the results are not easy for the various stakeholders to understand. The framework has to facilitate automatic generation of test execution reports and show the results in an easy-to-read format. Though most market tools offer some reporting options, these are often neither self-explanatory nor adequate. Hence, a framework with good reporting capabilities such as HTML reports, a live execution dashboard, screenshots in case of failures, and video recording of the execution will be very helpful. An automation framework that facilitates detailed test result reporting considerably reduces the overall effort.<|endoftext|>About the author Indumathi Devi, a project manager with Infosys, has 13+ years of experience in software testing. She has effectively executed a multitude of automation projects and designed and developed automation frameworks. Using her strong working knowledge of multiple test
automation tools, including open source and commercial ones, Indu has worked with numerous clients in implementing robust test automation solutions.<|endoftext|>Conclusion No one size fits all. This perfectly holds true when it comes to framework selection. Since every project is unique, the challenges, duration, and tools choices may vary. Organizations seeking agility in their business processes need to onboard robust test automation solutions that ensure superior software quality. Successful test automation frameworks for digital assurance are the ones which support extreme automation, omnichannel testing, zero touch execution of test scripts, and have some or all of the key aspects detailed above. We recommend that organizations select an automation framework that can lead to smarter automation, better overall results, productivity benefits, and cost efficiencies in the highly dynamic digital landscape.<|endoftext|>© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
***
# Infosys Whitepaper
Title: The right testing strategy for AI systems - An Infosys viewpoint
Author: Infosys Limited
Format: PDF 1.6
---
Page: 1 / 8
---
PERSPECTIVE THE RIGHT TESTING STRATEGY FOR AI SYSTEMS AN INFOSYS VIEWPOINT VENKATESH IYENGAR, AVP - Group Practice Engagement Manager, Infosys SUNDARESA SUBRAMANIAN G, Practice Engagement Manager, Infosys
---
Page: 2 / 8
---
External Document © 2018 Infosys Limited Abstract Over the years, organizations have invested significantly in optimizing their testing processes to ensure continuous releases of high-quality software. When it comes to artificial intelligence, however, testing is more challenging owing to the complexity of AI. Thus, organizations need a different approach to test their AI frameworks and systems to ensure that these meet the desired goals. This paper examines some key failure points in AI frameworks. It also outlines how these failures can be avoided using four main use cases that are critical to ensuring a well-functioning AI system.<|endoftext|>The hierarchy of evolution of AI Introduction Experts in nearly every field are in a race to discover how to replicate brain functions – wholly or partially. In fact, by 2025, the value of the artificial intelligence (AI) market will surpass US $100 billion [1]. For corporate organizations, investments in AI are made with the goal of amplifying human potential, improving efficiency and optimizing processes. However, it is important to be aware that AI too is prone to error owing to its complexity. Let us first understand what makes AI systems different from traditional software systems: 1. Features – Software is deterministic, i.e., it is pre-programmed to provide a specific output for a given set of inputs; artificial intelligence/machine learning (AI/ML) is non-deterministic, i.e., the algorithm can behave differently across runs 2. Accuracy – Software accuracy depends on the skill of the programmer, and the software is deemed successful if it produces output in accordance with its design; the accuracy of AI algorithms depends on the training set and data inputs 3. Programming – Software functions are designed with if-then constructs and loops to convert input data to output data; for AI, different input and output combinations are fed to the machine, based on which it learns and defines the function 4. Errors – When software encounters an error, remediation depends on human intelligence or a coded exit function; AI systems have self-healing capabilities whereby they resume operations after handling exceptions/errors Fig 1: Hierarchy of AI’s evolution and techniques – Data sources (dynamic or static sources like text, image, speech, sensor, video, and touch), Input data conditioning (big data stores and data lakes), Machine learning and analytics (cognitive learning/algorithms), Visualization (custom apps, connected devices, web, and bots), and Feedback (from sensors, devices, apps, and systems)
---
Page: 3 / 8
---
External Document © 2018 Infosys Limited The above figure shows the sequential stages of AI algorithms. While each stage is necessary for successful AI programs, there are some typical failure points within each stage. These must be carefully identified using the right testing technique, as shown below: 1. Data sources – Dynamic or static sources. Typical failure points: issues of correctness, completeness and appropriateness of source data quality and formatting; variety and velocity of dynamic data resulting in errors; heterogeneous data sources. Detected in testing through: automated data quality checks, the ability to handle heterogeneous data during comparison, data transformation testing, and sampling and aggregation strategies 2. Input data conditioning – Big data stores and data lakes. Typical failure points: incorrect data load rules and data duplicates; data node partition failures; truncated data and data drops. Detected in testing through: data ingestion testing, knowledge of the development model and code, understanding the data needed for testing, and the ability to subset and create test data sets 3. ML and analytics – Cognitive learning/algorithms. Typical failure points: determining how data is split for training and testing; out-of-sample errors like new behavior in previously unseen data sets; failure to understand data relationships between entities and tables. Detected in testing through: algorithm testing, system testing, and regression testing 4. Visualization – Custom apps, connected devices, web, and bots. Typical failure points: incorrectly coded rules in custom applications resulting in data issues; formatting and data reconciliation issues between reports and the back-end; communication failures in middleware systems/APIs resulting in disconnected data communication and visualization. Detected in testing through: API testing, end-to-end functional testing and automation, testing of analytical models, and reconciliation with development models 5. Feedback – From sensors, devices, apps, and systems. Typical failure points: incorrectly coded rules in custom applications resulting in data issues; propagation of false positives at the feedback stage resulting in incorrect predictions. Detected in testing through: optical character recognition (OCR) testing; speech, image and natural language processing (NLP) testing; RPA testing; and chatbot testing frameworks The right testing strategy for AI systems Given that there are several failure points, the test strategy for any AI system must be carefully structured to mitigate the risk of failure. To begin with, organizations must first understand the various stages in an AI framework as shown in Fig 1. With this understanding, they can define a comprehensive test strategy with specific testing techniques across the entire framework. Here are four key AI use cases that must be tested to ensure proper AI system functioning: • Testing standalone cognitive features such as natural language processing (NLP), speech recognition, image recognition, and optical character recognition (OCR) • Testing AI platforms such as IBM Watson, Infosys NIA, Azure Machine Learning Studio, Microsoft Oxford, and Google DeepMind • Testing ML-based analytical models • Testing AI-powered solutions such as virtual assistants and robotic process automation (RPA)
---
Page: 4 / 8
---
External Document © 2018 Infosys Limited Use case 1: Testing standalone cognitive features Natural language processing (NLP) • Test for ‘precision’ of keyword return, i.e., the fraction of relevant instances among the total instances retrieved by the NLP engine • Test for ‘recall’, i.e., the fraction of relevant instances that are actually retrieved out of the total number of relevant instances available • Test for true positives (TPs), true negatives (TNs), false positives (FPs), and false negatives (FNs). Ensure that FPs and FNs are within the defined error/fallout range Speech recognition inputs • Conduct basic testing of the speech recognition software to see if the system recognizes speech inputs • Test for pattern recognition to determine if the system can identify when a unique phrase is repeated several times in a known accent and whether it can identify the same phrase when it is repeated in a different accent • Test deep learning, i.e., the ability to differentiate between ‘New York’ and ‘Newark’ • Test how speech translates to response. For example, a query of “Find me a place I can drink coffee” should not generate a response with coffee shops and driving directions. Instead, it should point to a public place or park where one can enjoy his/her coffee Image recognition • Test the image recognition algorithm through basic forms and features • Test supervised learning by distorting or blurring the image to determine the extent of recognition by the algorithm • Test pattern recognition by replacing cartoons with the real image, like showing a real dog instead of a cartoon dog • Test deep learning using scenarios to see if the system can find a portion of an object in a larger image canvas and complete a specific action Optical character recognition • Test OCR and optical word recognition (OWR) basics by using character or word inputs for the system to recognize • Test supervised learning to see if the system can recognize characters or words from printed, written or cursive scripts • Test deep learning, i.e., whether the system can recognize characters or words from skewed, speckled or binarized (converted to black and white) documents • Test constrained outputs by introducing a new word in a document that already has a defined lexicon of permitted words Use case 2: Testing AI platforms Testing any platform that hosts an AI
framework is complex. Typically, it follows many of the steps used during functional testing. Data source and conditioning testing • Verify the quality of data from various systems – data correctness, completeness and appropriateness along with format checks, data lineage checks and pattern analysis • Verify transformation rules and logic applied on raw data to get the desired output format. The testing methodology/automation framework should function irrespective of the nature of data – tables, flat files or big data • Verify that the output queries or programs provide the intended data output • Test for positive and negative scenarios Algorithm testing • Split input data for learning and for the algorithm • If the algorithm uses ambiguous datasets, i.e., the output for a single input is not known, the software should be tested by feeding a set of inputs and checking if the output is related. Such relationships must be soundly established to ensure that algorithms do not have defects • Check the cumulative accuracy of hits (TPs and TNs) over misses (FPs and FNs) API integration • Verify input request and response from each application programming interface (API) • Verify request response pairs • Test communication between components – input and response returned as well as response format and correctness • Conduct integration testing of API and algorithms and verify reconciliation/visualization of output System/regression testing • Conduct end-to-end implementation testing for specific use cases, i.e., provide an input, verify data ingestion and quality, test the algorithms, verify communication through the API layer, and reconcile the final output on the data visualization platform with expected output • Check for system security, i.e., static and dynamic security testing • Conduct user interface and regression testing of the systems
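The precision, recall and cumulative-accuracy checks referred to in use case 1 and in algorithm testing above reduce to simple arithmetic over the confusion counts. The sketch below shows the computation; the example counts are illustrative only.

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Compute the measures discussed above from confusion-matrix counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0   # relevant among retrieved
    recall    = tp / (tp + fn) if (tp + fn) else 0.0   # retrieved among relevant
    accuracy  = (tp + tn) / (tp + tn + fp + fn)        # cumulative hits over all cases
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "accuracy": accuracy, "f1": f1}

# Illustrative run: 90 TPs, 880 TNs, 10 FPs and 20 FNs from an NLP keyword-extraction test
metrics = classification_metrics(tp=90, tn=880, fp=10, fn=20)
assert metrics["precision"] == 0.9                 # 90 / (90 + 10)
assert abs(metrics["recall"] - 90 / 110) < 1e-9    # 90 / (90 + 20)
```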
---
Page: 5 / 8
---
External Document © 2018 Infosys Limited Use case 3: Testing ML-based analytical models Organizations build analytical models for three main purposes, as shown in Fig 2 (Fig 2: Types and purposes of analytical models). The validation strategy used while testing an analytical model involves the following three steps (a minimal sketch follows below): • Split the historical data into ‘train’ and ‘test’ datasets • Train and test the model based on the generated datasets • Report the accuracy of the model for the various generated scenarios External Document © 2018 Infosys Limited
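The sketch below walks through the three validation steps, assuming scikit-learn is available and that the historical data sits in feature/label arrays `X` and `y`. The logistic-regression model and the 70/30 split are placeholders for whatever model and split ratio the development team actually uses.

```python
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression   # placeholder model
from sklearn.metrics import accuracy_score

def validate_model(X, y, test_size=0.3, seed=42):
    # Step 1: split the historical data into 'train' and 'test' datasets
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=test_size, random_state=seed, stratify=y
    )
    # Step 2: train the model on the training set, then exercise it on the test set
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    predictions = model.predict(X_test)
    # Step 3: report the accuracy of the model for this generated scenario
    return accuracy_score(y_test, predictions)
```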
---
Page: 6 / 8
---
External Document © 2018 Infosys Limited Fig 3: Testing analytical models Use case 4: Testing of AI-powered solutions Chatbot testing framework • Test the chatbot framework using semantically equivalent sentences and create an automated library for this purpose (an illustrative sketch is shown below) • Maintain configurations of basic and advanced semantically equivalent sentences with formal and informal tones and complex words • Automate the end-to-end scenario (requesting the chatbot, getting a response and validating the response action against the accepted output) • Generate automated scripts in Python for execution RPA testing framework • Use open source automation or functional testing tools (Selenium, Sikuli, Robot Class, AutoIT) for multiple applications • Use flexible test scripts with the ability to switch between machine-language programming (where required as an input to the robot) and a high-level language for functional automation • Use a combination of pattern, text, voice, image, and optical character recognition testing techniques with functional automation for true end-to-end testing of applications While testing a model, it is critical to do the following to ensure success: • Devise the right strategy to split and subset the historical dataset, using deep knowledge of the development model and code to understand how it works on the data • Model the end-to-end evaluation strategy to train and recreate the model in test environments with the associated components • Customize test automation to optimize testing throughput and predictability by leveraging customized solutions to split the dataset, evaluate the model and enable reporting
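The sketch below illustrates the "semantically equivalent sentences" idea for chatbot testing. The chatbot endpoint, the JSON response shape and the expected intent name are all assumptions made for the illustration; in practice the library of equivalent utterances would be maintained as configuration.

```python
import requests

# Hypothetical chatbot REST endpoint used only for illustration.
CHATBOT_URL = "https://chatbot.example.com/api/message"

EQUIVALENT_UTTERANCES = [
    "What is my account balance?",           # formal
    "how much money do I have",              # informal
    "Could you tell me my current balance",  # polite/complex
]

def test_balance_intent_is_stable():
    for utterance in EQUIVALENT_UTTERANCES:
        reply = requests.post(CHATBOT_URL, json={"text": utterance}, timeout=15).json()
        # Every semantically equivalent phrasing should resolve to the same intent
        assert reply.get("intent") == "check_balance", (
            f"'{utterance}' resolved to {reply.get('intent')!r} instead of 'check_balance'"
        )
```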
---
Page: 7 / 8
---
External Document © 2018 Infosys Limited Conclusion AI frameworks typically follow 5 stages – learning from various data sources, input data conditioning, machine learning and analytics, visualization, and feedback. Each stage has specific failure points that can be identified using several techniques. Thus, when testing the AI systems, QA departments must clearly define the test strategy by considering the various challenges and failure points across all stages. Some of the important testing use cases to be considered are testing standalone cognitive features, AI platforms, ML-based analytical models, and AI- powered solutions. Such a comprehensive testing strategy will help organizations streamline their AI frameworks and minimize failures, thereby improving output quality and accuracy. External Document © 2018 Infosys Limited
---
Page: 8 / 8
---
© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
***
# Infosys Whitepaper
Title: RPA validation pitfalls and how to avoid them
Author: Infosys Limited
Format: PDF 1.6
---
Page: 1 / 8
---
PERSPECTIVE RPA VALIDATION PITFALLS AND HOW TO AVOID THEM MANOJ AGGARWAL, Delivery Manager, Infosys
---
Page: 2 / 8
---
External Document © 2018 Infosys Limited Abstract While many organizations are adopting robotic process automation (RPA) to increase operational efficiency, the approach for testing and validating robots is still a software-first strategy. The lack of the right testing and validation strategy can result in under-performing robots that are unable to meet the desired business and efficiency outcomes. This paper outlines erroneous RPA testing strategies and provides an effective approach to RPA validation.<|endoftext|>External Document © 2018 Infosys Limited
---
Page: 3 / 8
---
External Document © 2018 Infosys Limited Introduction To execute any business operation or workflow in large enterprises, services personnel or business teams usually work with multiple IT applications. Robotic process automation (RPA) can automate these business workflows across multiple and disparate applications using software robots that mimic the actions of human users, as depicted below. The increased adoption and criticality of RPA for business process automation makes it essential to design and build RPA robots that exceed the level of productivity and quality delivered by human users. However, many RPA robots tend to underperform due to poor design and improper implementation, in addition to workload scheduling decisions. Further, QA teams struggle to identify these issues during robot validation, primarily due to recurring test strategy and test execution challenges spanning what to test, how to test and where to test: Focus on validating IT application functionality: Since business process automation is always implemented on applications used by the business, it is fair to assume that these applications are tested and work properly in production. However, QA teams continue to invest significant effort to validate application behavior, such as querying the database to verify data updates and checking error messages in negative scenarios. Missing non-functional requirements driven by the operating environment: Sometimes, validation teams miss capturing and validating operating environment requirements like business SLAs, the process execution window and dependent activities.<|endoftext|>Validation of robots as software: One of the most striking challenges is that validation teams treat robotic process testing as software validation. Their approach focuses on automating test steps, validating robot inputs against the database and re-verifying the steps performed by robots. While some of these validation steps are relevant, teams tend to miss the key validations required for the seamless and predictable functioning of RPA robots.<|endoftext|>Automating RPA testing: There are many instances where efforts are made to automate RPA testing using test automation tools without much clarity on what needs to be automated, given that RPA itself works in an automated manner.<|endoftext|>Validation in incorrect environments: Many QA teams begin validating RPA robots in SIT and other lower environments where the application baseline is usually not in sync with the production environment. This asynchrony results in robots failing to perform in the production environment during the first few runs.<|endoftext|>
---
Page: 4 / 8
---
External Document © 2018 Infosys Limited A successful example of robot validation A good example for RPA validation can be gleaned from the manufacturing industry that has successfully deployed physical robots over many years with a very high level of quality and predictability. On comparing how physical robots are tested with how RPA robots should be tested, the following key testing attributes emerge: 1. The robot’s ability to perform as per the instructions 2. The robot’s ability to perform tasks autonomously 3. The robot’s ability to perform at higher efficiency 4. The robot’s ability to handle exceptions gracefully A recommended approach for RPA validation Combining the knowledge of software validation with the processes followed by the manufacturing industry for robot validation provides a fitting approach for RPA validation that overcomes the testing challenges mentioned above: What to test How to test Where to test Validation scenarios: Since robots are designed to mimic business workflows, com- panies can re-use/create test cases similar to UAT test cases for RPA validation. The focus should be on validating business flows and business exceptions rather than application functionality Non-functional requirements validation in the operating environment: Teams should capture non-functional requirements like the ability to operate autonomously, SLA delivery efficiency and expected volume to be processed within preset timeframes Functional validation of robots: Validation teams should set up the input case data for all business cases within the determined scope and then begin robot processing. They should also validate the outcomes and re-runs in cases of errors/unexpected outcomes Exception scenarios: Teams should set up the inputs case data with exception scenarios and validate the robot’s ability to identify and report these exceptions for human intervention. They should also re-run and validate these tests in case of errors/un- expected outcomes UAT or higher environments: QA teams should perform validation of RPA only in UAT or higher environments to prevent issues arising from incorrect application versions or environment set-up Stabilization in production: Teams should train robots with simple cases and at low volume during initial runs and gradually increase complexity and volume to ensure minimal impact in case of any processing mismatch
---
Page: 5 / 8
---
External Document © 2018 Infosys Limited External Document © 2018 Infosys Limited
---
Page: 6 / 8
---
External Document © 2018 Infosys Limited Skill recommendations RPA validation teams should have a deep understanding of business processes across regular and exception workflows to ensure that robots are tested for all possible business scenarios. Additionally, the teams should be well aware about the scheduling mechanism and controller team operating model of the RPA robots so that they can validate the operational efficiencies of RPA robots before deployment into production. Each RPA validation team should also leverage a shared team of experts who have in-depth knowledge of object design and error handling mechanisms. This is important to maintain focus on the performance tuning of robots to meet the operational efficiency requirements.<|endoftext|>As the RPA ecosystem continues to evolve, one can expect more complex robot implementations with optical character recognition (OCR) and natural language processing (NLP) inputs as well as AI integrations. Moreover, robots will increasingly compete for shared resources like server time and application access to complete the business processes. In such a scenario, RPA validation teams should continue acquiring in-depth knowledge on evolving validation needs and improve the strategies to support these complex business requirements. Exit criteria Before a robot is deployed into production, it is important to check that it has been validated to exceed the business benchmarks and is predictable. Here are some key parameters that should be considered as the exit criteria for robots before they are deployed into live production: • The adherence to business SLAs should be higher than that of manual processing SLAs • The number of business processing exceptions should be equal to or less than the number of manual processing exceptions • The total number of cases processed without human intervention should be equal to or greater than 95% of the in- scope cases • Robot availability should exceed 98% Typically, robots meeting these criteria are well-placed to take over business processing responsibilities from human users and are proven to deliver the desired business benefits.<|endoftext|>
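The exit criteria listed above can be turned into an automated deployment gate. The sketch below does exactly that; the metric names and the sample numbers are illustrative, while the thresholds (better-than-manual SLA adherence, no more exceptions than manual processing, at least 95% straight-through processing, and above 98% availability) come from the criteria in the text.

```python
def robot_ready_for_production(robot_metrics: dict, manual_baseline: dict) -> bool:
    """Evaluate the exit criteria before a robot is deployed to live production."""
    checks = [
        robot_metrics["sla_adherence_pct"] > manual_baseline["sla_adherence_pct"],
        robot_metrics["processing_exceptions"] <= manual_baseline["processing_exceptions"],
        robot_metrics["straight_through_pct"] >= 95.0,   # cases processed without human intervention
        robot_metrics["availability_pct"] > 98.0,
    ]
    return all(checks)

# Illustrative figures only
ready = robot_ready_for_production(
    {"sla_adherence_pct": 99.1, "processing_exceptions": 3,
     "straight_through_pct": 96.4, "availability_pct": 99.2},
    {"sla_adherence_pct": 97.5, "processing_exceptions": 5},
)
assert ready is True
```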
---
Page: 7 / 8
---
External Document © 2018 Infosys Limited Conclusion RPA has the potential to transform how organizations execute business workflows by enabling higher efficiency, faster outcomes and almost error-free operations. One of the key success drivers is the right testing strategy for RPA validation. Validating robots using existing software testing models is ineffective because it often results in non-performing or under- performing assets. To overcome this challenge, organizations need a strategy that tests the robot’s capacity to work autonomously, handle exceptions well, operate at higher efficiency, and follow preset instructions. With the right validation approach, skills and exit criteria, RPA can help organizations meet the desired business outcomes of error-free, predictable and efficient business operations.<|endoftext|>External Document © 2018 Infosys Limited
---
Page: 8 / 8
---
© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this |
document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
***
# Infosys Whitepaper
Title: Scaling continuous testing across the enterprise
Author: Infosys Limited
Format: PDF 1.6
---
Page: 1 / 8
---
WHITE PAPER SCALING CONTINUOUS TESTING ACROSS THE ENTERPRISE Abstract Over the years, organizations have invested significantly in optimizing their testing processes to ensure continuous release of high-quality software. Today, this has become even more important owing to digital transformation. This paper examines some of the critical features of setting up a robust continuous testing practice across the enterprise. It considers the technology and process standpoints and provides guidelines to ensure a successful implementation from inception to delivery of high-quality software.<|endoftext|>
---
Page: 2 / 8
---
External Document © 2020 Infosys Limited Establishing a continuous testing (CT) practice is an organization-level change that cannot be executed with a ‘big bang’ approach. It should be conceptualized, implemented and allowed to mature at a program level. Once continuous testing gains inroads into various programs across the enterprise, it can then become the new norm for enterprise software delivery. To enable a seamless journey towards organization-wide continuous testing, the gradual change must be sponsored and supported by senior IT leadership. To begin with, enterprises must set up a program-level center of excellence (CoE) and staff it with people who have a deep understanding of automation. This team should be responsible for: • Identifying the right set of tools for CT • Building the right solutions and practices that can be adopted by the program • Integrating automated tests with the DevOps pipeline for continuous testing These action points lay the foundation for establishing CT at a program level. They can subsequently be improved and aligned depending on the enterprise’s needs. Introduction External Document © 2020 Infosys Limited
---
Page: 3 / 8
---
External Document © 2020 Infosys Limited The journey Once the CoE has been set up, enterprises must focus on expanding the practice of continuous testing within all program-level agile teams across the organization. This can be done by promoting the existing program-level CoE into an enterprise-level CoE with broader responsibilities. The primary goal of the enterprise-level CoE should be: • To ensure CT adoption by all scrum teams • To establish real-time reporting, metrics and measurement for faster adoption • To identify areas with zero/low automation and provide support Enterprises can also accelerate CT adoption by extensively showcasing the benefits realized from programs.<|endoftext|>Fig 1: Continuous testing journey of an enterprise External Document © 2020 Infosys Limited Transform from Program COE – Org COE Branding at Org Level –solutions, metrics Matured CT at Program level Help Program Scrums to adopt CT Establish Program Level CT COE Challenges during enterprise adoption Extending continuous testing operations across the enterprise can be daunting. Enterprises should be prepared to face resistance to change arising from higher operational costs and limited technical knowledge. Some of the key challenges of CT adoption and the ways to address them are listed below: Challenges during enterprise adoption The way forward Multiple automation solutions – Teams often devise their own ways to implement frameworks, leading to redundant code, confusion and wastage During the initial assessment phase, enterprises can invest in developing a solution that is capable of porting with existing enterprise code Low technical knowledge for automation – QA organizations often don’t have the necessary skills to build and maintain automation scripts Focus on talent enablement through brown-bag meetings, training workshops and support. Enterprises can also use script-less and model-based automation tools Pushback from certain lines of business (LOBs) – Most enterprises use disparate technology stacks and execution methodologies. It is more challenging to adopt CT in some domains on legacy such as mainframe systems, PowerBuilder environments and batch jobs Build a robust automation framework that supports different technology stacks for an improved usability experience. Define different goals for different technology stacks (like lower automation goals for legacy and higher automation targets for APIs) to ensure parity across teams during adoption Limited funding – Many customers view continuous testing as an expense rather than an investment since it includes automation development as well as integration costs Enable individual QA teams to adopt in-sprint automation, thereby transferring the high initial cost to individual projects. This reduces the overall enterprise CT cost to the expense of integrating automation with the DevOps pipeline
---
Page: 4 / 8
---
External Document © 2020 Infosys Limited Federated Operating Model Tool Standardization Enterprise Solutions & Process Automation Enterprise Continuous Testing CT Practices & Processes & Enterprise adoption Assess and Identify the Best Tool sets Build The right Solution for Automation Training, Docs & Videos, Org Level CT metrics Centralized COE, with Distributed Automation teams Infosys approach on establishing CT 1. Establish a federated operating model Based on our past experience across several engagements, Infosys has identified four main dimensions that are essential to establishing and operating a successful continuous testing practice across any enterprise. These dimensions are described below.<|endoftext|>A robust operating model is vital for any practice. Earlier models comprising functional testing teams and shared automation teams failed because these hamper functional teams from achieving in-sprint/progressive automation within 2-3-week sprint windows. Thus, the way forward is to embrace automation as a culture and consider it as a business-as- usual process for anything that a tester does.<|endoftext|>Care must be taken to ensure that generic and complex solutions are built and maintained by a centralized CT CoE team. In their daily tasks, regular QA teams must adapt to artifacts developed by the CoE team. Through this federated method of operations, enterprises can simply deploy a few people with specialized skills to create CT artifacts, thereby supporting regular testers with relative ease.<|endoftext|>Fig 2: Key dimensions for establishing a successful CT practice Fig 3: Structure, roles and interactions in a federated operating model with centralized CoE • Liaise with CI-CD to integrate CT with pipeline • Develop generic automation artifacts – Frameworks / processes / solutions • Provide support, handholding • Metrics & reporting • Build automation tests for supported apps • Leverage artifacts created by CT COE • Request for enhancements and provide suggestions • Request for help and support for CT adoption Enterprise QA teams Org Level CT COE Team External Document © 2020 Infosys Limited
---
Page: 5 / 8
---
External Document © 2020 Infosys Limited 2. Standardize tools The process of standardizing tools should be objective. It should first assess the existing enterprise tools, identify those that can be reused and procure new tools for CT. Here are some important considerations when evaluating any tool: • Tool fitment – Ensure the tools are the right ones for the tasks they have to service • Tool license and operational cost – This should fall within the client’s budget • Tool scalability – It should be capable of scaling to meet future organizational needs • Tool integration – In a DevOps world, tools should be easily compatible with other tools and pipelines • Tool support – There should be adequate online and professional support services for the tool Currently, there are many tools available in the market that support the various stages of the software development lifecycle (SDLC). Enterprises must be careful to onboard the right set of tools that will aid software development and QA.<|endoftext|>3. Build enterprise and process automation solutions The automation solutions built to aid and enable CT are at the heart of any robust continuous testing practice. These solutions should encompass every phase of the software testing lifecycle. Some critical automation artifacts that must be developed to sustain a robust CT practice are: a) Automation frameworks – This is the basic building block that aids automation and enables CT across an enterprise, as shown below in Fig 4.<|endoftext|>b) Distributed and parallel test execution approaches – Speedy test execution is critical to accelerate software delivery. As the volume of automated tests increases, enterprises should adopt distributed and parallel execution either by onboarding off-the-shelf tools or by building custom solutions as per enterprise requirements c) Test data automation – Testers spend a significant amount of time setting up test data. Solutions should be built to automate the manufacturing, cloning/masking, mining, and management of test data from requisition to provisioning (an illustrative masking sketch follows after Fig 4) d) Process and environment automation – This involves automating all test and defect management-related processes. Environment provisioning automation is essential to make the entire CT practice cost effective and manageable at an enterprise scale; viable options include cloud-based or infrastructure virtualization solutions.<|endoftext|>It is important to note that automation solutions are simply a critical
subset of all the solutions that should be developed as part of establishing a successful CT practice. Care must be taken to prioritize developing these solutions and to weigh their benefits against the needs of the enterprise.<|endoftext|>Fig 4: Characteristics of a robust automation framework – Controllable (centrally managed), Scalable (ready for the future), Agnostic (agility for change), Multi-technology support (enterprise coverage), Reusable (write once, run many times), Portable (run anywhere), Open integration (with other tools), and Ease of use (scriptless to the extent possible)
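As an illustration of the test data masking item above, the sketch below pseudonymizes sensitive columns in a cloned dataset deterministically, so that masked values remain referentially consistent across files. The column names and CSV format are assumptions; a real solution would plug into the enterprise data stores.

```python
import csv
import hashlib

# Assumed sensitive columns for the illustration.
SENSITIVE_COLUMNS = {"customer_name", "email", "ssn"}

def mask(value: str) -> str:
    # Deterministic pseudonym: the same input always maps to the same token.
    return "MASK_" + hashlib.sha256(value.encode()).hexdigest()[:10]

def mask_dataset(src_path: str, dst_path: str) -> None:
    """Copy a cloned dataset while masking sensitive columns."""
    with open(src_path, newline="") as src, open(dst_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            for col in SENSITIVE_COLUMNS & set(row):
                row[col] = mask(row[col])
            writer.writerow(row)
```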
---
Page: 6 / 8
---
External Document © 2020 Infosys Limited 4. Establish enterprise CT processes and drive adoption Successful products are the sum total of their technologies, processes and best practices, which make them attractive and easy to adopt. Having explored the technical solutions and artifacts for CT, let us examine some of the critical CT processes and strategies for enterprise adoption.<|endoftext|>a) CT processes – Robust processes act as a pointer for all who embark on the CT journey. The key processes to be established are: • Day-wise activity tasks to help teams adopt in-sprint automation, as shown in Fig 5 • CT metrics, measurement and reporting, such as overall automation coverage, in-sprint automation percentage, percentage of defects found through automation, execution time per run, code quality, and code coverage (a sample calculation is shown below) b) Adoption strategies – Implementing CT across the enterprise can be facilitated in the following ways: • Providing organization-level branding such as communications, mailers and workshops, as well as talent enablement and support through brown-bag meetings, demos, trainings, self-learning videos, playbooks, etc. • Accelerating adoption through centralized CT metrics reporting that introduces healthy competition, faster adoption and early identification of problem areas We, at Infosys, believe these dimensions can help enterprises set up and operate a mature continuous testing practice. Nevertheless, it is important to customize these according to the needs of each organization. Fig 5: Daily tasks during a 10-day sprint – baselined requirements, script development, obtaining artifacts from development (such as HTML files and wireframes) for scripting, daily code deployment for incremental changes, unit testing of scripts, QA code deployment for changes, automated script execution, addressing code fixes for defects, and rerunning tests for sign-off, spread across days 1-3, 4, 5-7, 8 and 9-10 External Document © 2020 Infosys Limited
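A small sketch of how two of the CT metrics mentioned above could be computed; the counts are illustrative and would normally be pulled from the test-management and CI tools.

```python
def ct_metrics(total_cases, automated_cases, sprint_stories, stories_automated_in_sprint):
    """Compute overall automation coverage and in-sprint automation percentage."""
    return {
        "automation_coverage_pct": 100.0 * automated_cases / total_cases,
        "in_sprint_automation_pct": 100.0 * stories_automated_in_sprint / sprint_stories,
    }

print(ct_metrics(total_cases=800, automated_cases=620,
                 sprint_stories=25, stories_automated_in_sprint=21))
# {'automation_coverage_pct': 77.5, 'in_sprint_automation_pct': 84.0}
```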
---
Page: 7 / 8
---
External Document © 2020 Infosys Limited Conclusion Establishing a continuous testing practice across the enterprise comes with its own challenges and complexities. These include insufficient knowledge of automation, limited funding, resistance to change, and disparate technologies. When developing a strategy for enterprise-wide adoption of CT, Infosys recommends paying attention to four critical dimensions. These include creating a federated operating model, standardizing tools, building automation solutions, and establishing CT processes. Through our approach, enterprises can benefit from a roadmap to successfully implement CT with a dedicated center of excellence. The successful adoption of continuous testing mandates changing how people work and embracing automation as part of the organizational culture and business as usual.<|endoftext|>External Document © 2020 Infosys Limited
---
Page: 8 / 8
---
© 2020 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected Mohanish Mishra, Senior Project Manager – Infosys Limited About the
Author
***
# Infosys Whitepaper
Title: IoT Security Assurance
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 8
---
VIEW POINT Abstract IoT is one of the rapidly growing emerging technologies. It is entering in all walks of our life with connected cars, connected smart homes, connected healthcare, wearables, etc. and exchanging large amount of sensitive/PII information over internet. This increases the security risk of IoT devices. In this document we are proposing an approach for security assessment of various IoT components.<|endoftext|>IOT SECURITY ASSURANCE
---
Page: 2 / 8
---
Introduction New testing processes and tools are developed continuously to improve the quality of software. Today, the IT industry is gaining momentum in agile delivery and development and operations (DevOps), which is creating new possibilities by integrating development, test, and operations teams. To keep pace with these changes, testing processes and tools need to transform so that testing platforms are simple and accessible to all stakeholders. External Document © 2018 Infosys Limited
---
Page: 3 / 8
---
IoT Systems Architecture It is evident from the above figure that IoT devices have multi-dimensional usage and are omnipresent. This wide variety of IoT applications poses some interesting use cases with respect to data security. Suppose you are using a mobile app to unlock a vehicle and control peripherals like its air conditioning and music system. If someone is able to intercept the communication channel, say Wi-Fi, used between the app and the peripheral receiver, the attacker can pose a threat to the owner as well as the vehicle.<|endoftext|>The typical layers of an IoT system are: I. Device – sensors and actuators for monitoring and notifying events II. Collector – systems that collect and pre-process the data sent by the sensors III. Data – the repository of data accumulated after pre-processing; it can be local or on the cloud IV. Application – processes the data according to the required usage V. User Interface – the web or mobile app interface that provides relevant information to the user VI. Communication Channel – the wired or wireless communication link between two layers; it may be entirely absent for a locally connected layer, say when the collector and data layers reside on the same system.<|endoftext|>Figure 2: IoT Architecture Figure 1: Ranking of the IoT applications External Document © 2018 Infosys Limited
---
Page: 4 / 8
---
Attack Surfaces The Internet of Things infrastructure can be divided mainly into four components: 1. Devices (gateways, sensors, actuators) 2. Communication channel (Wi-Fi, Bluetooth) 3. Cloud interface 4. Application interface (mobile and/or web) Table – IoT components and their attack surfaces: • Devices (sensors, gateways) – device memory, firmware, physical interfaces like USB ports, web interfaces, admin interfaces, and the update mechanism • Communication channel – device network traffic over LAN or wireless (Wi-Fi, ZigBee, Bluetooth) • Cloud interface – access to sensitive data/PII stored on the cloud through injection attacks, weak passwords or default credentials, and insecure transport encryption • Application interface (web and mobile) – access to sensitive data or PII by exploiting vulnerabilities, such as the OWASP web and mobile Top 10, in application interfaces<|endoftext|>Latest attacks Insecurely implemented IoT devices are routinely being hacked and even used as accessories in cyber-attacks.<|endoftext|>1. Hotel room locks prone to hacking: Onity, a United Technologies company, is a leading supplier of electronic locking systems. In 2012, a Mozilla developer, Cody, found vulnerabilities in these locks. The device created by Cody reads the lock’s memory and extracts the cryptographic key information; it then sends that information to the door lock, which allows the hacker to gain access to the room. 2. Massive DDoS attack zombifies 25,513 CCTV cameras: Researchers from Sucuri have shown that CCTV cameras can be, and are being, used for DDoS attacks. The attackers use these cameras as botnets. The attack may last for days and can surge to several thousand HTTP requests per second. Figure 3 depicts the geographic distribution of the CCTV botnet.<|endoftext|>Figure 3. CCTV DDoS Botnet Geographic Distribution – Rest of the World 25%, Taiwan 24%, USA 12%, Indonesia 9%, Mexico 8%, Malaysia 6%, Israel 5%, Italy 5%, Vietnam 2%, France 2%, Spain 2% (Data courtesy: https://blog.sucuri.net/2016/06/large-cctv-botnet-leveraged-ddos-attacks.html) External Document © 2018 Infosys Limited
---
Page: 5 / 8
---
3. Nissan Leaf electric car hack vulnerability disclosed: 2016 proved to be a bad year for IoT security. In early 2016, Troy Hunt, an Australian web security expert, demonstrated how the Nissan Leaf’s companion app could be used to hack the vehicle. The app was not even required, as a simple web request via a browser could control the vehicle’s AC and heating system and could also reveal the owner’s identity if the VIN was known. The researcher proved this while sitting at a computer in Australia, controlling the systems of a vehicle that belonged to an acquaintance of his in the United Kingdom. Though the issue may not be life threatening, it could still be used to drain the car battery, to say the least, which in turn can leave the owner in perilous situations.<|endoftext|>Thus, the security incidents discussed above are examples of how the lack of security measures in IoT devices can not only compromise personal security but can also be leveraged to impact the security of other internet services.<|endoftext|>Figure 4. Extent of attack on Nissan Leaf<|endoftext|>Infosys security assessment approach for IoT The following practices should be followed while designing an IoT system to ensure security. If a protocol like ZigBee is used, the following methods can be implemented to mitigate possible security issues: 1. Implement AES encryption; it provides confidentiality as well as integrity. 2. Implement master keys to secure the key establishment procedure. 3. Implement link keys to encrypt the information sent across nodes. 4. Implement network keys to authenticate and validate each device that attempts to join the network.<|endoftext|>If the MQTT protocol is being used, follow these methods: 1. Implement a firewall with a sophisticated ruleset for every connection to an MQTT broker. 2. Block all UDP packets, as MQTT uses TCP. 3. Do not block all ICMP packets, as responses to PING and TRACEROUTE would be hampered; instead, inspecting ICMP packets is the better approach. 4. Block traffic to any ports that are not needed for the MQTT system. The standard MQTT ports are: • 1883: the default port for MQTT over TCP • 8883: the default port for MQTT over TLS. Use MQTT over TLS for all communications (a minimal client sketch follows below). Use updated software for all components, since newer versions fix older security vulnerabilities. Apart from the above guidelines, implementing demilitarized zones and load balancers will further strengthen the security of the system. External Document © 2018 Infosys Limited
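A minimal sketch of the "MQTT over TLS on port 8883, no default credentials" guidance, assuming the paho-mqtt 1.x client API. The broker host, credentials and CA bundle path are placeholders.

```python
import ssl
import paho.mqtt.client as mqtt

BROKER_HOST = "broker.example.com"   # hypothetical MQTT broker

client = mqtt.Client(client_id="iot-security-test")
client.username_pw_set("test_device", "strong-unique-password")   # never ship default credentials
client.tls_set(ca_certs="/etc/ssl/certs/ca-certificates.crt",
               tls_version=ssl.PROTOCOL_TLS_CLIENT)                # encrypt all traffic
client.tls_insecure_set(False)                                     # enforce certificate validation

client.connect(BROKER_HOST, port=8883, keepalive=60)               # TLS port, not 1883
client.publish("sensors/temperature", payload="21.5", qos=1)
client.disconnect()
```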
---
Page: 6 / 8
---
Recommended security testing approach for each layer of the IoT stack:
Sensor – Hardware
Security threats: L2: Insufficient Authentication/Authorization; L3: Insecure Network Services; L5: Privacy Concerns; L8: Insufficient Security Configurability; L10: Poor Physical Security
Security assessment test cases:
• Check for password complexity and the password recovery mechanism for the device
• Check for poorly protected credentials and inefficient two-factor authentication
• Check for role-based access control
• Check for open ports and whether any unnecessary ports are in use
• Check whether device memory can be accessed to obtain stored sensitive/personal data, encryption keys or certificates
• Check whether firmware extraction and modification is possible
• Check for user or admin command line interface issues and privilege escalation
• Check whether the device can be reset to an insecure or default state
• Check whether the device or its internal memory can be accessed via USB ports and SD cards
Sensor – Software
Security threats: L3: Insecure Network Services; L9: Insecure Firmware/Software
Security assessment test cases:
• Radio communication analysis between the device and the gateway by attacking ZigBee, Z-Wave and 6LoWPAN
• Attacking Bluetooth Low Energy (BLE)
• Checking whether the device has a direct connection, i.e. connecting to a mobile app on the same network
• Check the firmware update functionality
• Check whether the firmware contains sensitive information
• Check whether the firmware update functionality uses encryption and secure communication, and whether the update file is encrypted
• Check for insecure or misconfigured services like FTP, Telnet, TFTP, Finger and SMB, e.g. misconfigured NAT-PMP services or hard-coded Telnet logins
Gateway (Raspberry Pi: USB)
Security threats: L2: Insufficient Authentication/Authorization; L3: Insecure Network Services; L5: Privacy Concerns; L8: Insufficient Security Configurability; L10: Poor Physical Security
Security assessment test cases:
• Check for password complexity and the password recovery mechanism for the device
• Check for poorly protected credentials and inefficient two-factor authentication
• Check for role-based access control
• Check for open ports and whether any unnecessary ports are in use
• Check whether device memory can be accessed to obtain stored sensitive/personal data, encryption keys or certificates
• Check whether firmware extraction and modification is possible
• Check for user or admin command line interface issues and privilege escalation
• Check whether the device can be reset to an insecure or default state
• Check whether the device or its internal memory can be accessed via USB ports and SD cards
External Document © 2018 Infosys Limited
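As a simple illustration of the 'check for open ports' test case listed for the sensor and gateway layers, the sketch below probes a device for listening TCP ports and flags anything outside an expected allow-list; the device address and expected ports are assumptions for the example.

```python
import socket

DEVICE_IP = "192.168.1.50"     # hypothetical sensor/gateway under test
EXPECTED_PORTS = {8883}        # ports the design actually requires (assumption)

def scan_tcp_ports(host, ports=range(1, 1025), timeout=0.5):
    """Return the TCP ports on `host` that accept a connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:   # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    found = scan_tcp_ports(DEVICE_IP)
    print("Open ports:", found)
    print("Unnecessary ports:", sorted(set(found) - EXPECTED_PORTS))
```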
---
Page: 7 / 8
---
Message broker
Security threats: L4: Lack of Transport Encryption; Insecure Data Storage
Security assessment test cases:
• Check for sensitive data stored on Kafka, as it does not support encryption of data at rest
• Check whether configuration files can be accessed and modified
Cloud Interface
Security threats: I6: Insecure Cloud Interface; OWASP Cloud Top 10 vulnerabilities; Arbitrary code execution
Security assessment test cases:
• Check for insufficient authentication, lack of transport encryption and account enumeration to access data or controls via the cloud website
• Check whether a firewall is configured (if the master of the cluster is exposed to the internet without any firewall in between, anyone with access to the master URI can submit jobs to the cluster remotely)
• Check for arbitrary code execution
• Check for the OWASP Cloud Top 10 risks: R1: Accountability & Data Risk, R2: User Identity Federation, R3: Regulatory Compliance, R4: Business Continuity & Resiliency, R5: User Privacy & Secondary Usage of Data, R6: Service & Data Integration, R7: Multi-tenancy & Physical Security, R8: Incidence Analysis & Forensics, R9: Infrastructure Security, R10: Non-production Environment Exposure
Application – Web Interface
Security threats: I1: Insecure Web Interface; OWASP Web Top 10 vulnerabilities
Security assessment test cases:
• Check for the OWASP Web Top 10 issues: A1: Injection, A2: Broken Authentication and Session Management, A3: Cross-Site Scripting (XSS), A4: Insecure Direct Object References, A5: Security Misconfiguration, A6: Sensitive Data Exposure, A7: Missing Function Level Access Control, A8: Cross-Site Request Forgery (CSRF), A9: Using Components with Known Vulnerabilities, A10: Unvalidated Redirects and Forwards
External Document © 2018 Infosys Limited
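One of the cloud-interface checks above asks whether a cluster master exposed to the internet can be reached without a firewall or authentication in between. A minimal sketch of such a probe is shown below; the URL is a hypothetical test endpoint, and a full assessment would of course go far beyond a single anonymous GET.

```python
import requests

MASTER_URL = "https://master.example.internal:8080"   # hypothetical cluster master UI

def probe_anonymous_access(url, timeout=5):
    """Report whether the endpoint serves content to an unauthenticated client."""
    try:
        resp = requests.get(url, timeout=timeout)
    except requests.RequestException as exc:
        return f"Not reachable anonymously ({type(exc).__name__})"
    if resp.status_code == 200:
        return "FINDING: endpoint answers unauthenticated requests - review firewall/auth setup"
    return f"Endpoint returned HTTP {resp.status_code}"

if __name__ == "__main__":
    print(probe_anonymous_access(MASTER_URL))
```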
---
Page: 8 / 8
---
Application – Mobile Interface
Security threats: I7: Insecure Mobile Interface; OWASP Mobile Top 10 vulnerabilities
Security assessment test cases:
• Check for the OWASP Mobile Top 10 issues: M1: Weak Server Side Controls, M2: Insecure Data Storage, M3: Insufficient Transport Layer Protection, M4: Unintended Data Leakage, M5: Poor Authorization and Authentication, M6: Broken Cryptography, M7: Client Side Injection, M8: Security Decisions Via Untrusted Inputs, M9: Improper Session Handling, M10: Lack of Binary Protections
Conclusion
IoT is, no doubt, a fascinating yet emerging technology. The prioritization of rapid development over security by developers has caused a rise in new IoT vulnerabilities, and a large number of IoT devices have already fallen victim to hacks, botnets and other attacks. Proper security frameworks for IoT, like those available for .NET, Java, Android and iOS, should be made available. IoT solution developers must have security know-how. IoT application development should follow a secure development life cycle, and security tests have to be mandatory. In addition, advanced approaches like machine learning can also be applied to ensure IoT security.<|endoftext|>
References:
1. http://www.forbes.com/sites/thomasbrewster/2014/11/07/car-safety-tool-could-have-given-hackers-control-of-your-vehicle/#2b98d1ef21b0
2. http://www.computerworld.com/article/2487425/cybercrime-hacking/target-breach-happened-because-of-a-basic-network-segmentation-error.html
3. http://www.ey.com/Publication/vwLUAssets/EY-cybersecurity-and-the-internet-of-things/$FILE/EY-cybersecurity-and-the-internet-of-things.pdf
4. http://h30499.www3.hp.com/t5/Fortify-Application-Security/HP-Study-Reveals-70-Percent-of-Internet-of-Things-Devices/ba-p/6556284#.VHMpw4uUfVc
5. http://www2.deloitte.com/content/dam/Deloitte/global/Documents/Technology-Media-Telecommunications/gx-tmt-Iotecosystem.pdf
6. http://internetofthingsagenda.techtarget.com/info/getstarted/Internet-of-Things-IoT-Security-Threats
7. http://internetofthingswiki.com/iot-trends-in-2016/300/
8. https://iotsecuritywiki.com/
About the authors
• Amitesh Gaurav is a Systems Engineer with the Infosys Center for Emerging Technology Solutions group. He works as a security analyst, and his focus areas are web, mobile and IoT application security assurance.<|endoftext|>
• Jayaprakash Govindaraj is a Senior Technology Architect and leads the Security CoE at the Infosys Center for Emerging Technology Solutions group. His focus areas are web, mobile and IoT application security assurance, secure development and managed security services.<|endoftext|>
© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
***
|
# Infosys Whitepaper
Title: Deployment of Public-Key Infrastructure in Wireless Data Networks
Author: Infosys Technologies
Format: PDF 1.4
---
Page: 1 / 10
---
A Parametric Approach for Security Testing of Internet Applications
Arun K. Singh, E-Commerce Research Lab. Anand J. Iyer, Infosys Software Labs for Test Execution and Research. Venkat Seshadri, Product Competency Center. Infosys Technologies Limited, Electronic City, Bangalore, India. {arunks, Anandj_iyer, venkatar}@infy.com
Abstract. Security is one of the prime concerns for all Internet applications. It is often noticed that during testing of an application, security does not get due focus, and whatever security testing is done is mainly limited to security functionality testing as captured in the requirements document. Any mistake at the requirement gathering stage can leave the application vulnerable to potential attacks. This paper discusses the issues and challenges in security requirement gathering and explains how it differs from normal functionality requirement gathering. It presents an approach to handle the different factors that affect application security testing.
1 Introduction
As compared to traditional client-server or mainframe-based applications, Internet applications have different security requirements [1]. For any Internet application, there is a need to provide unrestricted global access to the user community. This also brings in the threat of outside attacks on the system. Hackers may try to gain unauthorized access to these systems for a variety of reasons and disturb the normal working of the application. © QAI India Ltd, 3rd Annual International Software Testing Conference, 2001. No use for profit permitted.<|endoftext|>
---
Page: 2 / 10
---
The security of Internet applications necessarily means the prevention of two things: destruction of information and unauthorized availability of information. The broad spectrum of application security can be expressed as "PA3IN", viz. Privacy, Authentication, Authorization, Audit control, Integrity and Non-repudiation. The first issue in application security is to provide data privacy to the users. This means protecting information confidentiality from the prying eyes of unauthorized internal users and external hackers. Before granting access permissions to users, it is essential to ensure their legitimacy. The process of identifying users is called authentication. Establishing the user's identity is only half the battle. The other half is access control/authorization, which means attaching information to various data objects denoting who can and who cannot access the object and in what manner (read, write, delete, change access control permissions, and so forth). The third 'A' is for audit control, which means maintaining a tamper-proof record of all security-related events. And then, when the data is in transit across the network, we need to protect it against malicious modification and maintain its integrity. Non-repudiation means the user should be unable to deny the ownership of the transactions made by them. It is implemented using Digital Signatures [3]. This paper presents an approach for testing the security of Internet applications.
2 Issues in Testing Security of Internet Applications
The two main objectives of application security testing are to:
1. Verify and validate that the security requirements for the application are met.
2. Identify the security vulnerabilities of the application under the given environment.
For security testing of an application, the first step is to capture the requirements related to each security issue. This can be a tricky and painstaking exercise because the security requirements are seldom known very clearly at the time of project initiation. Traditional use case approaches have proven quite useful in general requirements engineering, both for eliciting requirements and getting a better overview of the stated requirements [4].
---
Page: 3 / 10
---
However, security requirements essentially need to concentrate on what should not happen in the system, and this cannot be captured by the traditional use case approach. Also, the security model of Internet applications has undergone many changes. The traditional model for securing an application from outside elements mainly relied on access control. This model was based on creating a hard perimeter wall around the system and providing a single access gate that can be opened only for authenticated users. This security model has worked well with most simple Internet-based applications. The gateway here refers to a firewall that classifies all users as "trusted" or "untrusted". In this simplistic model of security, every "trusted" user who is allowed to cross the gate gains access to every portion of one's business and no further security checks are done. As the requirements for security increase, applications need to implement fine-grained security. Instead of allowing the trusted user to access every portion of the business, modern security models divide the business domain into many regions and ensure different levels of security for each region. This means creating a security perimeter for each region. The first level of security check cannot be very rigorous, as one would want to let in prospective customers, vendors and service providers as quickly as possible. Most Internet applications today have multiple security regions with different levels of security, and these regions can be nested or overlap with other regions within the same application. While developing an approach for testing the security of these kinds of Internet applications, proper strategy planning is essential. It is better to plan security testing at the requirement gathering stage itself because there are many issues which cannot be captured later on by usual methods of testing [2]. One such example is testing for a denial of service attack. The issues discussed above make security testing of Internet applications a very challenging task. In the next section, a parametric approach to security testing is presented and each step is explained with the help of a sample example.<|endoftext|>
---
Page: 4 / 10
---
3 The Parametric Approach for Security Testing
As discussed in the previous section, many of the security parameters cannot be captured and tested using the traditional approach. In the proposed parametric approach for security testing, before we start the requirement gathering, a template to enlist all security parameters is created. This task can be achieved in four steps as described below:
1. Create an exhaustive list of all security issues in the application
2. Identify all possible sub-parameters for each of these issues
3. List all the testing activities for each sub-parameter
4. Assign weightages corresponding to the level of security and priority.
Once this template is ready, it can be used to streamline each stage of the testing lifecycle. The lifecycle process of security testing is similar to the software development lifecycle process. In the proposed approach, the security testing lifecycle stages are as follows (Figure 3.1: Security testing life cycle – capture security test requirements, test design and analysis, test bed implementation, test reporting):
a) Capture security test requirements
b) Analyze and design security test scenarios
c) Test bed implementation
d) Interpreting test reports
In the following sub-sections, each of these stages is explained.<|endoftext|>
---
Page: 5 / 10
---
3.1 Parametric Approach to Capture Security Test Requirements
To define the scope of security testing, check the stated requirements against the parametric template. This will help in identifying all the missing elements or gaps in security requirement capture. The next step is to allocate appropriate weightages to the various security sub-parameters. These weightages would typically be determined by the nature of the business domain that the application caters to and the heuristic data available with the security test experts. For example, consider an e-shopping application for an online store. The key security requirement for such an application would be to ensure that the user who provides his own payment information is indeed who he claims to be, and to make sure that the information provided is correct and is not visible to anyone else. From that perspective, authentication, confidentiality and non-repudiation are the key parameters for consideration. One possible set of weightages for these parameters could be 0.5, 0.25 and 0.25 respectively (on a scale of 0 to 1). This is in line with the impact a security breach could have on the business. The advantage of assigning appropriate weightages is that it helps to optimize the test scenarios for each of these parameters. Table 3.1 on the next page presents a sample parametric table. The snapshot in table 3.1 only comprises those parameters that are relevant for testing the application requirement. The main security parameters, as listed in table 3.1, are Authentication, Authorization, Access control, Non-repudiation and Audit control. This list is just a sample, and in an actual scenario there can be many more parameters in the template. These parameters are further classified according to testing subtask. For example, the requirement of authentication can be implemented using userid/password, smart card, biometric and digital certificates, and each of these implementations has different testing requirements. Based on the testing complexity, appropriate weightages are assigned. The weightages can be determined based on past experience and the series of interactions conducted with the customer during the requirement gathering stage.<|endoftext|>
---
Page: 6 / 10
---
Going back to the example, since biometrics require substantial investment from the client's side, digital certificates score higher than the other modes of implementing authentication. The last column holds the priority level assigned to each sub-parameter. The business domain and the experience of the test expert determine the priority level assigned to each sub-parameter. For example, consider the case of an Internet banking application. Here, authentication has a slightly higher priority than authorization or access control, whereas in the case of a B2B application, authorization and access control take higher priority. All the parametric values are recorded, and this constitutes a requirement document outlining the areas and the level of testing that has to be executed.
Table 3.1 – Sample parametric template (dimensions on a scale of 1–5, priority on a scale of 1–100):
1. Authentication – corroboration that an entity is who it claims to be: No authentication (0); Password based (3); Smart card and PIN (3); Biometric (4); Digital certificates (5). Priority: 10
2. Authorization: No authorization (0); Role based (3); User based (3); Using electronic signatures, for non-repudiation (3); Role and user based (4). Priority: 10
3. Access control: No ACL (0); Context based (3); User based (3); Role based (3); All three (4). Priority: 7
4. Non-repudiation: Nothing (0); Digital signature (3); Encryption and digital signature (4). Priority: 5
5. Audit control: Nothing (0); Use firewall/VPN logs (1); Custom-made logs (3); Employ managed security services (4). Priority: 5
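As an aside, the parametric template in Table 3.1 lends itself to a simple machine-readable form. The sketch below captures the sample sub-parameter scores and priorities and derives an average weightage per parameter; the averaging rule is an assumption for illustration, so the resulting numbers need not match Table 3.2 exactly.

```python
# Illustrative encoding of a Table 3.1-style parametric template.
template = {
    "Authentication":  {"scores": [0, 3, 3, 4, 5], "priority": 10},
    "Authorization":   {"scores": [0, 3, 3, 3, 4], "priority": 10},
    "Access Control":  {"scores": [0, 3, 3, 3, 4], "priority": 7},
    "Non-Repudiation": {"scores": [0, 3, 4],       "priority": 5},
    "Audit Control":   {"scores": [0, 1, 3, 4],    "priority": 5},
}

def summarize(tmpl):
    """Average each parameter's sub-parameter scores and sort by priority, then weightage."""
    rows = [(name, round(sum(entry["scores"]) / len(entry["scores"]), 2), entry["priority"])
            for name, entry in tmpl.items()]
    return sorted(rows, key=lambda row: (-row[2], -row[1]))

for name, weightage, priority in summarize(template):
    print(f"{name:<16} weightage={weightage:<5} priority={priority}")
```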
---
Page: 7 / 10
---
3.2 Parametric Approach to Security Test Analysis and Design
Once the weightages for each security parameter are finalized, the next task is to create test scenarios for testing the security requirements. The weightage assigned to each sub-parameter suggests the number of scenarios that need to be tested for a particular requirement and the complexity involved. The parametric approach also helps in identifying the extent of data that would be needed for test execution, as well as the distribution of data based on the various scenarios. It will also help in identifying the kind of tools that will be needed and in determining the appropriateness of a tool for a given type of test. The data recorded with the parametric template can now be tabulated in the following form:
Table 3.2 (weightages on a scale of 1–5, priority on a scale of 1–100):
1. Authentication – weightage 3, priority 10
2. Authorization – weightage 2.75, priority 10
3. Access Control – weightage 2.6, priority 7
4. Non-Repudiation – weightage 2.3, priority 5
5. Audit control – weightage 2.0, priority 5
Table 3.2 comprises the security parameters with their corresponding weightage and priority values. These weightages are the average of the weightages assigned for the testing activity of each sub-parameter, as illustrated in table 3.1. With these values available for each sub-parameter, the test expert is now able to plan the test scenarios to the desired level of priority and depth.
3.3 Security Test Bed Implementation
In the case of application-level testing, the approach to testing security vulnerabilities should be more logical. Here, logical security deals with the use of computer processing and/or communication capabilities to improperly access information. Second, access control can be divided by the type of perpetrator, such as employee, consultant, cleaning or service personnel, as well as categories of employees.
---
Page: 8 / 10
---
The type of test to be conducted will vary with the condition being tested and can include:
• Determination that the resources being protected are identified and that access is defined for each resource. Access can be defined for a program or an individual.
• Evaluation as to whether the designed security procedures have been properly implemented and function in accordance with the specifications.
• Unauthorized access can be attempted in online systems to ensure that the system can identify and prevent access by unauthorized sources.
Most security test scenarios are oriented to test the system in abnormal conditions. This throws up the challenge of simulating the real-life conditions that the test scenarios require. There are various tools that can be used to test the various vulnerabilities the system may have. The choice of such a tool is determined by the extent of vulnerability the tool is able to test, whether customization of the tool is possible, and whether it allows testing of the system using both the external and the internal views. The local users constitute the internal view and the external users constitute the external view. There are various programming tools that can be programmed to simulate the various access control breaches that can happen.
4 Interpreting Security Test Reports
Once tests have been executed and security gaps have been identified, the gaps need to be analyzed in order to suggest improvements. The reports need to be validated as many times as possible. The reports may be misleading, and the actual loopholes might be elsewhere or might not exist at all. Security test reports are like X-ray reports: a security expert is essential to decipher the vulnerabilities reported and bring out the actual problem report. There are various security reporting tools.<|endoftext|>
---
Page: 9 / 10
---
While selecting a security reporting tool, the following parameters have to be taken into consideration:
1. The extent of vulnerability the tool is able to report.
2. Customization of the tool should be possible.
3. Testing of the system using both the external as well as the internal views.
Once the analysis of the report is completed, a list of vulnerabilities and the set of security features that are working is generated. The list of vulnerabilities is then classified. The classification level of a vulnerability is determined by the amount of risk involved in the security breach and the cost of fixing the defect. Once the classification is done, the vulnerability is fixed and retested.
5 Conclusions
A parametric approach to security testing not only works very well for security requirement capture, effort estimation and task planning, but also for testing different levels of security requirements by different components within the same application. This approach to requirements gathering comes in really handy in capturing those security requirements which cannot be captured by traditional means. This approach enables one to adopt the parametric matrix at each and every stage of security testing and improve the values in the matrix after each execution of a project. Due to this capability of incorporating experiential data, the parametric approach can provide the most effective mechanism for security testing of Internet applications.
6 Disclaimer
The authors of this report gratefully acknowledge Infosys for their encouragement in the development of this research. The information contained in this document represents the views of the author(s), and the company is not liable to any party for any direct or indirect consequential damages.<|endoftext|>
---
Page: 10 / 10
---
References
1. Singh, A.K.: E-Commerce Security. Proc. of National Seminar on E-Commerce & Web Enabled Applications, Harcourt Butler Technological Institute, Kanpur, India (Mar. 2000).
2. Singh, A.K., Subraya, B.M.: Modelling Security at Requirements Stage in E-Commerce Projects. Proc. of Information Security Solutions Europe Conference (ISSE 2000), Barcelona, Spain (Sept. 2000).
3. Microsoft Product Security – http://www.microsoft.com/ntserver/info/securitysummary.htm
4. Sindre, G. (Norwegian Univ. of Sci. and Tech.), Opdahl, A.L. (Univ. of Bergen, Norway): Eliciting Security Requirements by Misuse Cases.
© QAI India Ltd, 3rd Annual International Software Testing Conference, 2001. No use for profit permitted.<|endoftext|>
***
|
# Infosys Whitepaper
Title: Achieving Order through Chaos Engineering: A Smarter Way to Build System Resilience
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 8
---
WHITE PAPER ACHIEVING ORDER THROUGH CHAOS ENGINEERING: A SMARTER WAY TO BUILD SYSTEM RESILIENCE {{ img-description : a green plant growing through dirt and rocks, in the style of explosive pigmentation, 500–1000 ce, rasquache, softly organic, organic, malick sidibé, marguerite blasingame -- }} Abstract Digital infrastructure has grown increasingly complex owing to distributed cloud architectures and microservices. More than ever before, it is increasingly challenging for organizations to predict potential failures and system vulnerabilities. This is a critical capability needed to avoid expensive outages and reputational damage. This paper examines how chaos engineering helps organizations boost their digital immunity. As a leading quality engineering approach, chaos engineering provides a systematic, analytics-based, test-first, and well- executed path to ensuring system reliability and resilience in today’s disruptive digital era.
---
Page: 2 / 8
---
Introduction
Digital systems have become increasingly complex and interdependent, leading to greater vulnerabilities across distributed networks. There have been several instances where a sudden increase in online traffic or unforeseen cyberattacks have caused service failures, adversely impacting organizational reputation, brand integrity, and customer confidence. Such outages have a costly domino effect, resulting in revenue losses or, in some cases, regulatory action against the organization. Thus, enterprises must implement robust and resilient quality engineering solutions that safeguard them from potential threats and help overcome these challenges. This is where 'chaos engineering' comes in.<|endoftext|>Chaos engineering is a preventive measure that tests failure scenarios before they have a chance to grow and cause downtime in live environments. It identifies and fixes issues immediately by recognizing system weaknesses and how systems behave during an injected failure. Through chaos engineering, organizations can establish mitigation steps to safeguard end users from negative impact and build confidence in the system's capacity to withstand highly variable and destructive conditions.<|endoftext|>Chaos Engineering – A Boost to Digital Immunity
System resilience is about how promptly a system can recover from disruption. Chaos engineering is an experimentative process that deliberately disrupts the system to identify weak spots, anticipate failures, predict user experience, and rectify the architecture. It helps engineering teams redesign and restore the organization's infrastructure and make it more resilient in the face of any crisis. Thus, it builds confidence in system resiliency by running failure experiments to generate random and unpredictable behaviour. Despite its name, chaos engineering is far from chaotic. It is a systematic, data-driven technique of conducting experiments that use chaotic behaviour to stress systems, identify flaws, and demonstrate resilience. System complexity and rising consumer expectations are two of the biggest forces behind chaos engineering. As systems become increasingly feature-rich, changes in system performance affect system predictability and service outcomes, which in turn impact business success.<|endoftext|>{{ img-description : a group of hands putting pieces of wooden gears together, in the style of bold structural designs, meticulous detailing (pin background) }}
---
Page: 3 / 8
---
How Chaos Engineering is Different from Traditional Testing Practices • Performance testing – It baselines application performance under a defined load in favorable environmental conditions. The main objective is to check how the system performs when the application is up and running without any severe functional defects in an environment comparable to the production environment. The potential disruptors uncovered during the performance tests are due to certain load conditions on the application. • Disaster recovery testing – This process ensures that an organization can restore its data and applications to continue operations even after critical IT failure or complete service disruption.<|endoftext|>• Chaos testing – During the chaos test, the application under normal load is subjected to known failures outside the prescribed boundaries with minimum blast radius to check if the system behaves as expected. Any deviation from expectations is noted as an observation and mitigation steps are prepared to rectify the deviation.<|endoftext|>Quality assurance engineers find chaos testing to be more effective than performance and disaster recovery testing in unearthing latent bugs and identifying unanticipated system weaknesses.<|endoftext|>{{ img-description : a group of red jet planes flying through the sky, in the style of keos masons, expert draftsmanship, photo taken with provia, performance, wimmelbilder, grid, anglocore (pin background) }}
---
Page: 4 / 8
---
5-step Chaos Engineering Framework Much like a controlled injection, implementing chaos engineering calls for a systematic approach. The five-step framework described below, when ‘injected’ into an organization, can handle defects and fight system vulnerabilities. Chaos engineering gives organizations a safety net by introducing failures in the pre-production environment, thereby promoting organizational learning, increasing reliability, and improving understanding of complex system dependencies. 1. Prepare the process Understand the end-to-end application architecture. Inform stakeholders and get their approval to implement chaos engineering. Finalize the hypothesis based on system understanding.<|endoftext|>2. Set up tools Set up and enable chaos test tools on servers to run chaos experiments. Enable system monitoring and alerting tools. Use performance test tools to generate a steady load on the system under attack. Additionally, a Jenkins CI/CD pipeline can be set up to automate chaos tests.<|endoftext|>3. Run chaos tests Orchestrate different kinds of attacks on the system to cause failures. Ensure proper alerts are generated for the failures and sent to the right teams to take relevant actions.<|endoftext|>4. Analyze the results Analyze the test results and compare these with the expectations set when designing the hypothesis. Communicate the findings to the relevant stakeholders to make system improvements.<|endoftext|>5. Run regression tests Repeat the tests once the issues are fixed and increase the blast radius to uncover further failures.<|endoftext|>This step-by-step approach executes an attack plan within the test environment and applies the lessons/feedback from the outcomes, thereby improving the quality of production systems and delivering tangible value to enterprises.<|endoftext|>{{ img-description : three workers with laptops looking at their company's finances, in the style of sabattier filter, frédéric fiebig, photo taken with provia, back button focus, aquirax uno, handheld, interactive }}
---
Page: 5 / 8
---
Examples of Chaos Engineering Experiments A chaos engineering experiment or a chaos engineering attack is the process of inducing attacks on a system under an expected load. An attack involves injecting failures into a system in a simple, safe, and secure way. There are various types of attacks that can be run against infrastructure. This includes anything that impacts system resources, delays or drops network traffic, shuts down hosts, and more. A typical web application architecture can have four types of attacks run on it to assess application behavior: • Resource attacks – Resource attacks reveal how an application service degrades when starved of resources like CPU, memory, I/O, or disk space • State attacks – State attacks introduce chaos into the infrastructure to check whether the application service fails or whether it handles it and how • Network attacks – Network attacks demonstrate the impact of lost or delayed traffic on the application. It is done to test how services behave when they are unable to reach any one of the dependencies, whether internal or external • Application attacks – Application attacks introduce sudden user traffic on the application or on a particular function. It is done to test how services behave when there is sudden rise in the user traffic due to high demand.<|endoftext|>{{ img-description:two men standing together at a computer in a factory, in the style of lively tableaus, innovating techniques, precise craftsmanship, spot metering, heidelberg school, dynamic brushwork vibrations, curved mirrors }}
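As a toy illustration of the resource-attack category described above, the sketch below burns a fixed number of CPU cores for a bounded period so that the blast radius stays small; real experiments would normally be driven through purpose-built chaos tooling rather than a hand-rolled script, and the core count and duration here are arbitrary example values.

```python
import multiprocessing
import time

def burn_cpu(stop_at: float) -> None:
    """Spin until the deadline so one core stays fully busy."""
    while time.time() < stop_at:
        pass

def cpu_attack(cores: int = 2, duration_s: int = 60) -> None:
    """Starve the host of `cores` CPUs for a bounded period (the blast radius)."""
    stop_at = time.time() + duration_s
    workers = [multiprocessing.Process(target=burn_cpu, args=(stop_at,))
               for _ in range(cores)]
    for worker in workers:
        worker.start()
    for worker in workers:
        worker.join()

if __name__ == "__main__":
    # Run only against a test host, with monitoring and alerting switched on.
    cpu_attack(cores=2, duration_s=60)
```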
---
Page: 6 / 8
---
GameDay Concept {{ img-description : people standing around table while looking at notes, in the style of complex layering, blueprint }} GameDay is an advanced concept of chaos engineering. It is organized by the chaos test team to practice chaos experiments, test the incident response process, validate past outages, and find unknown issues in services. In GameDay, a mock war room is set up and the calendar of all stakeholders is blocked for up to 2-4 hours. One or more chaos experiments are run on the system or service to observe the impact, and all technical outcomes are discussed (Figure 2 – GameDay simulation approach).<|endoftext|>The team includes a 'General' who is responsible for conducting the GameDay, a 'Commander' who coordinates with all the participants, 'Observers' who monitor the GameDay tests and validate the deviations (if any), and a 'Scribe' who notes down the key observations.<|endoftext|>
---
Page: 7 / 8
---
Benefits of Chaos Engineering To ignore chaos engineering is to embrace crisis engineering. Proactive QE teams have made chaos engineering a part of their regular operations by exposing their staff to chaos tests and collaboratively experimenting with other business units to refine testing and improve enterprise systems.<|endoftext|>Chaos engineering delivers several benefits such as: • Reduced detection time – Early identification of issues caused due to failures occurring in live environments, making it easy to proactively identify which component may cause issues • Knowing the path to recovery – Chaos engineering helps predict system behavior in case of failure events and thus works towards protecting the system to avoid major outages • Being prepared for the unexpected – It helps chart mitigation steps by experimenting with known system failures in a controlled environment • Highly-available systems – Enables setting alerts and automating mitigation actions when known failures occur in a live environment, thereby reducing system downtime • Improved customer satisfaction – Helps avoid service disruptions by detecting and preventing component outages, thereby enhancing user experience, increasing customer retention, and improving customer acquisition Chaos engineering brings about cultural changes and maturity in the way an enterprise designs and develops its applications. However, its success calls for strong commitment from all levels across the organization.<|endoftext|>{{ img-description : different tree varieties on a blue background, in the style of vray tracing, calming symmetry, multilayered dimensions, eco-kinetic }}
---
Page: 8 / 8
---
© 2023 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected Conclusion System failures can prove very costly for enterprises, making it critical for organizations to focus on quality engineering practices. Chaos engineering is one such practice that boosts resilience, flexibility, and velocity while ensuring the smooth functioning of distributed enterprise systems. It allows organizations to introduce attacks that identify system weaknesses so they can rectify issues proactively. By identifying and fixing failure points early in the lifecycle, organizations can be prepared for the unexpected, recover faster from disruptions, increase efficiency, and reduce cost. Ultimately, it culminates in better business outcomes and customer experience.<|endoftext|>About the
Authors Harleen Bedi Senior Industry Principal Jack Hinduja Lead Consultant Harleen is a Senior IT Consultant with Infosys. She focuses on developing and promoting IT offerings for quality engineering based on emerging technologies such as AI, cloud, big data, etc. Harleen builds, articulates, and deploys QE strategies and innovations for enterprises, helping clients meet their business objectives.<|endoftext|>Jack Hinduja is a Lead Consultant at Infosys with over 15 years of experience in the telecom and banking sectors. He has led quality assurance and validation projects for enterprises across the globe. Jack is responsible for driving transformation in digital quality assurance and implementing performance and chaos engineering practices in various enterprises.
***
|
# Infosys Whitepaper
Title: Improve Software Quality and Accelerate Delivery by Adopting Service Virtualization Wisely
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 4
---
VIEW POINT IMPROVE SOFTWARE QUALITY AND ACCELERATE DELIVERY BY ADOPTING SERVICE VIRTUALIZATION WISELY -Manoj Aggarwal, Delivery Manager
---
Page: 2 / 4
---
Context
As business applications moved from tightly coupled solutions to service-based distributed architectures, a new range of business and technical possibilities opened up. Services-based architecture enables IT solution providers to expose their business solutions to consumers in a loosely coupled, on-call consumer/provider model. Within the organization's landscape, it provides enterprise application teams with a mechanism to deploy reusable business functions as services for internal consumption.<|endoftext|>Though services have improved the software delivery process, there are some key lessons to be learnt. Business applications depend on SOA-based services for critical business functions. This makes developers and testers realize that knowledge of service behavior and service availability are the key factors that will define the success and quality of the solutions developed by leveraging these services. To address these dependencies, developers started 'mocking and stubbing' services for development and unit testing, while testing teams depended heavily on proactively blocking the services environment to ensure availability of services and data for testing. Both these approaches led to increased effort and timelines for development and testing. However, with software development moving to shorter cycles in the Agile and DevOps world and services becoming increasingly complex, these service dependencies became a key obstacle to faster delivery, especially in situations involving parallel development.<|endoftext|>Drawing from the hardware virtualization experience, virtualization of services evolved into an industry-standard practice that provided both developers and testers with a realistic approach to handling service dependencies. Developers managed to integrate code with the virtualized services for 'live-like' code integration in the development environment, enabling them to test and capture potential code issues early. Application testers benefited from reduced environment and test data dependencies, resulting in faster execution and effective leveraging of automated testing scripts.<|endoftext|>When To Go for Service Virtualization
As awareness of service virtualization and its associated benefits percolated to IT teams, both providers and consumers of services started to deploy virtualization solutions at a brisk pace. A fair number of implementations delivered significant business benefits ranging from 20-30% infrastructure cost reduction and 10-15% reduction in delivery timelines to 15-20% reduction in defect slippage. However, a significant majority of virtualization solutions continued to struggle with justifying the investments in their development and maintenance.<|endoftext|>At Infosys, we believe a holistic analysis of the service consumption pattern – parameters such as the service user base, data requirements, and cost of consumption – helps decide the best fit for deploying a service virtualization solution. One more factor that will increasingly define virtualization deployment is the software development methodology followed by IT teams.<|endoftext|>External Document © 2018 Infosys Limited
---
Page: 3 / 4
---
Service virtualization works best for service providers in a number of instances, including the following:
• Services are yet to be developed and consumers need them for their development / testing to meet rollout timelines
• The services consumer base is very high and availability of test data / environment is a challenge
• Maintaining a live services environment is expensive
• Services behavior is fairly stable and test data / scenarios can be covered with a limited data set
Service virtualization works best for service consumers in a few instances, including the following:
• The services provider cannot provide services for the development / testing team
• The provider does not provide a live / virtualized service economically
• Availability of services and data in the test environment is a challenge
• A service is frequently needed by development teams due to the volatile nature of the consuming application
It is not recommended for either consumer or provider in the following instances:
• Services are readily available and inexpensive to maintain
• Services behavior changes very frequently
• Services need live data for testing and development
• Services have complex business logic with volatile test data
External Document © 2018 Infosys Limited
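Where virtualization is a good fit, a virtual service can be as simple as an HTTP endpoint that returns canned responses matching the real provider's contract. The sketch below is a minimal stand-in built on Python's standard library; the path, payload, and port are hypothetical and would be replaced by the recorded behavior of the actual service.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned payload imitating a hypothetical downstream "customer" service.
CANNED_RESPONSE = {"customerId": "C-1001", "status": "ACTIVE", "creditLimit": 5000}

class VirtualService(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/customers/"):
            body = json.dumps(CANNED_RESPONSE).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Point the application under test at http://localhost:9090 instead of the live provider.
    HTTPServer(("localhost", 9090), VirtualService).serve_forever()
```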
---
Page: 4 / 4
---
Service Virtualization Approach
Infosys recommends a three-step process for service virtualization, as depicted below:
1. Services identification & solution definition – Analyze the architecture landscape; identify constrained systems; formulate a high-level strategy; perform tool fitment analysis; conduct feasibility analysis.
2. Build & deploy – Document the virtual service design; build virtual services; test virtual services to ensure completeness and correctness; deploy and publish virtual endpoints.
3. Maintain services (heal services) – Make continuous improvements and enhancements to support new requirements; republish new versions and refresh test data; decommission end-of-life virtualized services.
Conclusion
Service virtualization has evolved into a key component of the software development lifecycle. Effective deployment of service virtualization can bring immense benefits through improved code quality and faster delivery. However, not all services need to be virtualized, and project teams should plan a switch to live services at an appropriate juncture to ensure applications are tested in the 'LIVE' environment and a reasonable balance is maintained between the real and virtual world of services.<|endoftext|>© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
***
|
# Infosys Whitepaper
Title: Do more with less in software testing
Author: Infosys Limited
Format: PDF 1.6
---
Page: 1 / 4
---
WHITE PAPER DO MORE WITH LESS IN SOFTWARE TESTING Abstract Faced with pressure to deliver more despite ever-shrinking budgets and shorter timelines, most companies struggle to balance the cost of innovation with business demands. Testing is a critical area where neither speed nor quality of output can be compromised as this leads to negative business impact. This paper explains some cost-effective strategies that enable testing organizations to improve efficiency within their testing teams while ensuring high-quality output.<|endoftext|>Indumathi Devi
---
Page: 2 / 4
---
External Document © 2018 Infosys Limited Introduction Even as the demands for agile software increase exponentially, testing budgets continue to shrink. Software testing teams within most organizations struggle to deliver quality software in shorter timelines and tighter budgets. Further, most software tests tend to reside in silos making integration, collaboration and automation challenging. Thus, organizations need innovative testing solutions and strategies to balance quality, speed and cost.
---
Page: 3 / 4
---
External Document © 2018 Infosys Limited Agile testing solutions and strategies 1. Test with an end-user mindset The job of testing goes beyond checking software against preset requirements or logging defects. It involves monitoring how the system behaves when it is actually being used by an end-user. A common complaint against testers is that they do not test software from the perspective of business and end-users. Effective risk- based testers are those who understand the system’s end-users and deliver value- added testing services that ensure quality products and meet clients’ expectations. To do this, testers must evaluate products by undertaking real business user journeys across the system and test commonly used workflows in short testing windows. By mimicking real user journeys, such testers identify higher number of critical production defects.<|endoftext|>2. Empower cross functional teams Agile and DevOps methodologies in software testing are forcing teams to work together across the software development lifecycle (SDLC). Engendering test independence is not about separating testing from development as this can lead to conflicts between developers and testers. Most teams have a polarized dynamic where testers search for defects and must prove how the program is erroneous while programmers defend their code and applications. Cross-functional teams eliminate such conflict by gathering members with different specializations who share accountability and work toward common goals. For instance, ‘testers’ are simply team members with testing as their primary specialization and ‘programmers’ are those within the team who specialize in coding. This team structure encourages people to work with a collaborative mindset and sharpen their expertise. Such teams can be small, with as few as 4-8 members who are responsible for a single requirement or part of the product backlog. Cross-functional teams provide complete ownership and freedom to ensure high-quality output – an important step to realizing the potential of agile. 3. Automate the automation Running an entire test suite manually is time-consuming, error-prone and, often, impossible. While some companies are yet to on-board agile and DevOps capabilities, others have already integrated the practice of continuous integration (CI) and continuous delivery (CD) into their testing services. Irrespective of the level of DevOps maturity, CI/CD will provide only limited value if not paired with the right kind and degree of testing automation. Thus, organizations need a robust, scalable and maintainable test automation suite covering the areas of unit, API, functional, and performance testing. Automated testing saves effort, increases accuracy, improves test coverage, and reduces cycle time. To ensure automation success, organizations must focus on: • Automating the right set of tests, particularly business-critical end-user journeys and frequently used workflows • Integrating various components that may change continuously and need to be regressed frequently • Automating data redundant tests • Using the right set of automation tools and frameworks • Moving beyond just the user interface and automating unit, APIs and non- functional tests • Continuous maintenance and use of automated test suite On its own, test automation is important. However, when automation is integrated with a CI/CD pipeline to run every time new code is pushed, the benefits in time, cost and quality are multiplied.
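To give a feel for the kind of automation described above, here is a minimal API-level test sketch that could run on every commit in a CI/CD pipeline; the base URL, endpoints, and expected status codes are assumptions standing in for a real application's business-critical journeys.

```python
# test_user_journeys.py -- intended to be run with `pytest` inside the CI/CD pipeline.
import pytest
import requests

BASE_URL = "https://qa.example.com/api"   # hypothetical test-environment endpoint

@pytest.mark.parametrize("sku, expected_status", [
    ("SKU-100", 200),              # a product that should exist in the test data
    ("SKU-DOES-NOT-EXIST", 404),   # negative path for the same journey
])
def test_product_lookup(sku, expected_status):
    """Frequently used end-user journey: look up a product before checkout."""
    response = requests.get(f"{BASE_URL}/products/{sku}", timeout=10)
    assert response.status_code == expected_status

def test_checkout_rejects_empty_cart():
    """Business-critical negative path: an empty cart must not be charged."""
    response = requests.post(f"{BASE_URL}/checkout", json={"items": []}, timeout=10)
    assert response.status_code == 400
```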
---
Page: 4 / 4
---
© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected Conclusion About the author In a fast-paced and evolving digital world, companies want their IT partners to do more with less. When it comes to software testing, this places heavy pressure on software testers who are required to push high-quality code faster at lower cost. To streamline software testing, organizations need an approach where their testers adopt an end-user mindset by testing real user journeys and critical business transactions. They must also create efficient cross-functional teams that collaborate to achieve common goals and deliver value-added testing services. Finally, automating different layers of testing and practicing CI/CD will facilitate continuous testing and reduce time-to-market. These cost-effective strategies will help software testing professionals improve productivity and deliver more with less.<|endoftext|>Indumathi Devi is a Project Manager at Infosys with over 15 years of experience in software testing. She has effectively executed multiple software testing projects. Working in different domains and technology stacks, she has assisted numerous clients in implementing robust software testing solutions for manual as well as automated testing.<|endoftext|>
***
|
# Infosys Whitepaper
Title: Solving the Test Data Challenge to Accelerate Digital Transformation
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 12
---
WHITE PAPER SOLVING THE TEST DATA CHALLENGE TO ACCELERATE DIGITAL TRANSFORMATION Abstract Organizations are increasingly adapting to the need to deliver products and services faster while continuously responding to market changes.<|endoftext|>In the age of mobile apps, test automation is not new. But traditional test data management (TDM) approaches are unable to help app development teams address modern delivery challenges. Companies are increasingly struggling to keep up with the pace of development, maintain quality of delivery, and minimize the risk of a data breach.<|endoftext|>This white paper illustrates the need for a smart, next-gen TDM solution to accelerate digital transformation by applying best practices in TDM, zero- trust architecture, and best-in-class test data generation capabilities. {{ img-description : two people looking out of a glass door in a building, in the style of light navy and dark azure, security camera art, webcore, uniformly staged images, selective focus, grainy, masculine (to left) }}
---
Page: 2 / 12
---
External Document © 2021 Infosys Limited
---
Page: 3 / 12
---
External Document © 2021 Infosys Limited Traditional Test Data Management.........................................................................................4 Why the New Normal was not Enough?.................................................................................4 Five Key Drivers and Best Practices in Test Data Management.........................................5 Future-proof Test Data through Next-gen TDM Innovation..............................................8 Accelerate through Next-gen TDM Reference Architecture...............................................9 The Way Forward - Building Evolutionary Test Data for your Enterprise...................... 11 About the
Authors................................................................................................................... 12 Figure 1. Key focus areas emerging in test data management.........................................4 Figure 2. Key drivers and best practices in TDM...................................................................5 Figure 3. Zero trust architecture..............................................................................................6 Figure 4. Stakeholder experience............................................................................................7 Figure 5. Focus areas of Infosys Next-Gen TDM....................................................................8 Figure 6. Infosys Next-Gen TDM reference architecture.....................................................9 Figure 7. Contextual test data and its different formats.................................................. 10 Table of Contents Table of Figures
---
Page: 4 / 12
---
External Document © 2021 Infosys Limited Traditional Test Data Management Test data management (TDM) should ensure that test data is of the highest possible quality and available to users. In the digital age, managing test data using traditional TDM practices is challenging due to their inability to accelerate cloud adoption, protect customer data, provide reliable data, avoid data graveyards, ensure data consistency, and automate and provision test data.
Why the New Normal was not Enough? While the ‘new normal’ became a catchword in 2021, in the world of testing this ‘normal’ was not effective for many organizations. The pressure to adapt to changing customer expectations, new technology trends, changing regulatory norms, increased cybersecurity threats, and scarcity of niche skills has raised many challenges for organizations. In light of this, many are wondering whether they should revisit their test data strategy.
As time-to-market for products and services becomes critical, test data generation and provisioning emerge as bottlenecks to efficiency. Further, test data management is often the weak link for organizations looking to accelerate digital transformation through continuous integration and delivery. High-quality test data is a prerequisite to train machine learning (ML) models for accurate business insights and outcome predictions. To build a competitive difference, organizations today are investing in three key focus areas in test data management (refer to Figure 1):
• New business models – With a strong focus on customer experience, organizations must adopt new business models and accelerate innovation. There is a need to generate data that can be controlled and is realistic as well as accurate to meet real-world production needs.
• Hyper-productivity – Automation and iterative agile processes push the need for better testing experiences with faster and more efficient data provisioning, allowing organizations to do more with less.
• New digital workplace – Millions of employees are working from home. Organizations must focus on building a secure, new-age digital workplace to support remote working.
Figure 1. Key focus areas emerging in test data management (not reproduced here) shows the headwinds – changing customer expectations, new technology trends, regulatory changes, cybersecurity threats, and skill scarcity – alongside the three focus areas: new business models (multi-cloud environments), hyper-productivity (agile), and a new digital workplace for employees, partners, customers, and the community, underpinned by data privacy and security.
---
Page: 5 / 12
---
External Document © 2021 Infosys Limited Five Key Drivers and Best Practices in Test Data Management Companies are increasingly struggling to keep up with the pace of development, maintain quality of delivery, and achieve absolute data privacy. On-demand synthetic test data is a clear alternative to the traditional approach of sub-setting, masking, and reserving production data for key business analytics and testing. In this context, three key questions to ask are: 1. What are the drivers and best practices to be considered while building a test data strategy? 2. How can CIOs decide the right direction for their test data strategy? 3. What are the trade-offs in test data management? There are five elements – cost, quality, security and privacy, tester experience, and data for AI – that drive a successful test data management strategy. Understanding the best practices around these will guide CIOs in making the right decision.
Figure 2. Key drivers and best practices in TDM (not reproduced here) highlights accelerating next-gen TDM by adapting for agile and DevOps, self-service data provisioning, and increased test data automation. {{ img-description : a man in glasses is holding a tablet in front of a room full of server racks, in the style of light gray and indigo, uniformly staged images, restored and repurposed, ue5, traditional }}
---
Page: 6 / 12
---
External Document © 2021 Infosys Limited The key drivers, their impact on test data strategy, and the associated best practices are:
1. Cost – What is the return on investment (ROI) and the acceptable investment to create, manage, process and, most importantly, dispose of test data? Production data must be collected, processed, retained, and disposed of. The processing and storage cost must offset the investment in TDM products. Procurement, customization, and support costs also need to be considered.
• Best practice: Test data as a service – test data on cloud with a subscription for testers can lower the cost of provisioning full-scale TDM.
2. Quality – Do we have the right quality of data? Can we get complete control over the data? Can we generate test data in any format? Testers have very limited control over the data provided by production. The test data is usually a subset of production data and cannot cater to all use cases, including negative and other edge cases. Further, there is a need to generate electronic data interchange (EDI) files, images, and even audio files for some use cases.
• Best practice: A TDM suite can help build a subset of data designed with realistic and referentially intact test data from across distributed data sources with minimal cost and administrative effort.
• Best practice: Synthetic data generators should have the breadth to cover key data types and file formats along with the ability to generate high-quality data sets, whether structured or unstructured, across images, audio files, and file formats.
3. Security and privacy – Do we have the right data privacy controls while accessing data for testing? How do we handle a data privacy breach? The focus on privacy and security of the data used for testing is increasing. Complying with GDPR and ensuring the right data privacy controls is a catalyst for organizations to move away from using direct production data for testing purposes. There is increased adoption of masking, sub-setting, and synthetic data generation to avoid critical data breaches when using sensitive customer, partner, or employee data.
• Best practice: Zero trust architecture provides a data-first approach, which is secure by design for each workload and identity-aware for every persona in the test management process, including testers, developers, release managers, and data analysts (see Figure 3).
• Best practice: To ensure security of sensitive information, organizations can create realistic data in non-production environments without exposing sensitive data to unauthorized users. Enterprises can leverage contextual data masking techniques to anonymize key data elements across an enterprise.
Figure 3. Zero trust architecture (not reproduced here) shows production data flowing through sensitive data discovery, masking, data generation, sub-setting, and virtualization into gold copies (production clone, sub-set, and synthetic) held in non-production databases, logs, and files, with self-service provisioning, test environment monitoring, and differential privacy serving developers, testers, release managers, and data scientists/analysts.
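To make the contextual masking best practice above concrete, here is a minimal, hypothetical sketch in Python. The column names, the salt handling, and the masking rules are illustrative assumptions rather than any product's actual implementation; the key idea is that deterministic pseudonymization keeps referential integrity, because the same source value always maps to the same masked value across tables.

```python
import hashlib

SALT = "rotate-this-secret-per-environment"  # assumption: the salt is managed outside the code


def pseudonymize(value: str, prefix: str = "") -> str:
    """Deterministically replace a sensitive value so joins still line up across tables."""
    digest = hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:10]
    return f"{prefix}{digest}"


def mask_email(email: str) -> str:
    """Keep the domain (useful for routing tests), hide the local part."""
    local, _, domain = email.partition("@")
    return f"{pseudonymize(local, 'user_')}@{domain}"


def mask_record(record: dict) -> dict:
    """Apply column-level masking rules to one customer row (columns are illustrative)."""
    masked = dict(record)
    masked["customer_id"] = pseudonymize(record["customer_id"], "CUST_")
    masked["email"] = mask_email(record["email"])
    masked["ssn"] = "***-**-" + record["ssn"][-4:]  # partial masking keeps the original format
    return masked


if __name__ == "__main__":
    row = {"customer_id": "C1001", "email": "jane.doe@example.com", "ssn": "123-45-6789"}
    print(mask_record(row))
```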
---
Page: 7 / 12
---
4. Tester experience – Are we building the right experience for the tester? Is it easy for testers to get the data they need for their tests? Customers struggle to meet the agile development and testing demands of iterative cycles. Testers are often forced to manually modify production data into usable values for their tests. Teams struggle to effectively standardize and sub-set the production data that has been masked and moved to test environments.
• Best practice: Test data automation puts the focus on tester experience by enabling a streamlined and consistent process with automated workflows for self-service test data provisioning.
• Best practice: Test data virtualization allows applications to automatically deliver virtual copies of production data for non-production use cases. It also reduces the storage space required.
5. Data for AI – Do we understand the insights generated by the data? The probabilistic nature of AI makes it very complex to generate test data for training AI models.
• Best practice: Adopt mechanisms for data discovery, exploration, and due diligence. Data resides in different formats across systems. Enterprises must identify patterns across multiple systems and file formats and provide a correct depiction of the data types, locations, and compliance rules according to industry-specific regulations. They should also focus on identifying patterns, defects, sub-optimal performance, and underlying risks in the data.
• Best practice: For data augmentation, analysts and data scientists can be provided with datasets for analysis. These datasets must be resistant to reconstruction through differential privacy for effective data privacy protection.
External Document © 2021 Infosys Limited
Figure 4. Stakeholder experience (not reproduced here) captures typical pain points: a developer unable to get the right data for development; an IT head unable to provision and create a gold copy for testing; a tester who needs the right test data without sensitive data to finish testing; a scrum master who could not provision the right data against critical business requirements; a business sponsor who wants zero defects and no data security breaches in product development; and a customer prepared to move IT delivery to another consulting firm if data privacy and security cannot be handled. {{ img-description : two people are looking at computers in a server room, in the style of miscellaneous academia, light silver and indigo, glazed surfaces, iso 200, rubens, weathercore, traditional craftsmanship }}
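As a hedged illustration of "resistant to reconstruction through differential privacy", the sketch below adds calibrated Laplace noise to an aggregate count before it is handed to analysts, which is the basic mechanism of epsilon-differential privacy. The epsilon value, the record layout, and the query are assumptions for illustration; a production setup would use a vetted library and track a privacy budget.

```python
import numpy as np


def dp_count(records, predicate, epsilon: float = 1.0) -> float:
    """Release a count with epsilon-differential privacy; the sensitivity of a count query is 1."""
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)  # Laplace(0, sensitivity / epsilon)
    return true_count + noise


if __name__ == "__main__":
    # Illustrative records: 40 retail and 12 corporate customers.
    customers = [{"segment": "retail"}] * 40 + [{"segment": "corporate"}] * 12
    # Analysts see a noisy count, so single rows cannot be pinned down from one release;
    # repeated queries would consume a privacy budget (not tracked in this sketch).
    print(dp_count(customers, lambda r: r["segment"] == "corporate", epsilon=0.5))
```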
---
Page: 8 / 12
---
Future-proof Test Data through Next-gen TDM Innovation
Every organization needs simplified testing models that can support a diverse set of data types. This has never been a higher priority. Infosys Next-Gen TDM supports digital transformation by focusing on nine key areas of innovation (see Figure 5). The offering leverages the latest advances from data science in test data management, giving enterprises the right tools to engineer appropriate test data.
1. Tester user experience – Testers need to assess business and technical requirements from the perspective of testability as well as end users. Infosys Next-Gen TDM provides a framework that includes testers and gives them a 360-degree view of the TDM process.
2. AI-driven data discovery – Modern test data resides on a tower of abstractions, patterns, test data sources, and privacy dependencies. One of the key features of Infosys Next-Gen TDM is smart data discovery of structured and unstructured data using AI. This helps uncover: • Sensitive data (PII/PHI/SPI) to avoid data privacy breaches • Data lineages to build the right contextual data while maintaining referential integrity across child and parent tables
3. Data virtualization – This is needed for organizations to access heterogeneous data sources. Infosys Next-Gen TDM provides a lightweight query engine that enables testers to mine lightweight copies that are protected.
4. Data provisioning – Testing teams face numerous challenges in getting access to the right data. Large enterprises need approvals to access data from business and application owners. Infosys Next-Gen TDM provides an automated workflow for intelligent data provisioning. With this, testers can request data and manage entitlements as well as approvals through a simplified UX.
5. Privacy-preserving synthetic data – It is important to protect personal data residing in the data sources being curated for test data. There is always a risk of personal data being compromised when a large amount of training or testing data is involved, which can result in giving too much access to sensitive information. Improper disclosure of such data can have adverse consequences for a data subject's private information and may put the data subject at greater risk of stalking and harassment. Cybercriminals can also use a data subject's bank or credit card details to degrade the subject's credit rating. Privacy-preserving synthetic data focuses on ensuring that the data is not compromised while maximizing its utility. Differential privacy prevents linkage attacks, which cause records to be re-identified even after being anonymized for testing.
6. Smart augmentation of contextual datasets – Dynamic data can change its state during an application testing process. To generate dynamic data, the tester should be able to input the business rules and build both positive and negative test cases. Infosys Next-Gen TDM provides a configurable rules engine that generates test data dynamically and validates it against changing business rules (see the sketch below).
7. Image and audio file generation – Infosys Next-Gen TDM can create audio files and image datasets for AR/VR testing using deep learning capabilities.
8. Special file formats – Customers need access to special communication formats such as JSON, XML, and SWIFT, or specific ones such as EDI files. Infosys Next-Gen TDM provides templates for generating various file formats.
9. Intelligent automation – Built-in connectors for scheduling the processes of data discovery, protection, and data generation allow testers to model, design, generate, and manage their own test datasets. These connectors include plug-ins to the CI/CD pipeline, which integrate data automation and test automation.
Figure 5. Focus areas of Infosys Next-Gen TDM (not reproduced here) arranges the nine areas – tester UX, AI-driven data discovery, data virtualization, data provisioning, privacy-preserving synthetic data, smart augmentation of training data sets, image and audio generation, special formats (EDI, SWIFT), and intelligent automation – around the model development, training, and testing lifecycle: discover and plan, explore, enrich, and manage.
External Document © 2021 Infosys Limited
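Item 6 describes a configurable rules engine. One way to picture it, purely as a sketch under assumed rules and field names (not the actual Infosys Next-Gen TDM implementation), is a set of business rules that drive both positive and negative record generation and then validate the result.

```python
import random
from dataclasses import dataclass
from typing import Callable


@dataclass
class Rule:
    field: str
    check: Callable[[object], bool]   # the business rule the application enforces
    valid: Callable[[], object]       # generator for a passing value
    invalid: Callable[[], object]     # generator for a failing value


# Illustrative rules for a hypothetical payments record
RULES = [
    Rule("amount", lambda v: 0 < v <= 10_000,
         lambda: round(random.uniform(1, 10_000), 2),
         lambda: random.choice([-5.0, 0.0, 250_000.0])),
    Rule("currency", lambda v: v in {"USD", "EUR", "INR"},
         lambda: random.choice(["USD", "EUR", "INR"]),
         lambda: "XXX"),
]


def generate(case: str) -> dict:
    """Build a positive record, or a negative one that breaks exactly one rule."""
    record = {r.field: r.valid() for r in RULES}
    if case == "negative":
        broken = random.choice(RULES)
        record[broken.field] = broken.invalid()
    return record


def validate(record: dict) -> bool:
    """Re-check a generated record against the same rules (the validation half of the engine)."""
    return all(r.check(record[r.field]) for r in RULES)


if __name__ == "__main__":
    pos, neg = generate("positive"), generate("negative")
    print(pos, validate(pos))   # expected: True
    print(neg, validate(neg))   # expected: False
```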
---
Page: 9 / 12
---
Accelerate through Next-Gen TDM Reference Architecture
As organizations look to deliver high-quality applications at minimum cost, they need a test data management (TDM) strategy that supports both waterfall and agile delivery models. With the rapid adoption of DevOps and an increased focus on automation, there is also increasing demand for data privacy. Enterprises are fast moving from traditional TDM to modern TDM in order to meet the needs of the current development and testing landscape. Infosys Next-Gen TDM focuses on increasing automation and improving the security of test data across cloud as well as on-premises data sources.
Figure 6. Infosys Next-Gen TDM reference architecture (not reproduced here) shows production and non-production data sources (on-premises and on cloud, including logs and files) feeding a Next-Gen TDM layer – data discovery, masking, generation, gold copy, data mining, sub-setting, virtualization, data quality, and differential privacy – exposed through a self-service portal with automated workflows for data reservation, provisioning, generation, and database refresh, and integrated with cloud apps, commercial testing tools, and the CI/CD pipeline to serve testers, developers, release managers, and data scientists across unit, functional, integration, regression, and performance testing.
External Document © 2021 Infosys Limited
---
Page: 10 / 12
---
The focus areas in digital transformation through this approach are:
1. User experience – Infosys Next-Gen TDM focuses on building specific data experiences for each persona, i.e., tester, release manager, developer, and data scientist. Its self-service capabilities offer a simplified, intent-driven design for better data provisioning and generation.
2. Contextual test data generation – There is a library of algorithms that helps teams generate different data types and formats, including images, EDI files, and other unstructured data.
3. Data protection for multiple data sources – Infosys Next-Gen TDM connects to multiple data sources on cloud and on-premises. It provides a framework of reusable components for gold copy creation and sub-set gold copies. Data is masked and protected through a library of algorithms for various data types.
4. Data augmentation – The accuracy of AI and ML algorithms depends on the quality of the training data and the scale of data used. The larger the volume and the more diverse the training data used, the more accurate and robust the model will be. Infosys Next-Gen TDM generates high volumes of data based on a predefined data model, data attributes, and patterns of data variation for training, validating, and testing AI/ML algorithms.
5. Integration through external tools – To enable full-fledged DevSecOps, Infosys Next-Gen TDM has a library of adaptors that connect to the various orchestration tools in the automation pipeline.
Figure 7. Contextual test data and its different formats (not reproduced here) maps data protection techniques – generalization, perturbation, and differential privacy with resistance to reconstruction – to the formats generated: structured data for analytics, pre-set files such as logs and chat transcripts, unstructured data such as images for UX testing and AR/VR kits, and communication formats such as XML, JSON, and SWIFT.
External Document © 2021 Infosys Limited
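To illustrate the "one contextual record, many formats" idea, the sketch below renders a single synthetic order as JSON and as a simplified, made-up pipe-delimited segment. The segment layout is an assumption for illustration only and does not follow any real EDI X12 or SWIFT specification.

```python
import json
from datetime import date


def synthetic_order(order_id: int) -> dict:
    """A single canonical test record; field values are deliberately fake."""
    return {
        "order_id": f"ORD{order_id:06d}",
        "customer_id": f"CUST{order_id % 500:04d}",
        "order_date": date(2021, 1, 1).isoformat(),
        "amount": 199.99,
        "currency": "USD",
    }


def to_json(order: dict) -> str:
    """JSON view for API-level tests."""
    return json.dumps(order)


def to_edi_like(order: dict) -> str:
    """Simplified, illustrative flat-file segment (not a real EDI standard)."""
    return "|".join([
        "ORD",
        order["order_id"],
        order["customer_id"],
        order["order_date"].replace("-", ""),
        f'{order["amount"]:.2f}',
        order["currency"],
    ])


if __name__ == "__main__":
    order = synthetic_order(42)
    print(to_json(order))
    print(to_edi_like(order))
```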
---
Page: 11 / 12
---
The Way Forward: Building Evolutionary Test Data for Your Enterprise Production and synthetic test data can coexist in a testing environment, either to optimize their roles in various testing operations or as part of a transition from one to the other. This may require the organization to think differently about test data and develop a roadmap for long-term continuous testing. To solve test data challenges, enterprises should focus on using evolutionary architecture to build contextual test data using a three-pronged strategy: • AI-assisted data prep with fitness functions – Focus on identifying the key dimensions of data that need to be generated for testing. Enhance feature engineering across multi-role teams to build the key fitness functions and models for data generation across each data domain and data type (see the sketch below). • Focus on incremental change – Help data architects focus on incremental change by defining each stage of test data management based on the tester's experience. This will enable testers to selectively pick the right data for different deployment pipelines running on different schedules. Partitioning test data around operational goals allows testers to track the health and operational metrics of the test data. • Immutable test data suite – Focus on building an immutable test data environment with best-of-breed tools and in-house innovation to ensure the right tool choice for test data generation. This helps enterprises choose the tools best suited to their needs, thereby optimizing total cost of ownership (TCO). External Document © 2021 Infosys Limited {{ img-description : two businesspeople standing in a rack of servers, in the style of found-object-centric, night photography, metalworking mastery, studyplace, ferrania p30, security camera, 32k uhd (to right) }}
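A fitness function for test data can be read quite literally: an automated score that tells you whether a generated data set still covers the dimensions testers care about. The dimensions, field names, and weights below are illustrative assumptions.

```python
def coverage_fitness(records, required_values, min_negative_ratio=0.1):
    """Score a candidate test data set between 0 and 1 against two illustrative dimensions:
    (1) every required categorical value appears at least once, and
    (2) a minimum share of records is flagged as negative cases."""
    score = 0.0
    # Dimension 1: categorical coverage (e.g. every country code present)
    seen = {r["country"] for r in records}
    score += 0.5 * (len(seen & required_values) / len(required_values))
    # Dimension 2: enough negative cases to exercise error handling
    negatives = sum(1 for r in records if r.get("is_negative", False))
    score += 0.5 * min(1.0, (negatives / max(len(records), 1)) / min_negative_ratio)
    return score


if __name__ == "__main__":
    data = [
        {"country": "US", "is_negative": False},
        {"country": "DE", "is_negative": True},
        {"country": "IN", "is_negative": False},
    ]
    # A pipeline could refuse to promote a data set whose fitness drops below a threshold.
    print(coverage_fitness(data, required_values={"US", "DE", "IN", "JP"}))
```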
---
Page: 12 / 12
---
© 2021 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
About the Authors
Avin Sharma – Consultant at Infosys Center for Emerging Technology Solutions (ICETS). He is currently part of the product team of Infosys Enterprise Data Privacy Suite, Data for Digital ICETS. His focus includes product management, data privacy, and pre-sales.
Ajay Kumar Kachottil – Technology Architect at Infosys with over 13 years of experience in test data management and data validation services. He has implemented multiple test data management solutions for various global financial leaders across geographies.
Karthik Nagarajan – Industry Principal Consultant at Infosys Center for Emerging Technology Solutions (ICETS). He has more than 15 years of experience in customer experience solution architecture, product development, and business development. He currently works with the product team of Infosys Enterprise Data Privacy Suite, Data for Digital ICETS, on data privacy, data augmentation, and CX strategy.
***
|
# Infosys Whitepaper
Title: Test data management in software testing life cycle- Business need and benefits in functional, performance, and automation testing
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 12
---
WHITE PAPER TEST DATA MANAGEMENT IN SOFTWARE TESTING LIFE CYCLE- BUSINESS NEED AND BENEFITS IN FUNCTIONAL, PERFORMANCE, AND AUTOMATION TESTING Praveen Bagare (Infosys) and Ruslan Desyatnikov (Citibank)
---
Page: 2 / 12
---
Abstract The testing industry today is looking for ways and means to optimize testing effort and costs. One potential area of optimization is test data management. Testing completeness and coverage depend mainly on the quality of test data. It stands to reason that, without high-quality data, testing assurance is unattainable. A test plan with several comprehensive scenarios cannot be executed unless appropriate data is available to run the scenarios.
The best data is found in production since these are actual entries the application uses. While using production data, it is always prudent to create a sub-set of the data. This reduces the effort involved in test planning and execution and helps achieve optimization. However, live data is not always easily available for testing. Depending on the business, privacy and legal concerns may be associated with using live data. Often the data is not complete and therefore cannot be used for testing. It is best to avoid the use of raw production data to safeguard the business and steer clear of expensive lawsuits. The challenge of TDM lies in obtaining the right data effectively. Before proceeding on this path, we need to find answers to some pertinent questions: Will there be a positive return on the investment? Where do we start implementing test data management (TDM)? Should we start with functional testing or non-functional testing? Can test automation help? The practice of not including TDM steps in the testing life cycle often leads to ignorance of TDM on the part of the testing team. This paper attempts to explain why testers in the functional, non-functional and automation test arenas need the TDM service. We also discuss the test data challenges faced by testers and describe the unparalleled benefits of a successful TDM implementation.
Significance of Test Data Management
TDM is fast gaining importance in the testing industry. Behind this increasing interest in TDM are major financial losses caused by production defects, which could have been detected by testing with the proper test data. Some years ago, test data was limited to a few rows of data in the database or a few sample input files. Since then, the testing landscape has come a long way. Now financial and banking institutions rely on powerful test data sets and unique combinations that have high coverage and drive the testing, including negative testing. TDM introduces a structured engineering approach to the test data requirements of all possible business scenarios. Large financial and banking institutions also leverage TDM for regulatory compliance. This is a critical area for these institutions due to the hefty penalties associated with non-compliance. Penalties for regulatory non-compliance can run into hundreds of thousands of dollars or more. Data masking (obfuscation) of sensitive information and synthetic data creation are some of the key TDM services that can assure compliance.
External Document © 2018 Infosys Limited
---
Page: 3 / 12
---
What is Test Data Management?
Test data is any information that is used as an input to perform a test. It can be static or transactional. Static data containing names, countries, currencies, etc., is not sensitive, whereas data pertaining to Social Security Number (SSN), credit card information or medical history may be sensitive in nature. In addition to the static data, testing teams need the right combination of transaction data sets/conditions to test business features and scenarios. TDM is the process of fulfilling the test data needs of testing teams by ensuring that test data of the right quality is provisioned in suitable quantity, correct format and the proper environment, at the appropriate time. It ensures that the provisioned data includes all the major flavors of data, is referentially intact and is of the right size. The provisioned data must not be too large in quantity, like production data, nor too small to fulfill all the testing needs. This data can be provisioned by synthetic data creation, by production extraction and masking, or by sourcing from lookup tables.
TDM can be implemented efficiently with the aid of well-defined processes, manual methods and proprietary utilities. It can also be put into practice using well-evolved TDM tools such as Datamaker, Optim or others available in the market.
A TDM strategy can be built based on the type of data requirements in the project. This strategy can take the following forms (a small sketch follows after this list):
• Construction of SQL queries that extract data from multiple tables in the databases
• Creation of flat files based on mapping rules, or simple modification or desensitizing of production data or files
• An intelligent combination of all of these
External Document © 2018 Infosys Limited
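As a minimal sketch of the "SQL queries that extract data from multiple tables" strategy, the example below pulls a small, referentially intact slice (customers joined to their orders) from a SQLite database and desensitizes the email column on the way out. The table names, columns, and filtering criterion are assumptions made up for the illustration.

```python
import sqlite3

SUBSET_SQL = """
SELECT c.customer_id, c.name, c.email, o.order_id, o.amount
FROM customers c
JOIN orders o ON o.customer_id = c.customer_id
WHERE c.region = ?            -- filtering criterion that defines the sub-set
LIMIT 100
"""


def extract_subset(db_path: str, region: str):
    """Extract a referentially intact slice and desensitize emails before handing it to testers."""
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(SUBSET_SQL, (region,)).fetchall()
    subset = []
    for customer_id, name, email, order_id, amount in rows:
        subset.append({
            "customer_id": customer_id,
            "name": name,
            "email": f"user{customer_id}@test.invalid",  # simple desensitization
            "order_id": order_id,
            "amount": amount,
        })
    return subset


# Usage (assuming a local SQLite file with the tables above):
# print(extract_subset("prod_copy.db", region="EMEA"))
```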
---
Page: 4 / 12
---
The Challenges of Test Data Sourcing
Some of the most common challenges faced by testing teams while sourcing test data are:
• Test data coverage is often incomplete and the team may not have the required knowledge.
• Clear data requirements with volume specifications are often not gathered and documented during the test requirements phase.
• Testing teams may not have access to the data sources (upstream and downstream).
• Data is generally requested from the development team, which is slow to respond due to other priority tasks.
• Data is usually available in large chunks from production dumps and can be sensitive in nature, have limited coverage or be unsuitable for the business scenarios to be tested.
• Large volumes of data may be needed in a short span of time and appropriate tools may not be at the testing team's disposal.
• The same data may be used by multiple testing teams, in the same environment, resulting in data corruption.
• Review and reuse of data is rarely realized and leveraged.
• Testers may not have knowledge of alternate data creation solutions using a TDM tool.
• Logical data relationships may be hidden at the code level and hence testers may not extract or mask all the referential data.
• Data dependencies or combinations to test certain business scenarios can add to the difficulties in sourcing test data.
• Testers often spend a significant amount of time communicating with architects, database administrators, and business analysts to gather test data instead of focusing on the actual testing and validation work.
• A large amount of time is spent in gathering test data.
• Most of the data creation happens during the course of execution, based on learning.
• If data related to defects is not found during testing, it can pose a major risk to production.
External Document © 2018 Infosys Limited
---
Page: 5 / 12
---
TDM Offers Efficient Solutions and Valuable Benefits
An effective TDM implementation can address most of the challenges mentioned above. The key benefits that a business can gain by leveraging TDM services fall under four headings: superior quality, minimum time, reduced cost, and fewer resources.
• Optimal data coverage is achieved by the TDM team through intelligent tools and techniques based on data analysis strategies
• The TDM service employs a dedicated data provisioning team with agreed service-level agreements (SLAs), ensuring prompt data delivery
• Condensed test design and data preparation effort helps achieve cost savings
• Database or file access provided to the TDM team facilitates data privacy and reuse
• Test data requirements from the TDM team enable the testing team to capture these effectively during the test planning phase. Version-controlled data requirements and test data ensure complete traceability and easier replication of results
• Compact test design and execution cycles can be achieved for reduced time to market
• Minimized test data storage space leads to a reduction in overall infrastructure cost
• Professionals with specialized skills, a sharp focus on test data and access to industry-standard tools contribute to the success of TDM
• Detailed analysis and review of data requirements ensure early identification of issues and resolution of queries
• Automated processes lead to less rework and reduced result replication time
• The TDM team also wears the system architect's hat, thus understanding data flow across systems and provisioning the right data
• Synthetic data can be created from the ground up for new applications
• TDM tools such as Datamaker can speed up scenario identification and creation of the corresponding data sets
• Errors and data corruption can be reduced by including defined TDM processes in the Testing Life Cycle and by adopting TDM tools
• Clear data security policies increase data safety and recoverability
• Well-defined processes and controls for data storage, archival and retrieval support future testing requirements
External Document © 2018 Infosys Limited
---
Page: 6 / 12
---
TDM in Functional Testing
Most challenges mentioned earlier are part of the day-to-day struggle of functional testing. The most commonly seen challenges are: low coverage, high dependency, access limitations, oversized testing environments, and extended test data sourcing timelines. Successful implementation of TDM in functional testing projects can alleviate most of these issues and assure the completeness of testing from the business perspective. In functional testing, TDM is governed by the factors highlighted below.
Coverage: Exposure to all the possible scenarios or test cases needs to be the key driving factor for data provisioning in functional testing. The data provisioned for this testing must cover:
• Positive scenarios (valid values that should make the test case pass)
• Negative scenarios (invalid values that should result in appropriate error handling)
• Boundary conditions (data values at the extremities of the possible values)
• All functional flows defined in the requirement (data for each flow)
Low volumes: Single data sets for each of the scenarios are sufficient for the need. Repetitive test data for similar test cases may not be required and can prove to be a waste of time. This can help reduce the execution time significantly.
High reuse: Some test data such as accounts, client IDs, country codes, etc., can be reused across test cases to keep the test data pool optimized. Static and basic transaction data for an application can be base-lined so that it can be restored or retrieved for maintenance release testing at regular intervals, depending on the release frequency.
Tools or utilities: As utilities have the capacity to create large volumes of the same type of data, they may not be very useful in functional data preparation. If the tool is capable of creating a spectrum of data to meet all the data requirements, and the data can be reused across releases, then it is beneficial to use the utility or tool for TDM implementation.
Optimal Environment Size: Data in the testing environment should be a smart subset (a partial extract of the source data based on filtering criteria) of production or synthetic data. It should comprise data for all testing needs and be of minimal size to avoid hardware and software cost overhead. Sub-setting referentially intact data in the right volume into testing environments is the solution to the problem of oversized, inefficient and expensive testing environments. This is the key to reducing the test execution window and minimizing data-related defects with the same root cause.
Data Requirements Gathering Process: During test case scripting, the test data requirements at the test-case level should be documented and marked as reusable or non-reusable. This ensures that the test data required for testing is clear and well documented. A simple summation of the same type of test data provides the requirement of test data that needs to be provisioned (a small sketch of this summation follows at the end of this section). A sample test data requirement gathering, as per this method, is demonstrated in Appendix I.
Feasibility Check: TDM is suitable for functional testing projects that:
• Spend over 15% of the testing effort on data preparation or data rework
• Use regression test cases which are run repeatedly across releases (so that the test data methodology identified can be reused)
• Indicate that a high data coverage TDM solution can be identified and implemented
External Document © 2018 Infosys Limited
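The "simple summation" of per-test-case data requirements can be automated in a few lines. The requirement strings and reusable flags below mirror the style of the Appendix I illustration, but the exact values are assumptions; in practice the remarks (balance thresholds, open dates) would further qualify each request.

```python
from collections import Counter

# Test-case level data requirements, as captured during test case authoring (illustrative values).
REQUIREMENTS = [
    {"test_case": "Test Case 01", "data": "NAM Bank Account Number", "reusable": True},
    {"test_case": "Test Case 02", "data": "Client ID", "reusable": False},
    {"test_case": "Test Case 03", "data": "NAM Bank Account Number", "reusable": True},
    {"test_case": "Test Case 04", "data": "ASIA Bank Account Number", "reusable": True},
    {"test_case": "Test Case 05", "data": "ASIA Bank Account Number", "reusable": True},
]


def summarize(requirements):
    """A simple summation of the same type of test data gives the provisioning requirement."""
    return dict(Counter(r["data"] for r in requirements))


if __name__ == "__main__":
    # Expected: two NAM accounts, two ASIA accounts, one client ID.
    print(summarize(REQUIREMENTS))
```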
---
Page: 7 / 12
---
TDM in Performance Testing
Test data preparation in performance testing is most impacted by issues like large volume requirements with significant coverage, high data preparation time, limited environment availability and short execution windows. The TDM team, with their tools and techniques, can provide solutions for bulk data generation with quick refresh cycles, ensuring on-time, high-quality data provisioning to address these steep demands.
Test data for performance testing is characterized by:
High volumes of data: Performance testing always needs large volumes of test data by virtue of the way it works. Multiple users are simultaneously loaded onto an application to run a flow of test executions. Hence multiple data sets are required in parallel, in large numbers, based on the workload model.
Quick consumption of test data: Since the load or stress on the application is induced by multiple users, the data provisioned for them is consumed rapidly. This leads to quick exhaustion of large volumes of the test data.
Very short data provisioning cycle: Since the data is consumed very quickly in performance testing, a new cycle of execution requires the data to be replenished before it starts. This implies that the TDM strategy must make sure that the test data can be recreated, extracted again or provisioned in a short period of time, so as to avoid the impact of unavailable data on the testing cycle.
Workload distribution: Performance testing today does not use one type of data repeatedly. If the use cases comprise multiple types of data, with a corresponding percentage for each occurrence, a similar workload model is built for the testing environment. Smart data therefore needs to be provided for a workload model that covers the multiple types, which multiplies the complexity of data creation.
Feasibility Check: TDM is suitable for most performance testing projects. The strategy needs to be well designed to make sure that the challenges listed above are addressed appropriately and a satisfactory return on investment (ROI) is achieved soon.
External Document © 2018 Infosys Limited
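To make the workload-distribution point concrete, the sketch below generates a bulk batch of records whose mix of transaction types follows a workload model. The percentages, record shape, and volumes are illustrative assumptions; the point is that regeneration is cheap enough to rerun before every execution cycle.

```python
import random

# Hypothetical workload model: share of each transaction type in the performance run
WORKLOAD_MODEL = {"balance_enquiry": 0.6, "funds_transfer": 0.3, "statement_download": 0.1}


def generate_batch(total: int):
    """Generate `total` records whose type mix matches the workload model."""
    batch = []
    types = list(WORKLOAD_MODEL)
    weights = list(WORKLOAD_MODEL.values())
    for i in range(total):
        txn_type = random.choices(types, weights=weights, k=1)[0]
        batch.append({
            "txn_id": f"TXN{i:08d}",
            "type": txn_type,
            "account": f"ACC{random.randint(1, 50_000):06d}",
            "amount": round(random.uniform(10, 5_000), 2) if txn_type == "funds_transfer" else None,
        })
    return batch


if __name__ == "__main__":
    # Performance runs burn data quickly, so regeneration must be cheap and repeatable.
    data = generate_batch(100_000)
    print(len(data), data[0])
```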
---
Page: 8 / 12
---
TDM in Automation Testing
Some of the biggest challenges in automation testing are associated with creating test data. These include the inability to create data quickly from the front-end, fast burn-up of large volumes of data during test runs, limited access to dynamic data and partial availability of the environment. Implementing a well-designed TDM strategy can support multiple iterations of dynamic data in short intervals of time by synthetically creating or extracting data using TDM tools.
Test data for automation testing is driven by factors such as:
Automation of test data creation: The data required for automation testing is usually created using one of the automated processes, either from the user interface (UI) front-end or via create or edit data operations in the database. These methods are time-consuming and may require the automation team to acquire application as well as domain knowledge.
Fast consumption of test data: Similar to performance testing, automation testing also consumes data at a very quick pace. Hence the data provisioning strategy must accommodate fast data creation with a relatively short life cycle.
High coverage: As with functional testing, automation testing also requires test data for each of the automated scenarios. The data requirement may be restricted to the regression test pack and yet covers a large spectrum of data.
Feasibility Check: TDM process implementation is certainly recommended for an automation project. The automation team must provide the data requirement in clear terms, and the TDM team can ensure provisioning of this data before the test run begins. Automation tools have the ability to create, mock and edit data, but the TDM team's expertise and tools can add significant value to the proposition.
External Document © 2018 Infosys Limited
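In an automation suite, the provision-before-the-run idea often shows up as a test fixture that reserves data before the test body executes. The sketch below uses pytest with an in-memory stand-in for a TDM service; both the service interface and the data shapes are assumptions for illustration, and a real suite would call the actual provisioning tool instead.

```python
import itertools

import pytest


class FakeTdmService:
    """Stand-in for a TDM provisioning service; a real suite would call the actual service or tool."""
    _seq = itertools.count(1)

    def reserve_account(self, account_type: str) -> dict:
        return {"account_id": f"{account_type}-{next(self._seq):05d}", "status": "ACTIVE"}


@pytest.fixture(scope="session")
def tdm():
    return FakeTdmService()


@pytest.fixture
def nam_account(tdm):
    # Fresh data per test, reserved before the test body runs and consumed during it.
    return tdm.reserve_account("NAM")


def test_account_is_active(nam_account):
    assert nam_account["status"] == "ACTIVE"


def test_each_test_gets_its_own_data(tdm):
    first, second = tdm.reserve_account("ASIA"), tdm.reserve_account("ASIA")
    assert first["account_id"] != second["account_id"]
```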
---
Page: 9 / 12
---
Conclusion
In functional testing, increasing data coverage plays a significant role in providing the TDM value-add. The sheer volume of test data that is repeatedly used in the regression suites makes it an important focal area from the ROI viewpoint. The right TDM tools can help provision a spectrum of data and ensure continuous ROI in each cycle.
TDM implementation in performance testing projects can deliver quick benefits, and the improvement is readily visible as large volumes of similar data can be created swiftly and efficiently. Automation testing can also benefit from TDM implementation. Tools such as Quick Test Professional (QTP) can create data via the user interface but need significant functional knowledge and are slow in nature. TDM solutions can save time and cost by keeping the data ready. Robust data creation methods and tools can be used to achieve these goals.
Based on our learning, interactions and experience gleaned from the TDM world, we can confidently affirm that functional, automation and performance testing can leverage TDM implementation to overcome their respective challenges and achieve optimization. Each type of testing presents its own unique challenges and benefits, but there is a common theme – TDM is a major enhancement and addition to the tools and techniques available to the testing team. This practice can help realize gains at the bottom line with cost reductions, improved turnaround time and fewer data-related defects in test and in production.
External Document © 2018 Infosys Limited
---
Page: 10 / 12
---
We have identified the following set of metrics to be captured before and after implementing TDM practices to measure the ROI and benefits:
• Test data coverage
• Percentage of test data related defects
• Test data management effort (percentage and time in hours)
We wish you success in your TDM implementation.
External Document © 2018 Infosys Limited
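A small sketch of how the before-and-after comparison might be computed from raw project counts; the field names and the sample numbers are illustrative assumptions, not measured results.

```python
def tdm_metrics(executed_cases, cases_with_data, defects_total, defects_test_data,
                effort_total_hrs, effort_tdm_hrs):
    """Compute the three ROI metrics as percentages from raw counts."""
    return {
        "test_data_coverage_pct": 100.0 * cases_with_data / executed_cases,
        "test_data_defects_pct": 100.0 * defects_test_data / defects_total,
        "tdm_effort_pct": 100.0 * effort_tdm_hrs / effort_total_hrs,
    }


if __name__ == "__main__":
    before = tdm_metrics(400, 300, 120, 30, 2000, 380)   # illustrative pre-TDM numbers
    after = tdm_metrics(400, 384, 95, 8, 2000, 160)      # illustrative post-TDM numbers
    for key in before:
        print(f"{key}: {before[key]:.1f}% -> {after[key]:.1f}%")
```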
---
Page: 11 / 12
---
Appendix I
An illustration of ‘Sample Test Data Requirement Gathering’. The last three columns depicted in the table below should be a part of the test case documentation and must be updated during test case authoring.
1. Test Case 01 – NAM Bank Account Number – Reusable: Y – Remarks: with balance > $1,000
2. Test Case 02 – Client ID – Reusable: N – Remarks: -
3. Test Case 03 – NAM Bank Account Number – Reusable: Y – Remarks: with balance > $100,000
4. Test Case 04 – ASIA Bank Account Number – Reusable: Y – Remarks: Account open date > 01 Jan 2013
5. Test Case 05 – ASIA Bank Account Number – Reusable: Y – Remarks: -
Data requirement summary for the above illustration:
• Two ‘NAM Bank Account Number’ with balance > $100,000
• Two ‘ASIA Bank Account Number’
• One client ID
Reference
http://www.gxsblogs.com/wp-content/blogs.dir/1/files/SEPA-Penalties-Table1.png?8f0a21
External Document © 2018 Infosys Limited
---
Page: 12 / 12
---
© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
***
|
# Infosys Whitepaper
Title: Test Factory Setup for SAP Applications
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 8
---
PERSPECTIVE TEST FACTORY SETUP FOR SAP APPLICATIONS - Barry Cooper-Brown, Diageo; Chandur Ludhani and Sailesh Chandrasekaran, Infosys Abstract The emergence of IT-enabled business growth is compelling organisations to give testing, and testing-related strategies, much-needed importance. It is by now well established that the cost of a single development defect can snowball to many times the original cost if it is not discovered until the QA phase of testing, and higher still if it eventually shows up in the production environment.
When it comes to SAP, however, there are unique testing challenges to deal with. The challenges of SAP testing present both trade-offs that need to be considered and choices that need to be made about the kind of testing your QA organisation needs. The following point of view is based on an engagement between a major drinks manufacturer and Infosys, and describes a successful approach to setting up a Test Factory to manage testing of SAP applications.
---
Page: 2 / 8
---
Challenges in managing changes in SAP
Today's unrelenting economic and financial pressures, coupled with far too many internal and external factors (beyond the control of any organisation's circle of influence), are compelling organisations to change and tailor their business processes accordingly. These changes, more often than not, necessitate a change in the supporting business applications themselves (for example, SAP, Oracle, etc.). Acquisition of new business, opening of new facilities, introduction of a new business line, consolidation of service lines, and regulatory changes such as changes in tax laws or reporting needs all have to be accommodated, and incorporating these changes into SAP is fairly complex. Any update, whether major, moderate or minor, needs to be completely analysed from several dimensions, keeping in mind that these are all large business applications. The result is that organisations end up managing projects involving a high number of interrelated or moving parts. Further, SAP applications span geographies, often need heavy customisations to suit local requirements (like government, language, etc.) and have multiple vendors running the applications.
Every investment that an organisation makes in its IT systems is channelled towards ensuring a smooth-running application, which indirectly maps to reaping benefits for all its stakeholders, namely the customers, employees, shareholders, etc. Often these investments do not bear fruit and are in turn viewed as a cost. For example, a delay in readying the application for regulatory changes could lead to serious consequences for the organisation in the region within which the changes were mandated. Another example is encountering incidents in the live environment or facing downtime with particular applications, leading to severe business disruption. Organisations are constantly on the lookout for innovative ways to help them adapt quickly to these changes in SAP business processes. Their ability to do so also facilitates:
• Improvement of delivery confidence with every change deployed
• Reduction in the cost of every change implemented
• Ability to contract the overall lead time required for such activities, thereby allowing more frequent releases
An organisation's inability to do so leads to a host of challenges for the employees who interface with SAP for their day-to-day activities:
• A long time for deploying SAP changes, which means more business application downtime
• Leakage of defects to production, hampering day-to-day operations
• A high number of defects detected during User Acceptance Testing (UAT), resulting in a delay of the final application go-live
The question, therefore, is whether there is a single, efficient solution available to organisations for managing their SAP-related QA/testing operations inexpensively and efficiently. The concept of the Test Factory, offered under the business tag of NEM (New Engagement Models), is gaining traction across the globe. The Test Factory, also known as Managed Test Service or Testing Centre of Excellence, acts as an independent function in the SDLC, whilst supplanting the existing set of processes with a more agile, efficient and repeatable set of processes.
In the next section, we explore the business needs of SAP-enabled organisations to deploy a Test Factory model for their QA organisation.
External Document © 2018 Infosys Limited
---
Page: 3 / 8
---
Understanding the Business Need
Listed below are some of the common pitfalls encountered by SAP-enabled organisations in running their QA functions:
1. Decentralised Testing
This model of testing is usually prevalent in organisations which have undergone mergers or made acquisitions. Testing in such organisations is carried out using a decentralised model where no common testing processes and methodologies exist. Each line of business (LOB) has its own processes, and differences exist even within units of the same LOB. In most cases, testing is managed by the development team itself, being aligned directly to each project.
This model does not provide clear delineation between the build and test functions. If there are any inefficiencies or delays in build, the same is compensated for in the testing phase by either compressing the testing timelines or moving forward with inadequate coverage of business scenarios. Individual teams often adopt the approach of testing with a self-defined set of testing processes and a scope limited to their project. The result is a lack of co-ordination when it comes to delivering together with other ongoing projects. This not only results in severe delays in the programme go-live, but also leads to a severe compromise of the quality and quantity of testing that is necessary.
The most common limitations of this approach are:
• Lack of defined uniform testing processes and global governance based on metrics
• Duplication of test effort
• Non-conformance to testing timelines
• Inconsistent application test quality across teams
• Lack of usage of appropriate testing tools
2. Governance Challenges
Organisations running SAP are continuously engaged in taking up large change programmes or rolling out applications to newer regions. A large change programme or rollout requires the QA function to constantly generate status reports and deal with various risks and issues. The absence of defined processes, metrics to track progress and risk management, and the inability to consolidate reports frequently, lead to a governance challenge with respect to managing decentralised QA teams. Another common problem encountered with decentralised QA teams is the large amount of time consumed in assimilating and consolidating information for status reporting from various regional teams and in resolving risks and issues. Further, under the decentralised structure, teams lack adoption of uniform processes and hence there are bound to be differences in the content and structure of status reporting and in the way risks are identified and dealt with.
3. High Cost of Testing / Maintenance
Sound testing processes and deep business knowledge are pre-requisites for testing SAP applications. We have also learnt earlier that a large amount of effort is spent in running the QA function for an SAP-enabled organisation. The majority of organisations dedicate a large number of resources to testing of SAP releases. In addition, SAP testing involves testing some portions of business functionality repeatedly, and decentralised teams often lack the benefits associated with reusability. This can be cited as an additional reason for the inflated costs of SAP testing in a decentralised model. Project-based QA teams primarily look at testing from a very narrow project point of view and often miss the holistic implication of the changes from a complete business landscape perspective. This leads to high effort from the business users during UAT and a large number of defects being identified in the later stages of testing. In addition, non-functional testing aspects such as performance, security, etc., are also overlooked in the initial phases of testing, leading to a high amount of re-work and maintenance costs downstream.
4. Lower Delivery Confidence and Higher Time-to-Market
There is very low confidence in the delivery of a release, considering that testing in the earlier phases is not really focussed on business knowledge. This results in a high percentage of defects being identified in later stages of the SDLC. In the absence of benchmark metrics, there is no opportunity to measure test execution productivity, often leading to increased durations of testing cycles.
While these may sound like age-old problems and issues, they are indeed common across organisations. These pitfalls are the reasons why organisations find themselves grappling with an expensive and non-yielding QA function.
External Document © 2018 Infosys Limited
---
Page: 4 / 8
---
So, What is a Test Factory?
Having looked at what is ailing SAP-based organisations, setting up a Test Factory can be the most definitive solution available in the market currently. A Test Factory is a centralised testing model that brings together people and infrastructure into a shared services function, adopting standardised processes, effective usage of tools, high reuse and optimised resource utilisation in order to generate the required benefits for the organisation.
Let us broadly explore the solution that a Test Factory can provide:
a. The Test Factory acts as an independent function in the SDLC and resolves the very first ailment by having a clear delineation between the Build and Test functions of an organisation.
b. The Test Factory is set up as a centralised QA function which brings in uniform process adoption and an enterprise-wide QA approach with easier governance. The Test Factory is also set up with the attributes of a more agile, efficient and repeatable set of processes. It can also be operated in the new engagement model (NEM) format, which helps in measuring the business value linked to the services offered; for example, pricing is based on the work performed instead of traditional Time & Material models.
Implementing a Test Factory
The entire process of implementing a Test Factory involves three major phases: Solution Definition, Solution Design and Solution Implementation.
Solution Definition Phase "Building the Case for Organisational Buy-in" One of the most essential starting points of the entire Test Factory setup involves assessing the existing organisational test processes, determining the maturity level of the processes and deriving the gaps observed. The solution definition phase involves arranging one-on-one or group interview sessions with various stakeholders in the existing ecosystem and understanding the various pros and cons of the existing processes. Alternatively or additionally, a questionnaire pertaining to the respective areas of the stakeholders can be used to help document the same. For assessing maturity, organisations are spoilt for choice with widely known test maturity models such as TMMi, TMAP, TPI or the ITMM (Infosys Test Maturity Model).
The ITMM is a well-blended model which builds upon the standard test maturity models and adds further dimensions, evolving constantly with the changing business context. The assessment results show the current level of maturity of the organisation's processes. It is of utmost importance at this stage to bring together the leadership team of the organisation and showcase the various process improvements and benefits of moving the organisation to higher levels of maturity. On the basis of the agreed level of maturity to be targeted, the ITMM model allows for a continuous improvement process to be imbibed into the organisation. Once an agreement is reached, a roadmap is devised on how the solution is to be designed and implemented.
Solution Design Phase "Structuring A Winning Solution" The Solution design phase is a core component in the Test Factory setup process and involves designing processes on three dimensions of the ITMM model – • Test Engineering Dimension, covering the focus areas of Requirements Gathering, Test Strategy, Testing tools, Test Data and Environment • Test Management dimension, covering the focus areas of Estimation, Test Planning, Communications, Defect Management and Knowledge Management • Test Governance dimension, covering the focus areas of Test Methodology, Test policy, Organisational structure and Test Metrics The most important key areas to focus on are designing the processes for - • Test Methodology Defining various types of testing to External Document © 2018 Infosys Limited
---
Page: 5 / 8
---
be performed, estimation techniques, entry and exit criteria for each testing phase, testing environment set-up, operating model and various input and output artifacts • Test Governance Defining the governance structure and chalking out clear test roles and responsibilities • Test Factory Structure Defining the various communication paths within and outside the Test Factory • Metrics, KPIs and SLAs Defining the various testing related metrics and ensuring agreement on the various SLAs and KPIs for each role and process in the Test Factory • Knowledge Management Framework Defining a centralised service to allow effortless and effective sharing of knowledge between teams, across knowledge assets In addition, teams may create test data management processes, a catalogue of the testing services, a non-functional test services methodology, guidelines for various testing tools, and testing policies. The solution design phase is a highly collaborative process in which the design and delivery teams play equal roles. It involves both fine-tuning some of the current processes and completely revamping the rest. The implementation of the solution in the right manner, and with the right amount of calibration, can bring bountiful benefits to the organisation in having a sound testing process. Solution Implementation Phase "Walk the Talk" Depending on the level of maturity that organisations choose to attain, this phase needs a good deal of time to be invested. The time taken to nurture the processes and imbibe them could range anywhere from six months up to two years, depending on the organisational buy-in and focus in implementing the same. With a good amount of time on hand, organisations also have the option of implementing the processes in either a staggered manner or with a big bang approach. In general, it is advisable that a staggered approach be chosen. In a staggered approach, the implementation team collaborates with the champion or manager of the Test Factory, prioritising the areas lacking basic maturity and identifying a pilot release in which the updated processes can be put to the test. This allows the implementation team to lay out checkpoints where any anomalies can be corrected. At the end of the pilot implementation, a survey can be conducted with the stakeholders to determine the successes and failures of the implementation. The lessons learnt at the pilot implementation stage are a crucial input to the next phase of implementation. It is important to look at some of the frequently encountered challenges associated with the solution implementation phase - • Aversion to change This often is a sticky issue, with teams unwilling to adapt to new processes as it involves moving away from their comfort zone • Poorly adapted processes and communications This is an indication that the impacted teams are not aware of the new processes and are not well trained • Handing over testing to the Test Factory Moving away from the traditional approach of testing by business users, driven by the testing team's initially limited business knowledge, is one of the most challenging change management aspects to deal with It is therefore the task of both the implementation team and the leadership team of the organisation to address these challenges and manage the change.
The implementation team's task is to devise a thorough training plan for the various teams involved and to produce user manuals and guidelines that teams can refer to for the new processes. The leadership team's task, on the other hand, is to put together a strong communication plan that lists the benefits of adopting the changes, both for the impacted teams and for the business. In certain situations, grievance redressal efforts and communication forums are a good way to engage with the teams.
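To make the “Metrics, KPIs and SLAs” focus area from the design phase concrete, the sketch below shows one way such definitions could be captured in machine-readable form and evaluated per release for governance reporting. It is only a minimal illustration: the KPI names, targets and thresholds are assumptions, not values prescribed by the ITMM model or the Test Factory framework.

```python
# Minimal sketch: capturing test KPIs/SLAs as data so governance reports
# can be generated per release. Metric names and targets are illustrative
# assumptions, not values mandated by the ITMM model.
SLAS = {
    "defect_leakage_to_uat_pct": {"target": 2.0, "direction": "max"},
    "test_case_automation_pct":  {"target": 60.0, "direction": "min"},
    "on_time_delivery_pct":      {"target": 95.0, "direction": "min"},
}

def evaluate(release_metrics: dict) -> dict:
    """Return a pass/fail verdict per KPI for one release."""
    verdicts = {}
    for kpi, rule in SLAS.items():
        actual = release_metrics.get(kpi)
        if actual is None:
            verdicts[kpi] = "not reported"
        elif rule["direction"] == "max":
            verdicts[kpi] = "pass" if actual <= rule["target"] else "fail"
        else:
            verdicts[kpi] = "pass" if actual >= rule["target"] else "fail"
    return verdicts

# Example usage with hypothetical figures for a pilot release.
print(evaluate({"defect_leakage_to_uat_pct": 1.5,
                "test_case_automation_pct": 48.0,
                "on_time_delivery_pct": 97.0}))
```

Keeping the definitions as data makes it straightforward to version them alongside the test assets and to report SLA compliance release over release.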
---
Page: 6 / 8
---
Figure: Test Maturity Model
---
Page: 7 / 8
---
Benefits of a Test Factory We have explored how the Test Factory is set up on a sound process framework; the real icing on the cake is seeing the benefits that accrue once it is in place. A Test Factory brings both qualitative and quantitative benefits. Some of the most significant qualitative benefits include: • High levels of repeatability, predictability and test coverage • Business-requirements-driven end-to-end testing, which helps identify critical defects and requirement gaps that would typically be found only during the UAT phase • Definition of key metrics, which assists tracking and enables effective governance at every stage of the programme • A reusable set of artifacts and test designs, helping compress test planning and design timelines • A shorter UAT phase, saving on delivery timelines • A better understanding of core business processes and a core regression library built from it • Effective usage of tools, enabling complete traceability from requirements to test cases and defects during the various phases of the project • Accelerated test automation, helping reduce cycle time and execution costs • A platform for more frequent releases each year, so the business and end users no longer have to wait long for important business/IT changes, as they did under the earlier lower-frequency release cadence • A one-stop shop for various testing needs – performance testing, security testing, UAT support, etc.
Quantitative benefits include: • An estimated cost saving of 50% owing to reuse of the test design artifacts • An expected 40% reduction in execution cost with automation of test build and execution • Near-zero defect leakage from system integration testing to UAT and go-live, ensuring faster time to market and less production downtime Conclusion Setting up a Test Factory gives organisations a unique insight into how they can tailor and reinvent the traditional onsite-offshore model to ensure effective and comprehensive application testing. In addition to the cost benefits, by adopting the Test Factory approach organisations emerge with an integrated, comprehensive, SLA-driven QA organisation with tightly knit processes. One of the most promising advantages of a Test Factory setup is the ability to add further testing-related services without having to significantly rework the underlying process framework. This helps the organisation quickly onboard and implement the testing skills and processes required to support new business initiatives in upcoming areas like cloud, mobility and social commerce. The Test Factory model is a welcome addition to the portfolio of offerings from services firms and will be a force to reckon with in the foreseeable future.
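As a rough illustration of how these quantitative benefits compound, the short worked example below applies the quoted percentages to a hypothetical cost baseline. The baseline figures are assumptions chosen purely for illustration; they are not drawn from any engagement data.

```python
# Worked example (hypothetical baseline figures) applying the savings
# percentages quoted above: 50% on test design through reuse and 40% on
# execution through automation.
baseline_design_cost = 200_000     # assumed annual test design spend
baseline_execution_cost = 300_000  # assumed annual test execution spend

design_savings = baseline_design_cost * 0.50
execution_savings = baseline_execution_cost * 0.40
total_savings = design_savings + execution_savings
baseline_total = baseline_design_cost + baseline_execution_cost

print(f"Design savings:    {design_savings:,.0f}")
print(f"Execution savings: {execution_savings:,.0f}")
print(f"Total savings:     {total_savings:,.0f} ({total_savings / baseline_total:.0%} of baseline)")
```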
---
Page: 8 / 8
---
About the Authors
Barry Cooper-Brown Test Manager, Diageo Barry is a Test Manager with Diageo plc and is based in London. He has over 20 years of experience working with SAP across many disciplines, from Basis to programme delivery. He has worked for over 14 years within the FMCG sector and has specialised in the delivery of SAP projects and programmes. Currently, Barry is the Test Factory Manager for Diageo, responsible for implementing and running a global test organisation across multiple regions and SAP solutions. Chandur Ludhani Principal Consultant, Infosys Chandur is a principal consultant with the Retail, CPG and Life Sciences unit of Infosys and has over 15 years of experience. He has experience in ERP product development, testing, implementation and support. As part of testing engagements, he has helped clients set up processes for various types of testing services – manual testing, automation and more – leading to Testing Centers of Excellence. Sailesh Chandrasekaran Senior Consultant, Infosys Sailesh is a senior consultant with over 6 years of experience working for clients in the retail, banking and financial services industries. He helps clients assess the maturity of their test organizations, improve their testing processes and transform them into Centers of Excellence. © 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
***
# Infosys Whitepaper
Title: 5G testing holds the key to empower healthcare industry
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 4
---
WHITE PAPER 5G TESTING HOLDS THE KEY TO EMPOWER HEALTHCARE INDUSTRY Abstract COVID-19 has unleashed uncertainty on organizations, limiting their visibility and ability to strategize for the future. Technology, though, continues to evolve and has played a major role in helping deal with this crisis. When combined with AI and IoT, 5G becomes a potent technology across industries and domains, bringing unprecedented empowerment and superior customer service. This paper explores the impact of 5G on the healthcare industry. It also examines why 5G testing is important when supporting healthcare services and functions.
---
Page: 2 / 4
---
About 5G Mobile communication has evolved rapidly with changing technologies. 5G represents the latest generation of cellular mobile communication, characterized by ultra-reliable low-latency communication (URLLC), enhanced mobile broadband (eMBB) and massive machine-type communication (mMTC). These capabilities enable data to be transferred at very high speeds with extremely low latency. As the Internet of Things ecosystem widens, 5G can support the capture of enormous volumes of data as well as provide the computing power to process it across billions of devices. This will lead to superior customer experience with unprecedented insights and capabilities, resulting in a new digital world. {{ Fig 1: Defining characteristics of 5G – eMBB (enhanced mobile broadband), mMTC (massive machine-type communication), URLLC (ultra-reliable low-latency communication); 5G drivers: 1 million devices per km², 10 Gbps peak data rate, 1 ms latency, 10x battery life for low-power devices, mobility up to 500 km/h, 99.999% reliability and availability, one network – multiple industries }} Testing 5G in healthcare 5G will become the backbone of telemedicine or remote healthcare in the future. Remote healthcare involves frequent but distant monitoring of patients using a range of devices that capture vital parameters. This data must be continuously transmitted to doctors for real-time monitoring and assistance. It also includes video consultations for diagnosis and precision medicine prescription, especially for people in care homes. On a larger scale, 5G can improve overall healthcare capabilities through innovations like robotic-assisted laser surgery, where doctors can use machines to perform complex procedures with greater precision and flexibility. Remote healthcare requires devices as well as apps that enable real-time patient monitoring. AR/VR can provide an immersive user experience, while artificial intelligence and machine learning (AI/ML) provide descriptive, prescriptive and, more importantly, predictive diagnostics. Each of these technologies is interconnected, and they often converge to create use cases that provide next-gen healthcare services. To ensure their effectiveness, it is critical that these technologies are tested and certified. Thus, 5G testing is important to ensure that the network can support them. Let us examine some of the use cases of 5G testing in the healthcare industry. Use case 1: Healthcare devices Healthcare devices include wearables used by patients to monitor vital parameters like heart rate, speech, body mass index, facial expressions, and more. Real-time video consultations involve using video devices for consultations, remote-assisted surgeries and training. Some of the key considerations for testing these two types of devices are: • High reliability to ensure uninterrupted service • Minimal to zero latency for critical medical procedures like remote-assisted surgeries
---
Page: 3 / 4
---
• Ensuring that sensors monitoring vital parameters send real-time alerts to the patient's/doctor's handheld devices • Interoperability testing for the many devices from different vendors • Compatibility testing to ensure that the devices can integrate with applications built to support real-time service, high-speed performance and a superior user experience Use case 2: Healthcare apps Medical apps can help deliver a range of healthcare services covering: • Mental health – These aid in tracking psychological/behavioral patterns, substance addiction and emotional well-being • Patient medication – These apps provide medication reminders, maintain medication history, document online prescriptions, and more • Telemedicine – These apps aid diagnostics, real-time consultations, and monitoring of patient progress, to name a few • Wellness – These apps help maintain fitness and exercise regimes, diet prescriptions, monitoring of food intake, and meditation sessions Some of the key focus areas for testing healthcare apps are: • User interface and experience (UI/UX) testing to ensure enhanced customer experience • Non-functional requirements (NFR) testing for the performance and security of apps that carry real-time patient data, including large imaging files/videos, on a 5G network • Crowd testing of apps for varied user experience and localization • Device compatibility testing to support the vast number of devices that will run on 5G Use case 3: AR/VR AR/VR aims to create real-time immersive experiences that can empower telemedicine and remote patient monitoring. It requires very high data rates and low latencies to deliver healthcare services covering: • Simulation of different conditions using sound, vision or 3D image rendering that can be transmitted from connected ambulances to operating rooms for advanced medical care • Real-time training for medical students • Treatment of patients with various phobias • Early detection of diseases • Various types of therapies to support physical and mental wellbeing Some of the key focus areas for testing include: • Checking that the networks can support a real-time immersive experience through: • High-speed data transfer rates • Ultra-low latency with no lag in the experience • High bandwidth • Checking that a range of hardware devices will work with a user's smartphone to create a good mobile VR experience Use case 4: AI/ML It is evident that, in the near future, AI/ML will play a significant role across healthcare functions and services by helping diagnose illnesses earlier and prescribing the right treatment to patients. 5G will be critical in enabling healthcare functions that involve analyzing massive volumes of data.
These healthcare functions include: • AI-based imaging diagnostics that provide doctors with insights about diseases and their severity • ML-enabled digital biomarkers that enable early and accurate detection of Alzheimer's and dementia • Assisting in clinical trials and research by observing patient responses to new drugs and their behavior patterns • Collecting large volumes of critical patient data from healthcare apps and devices to predict potential diseases through ML algorithms Some of the key focus areas for testing include: • Testing for cognitive features like speech recognition, image recognition, OCR, etc. • Testing robotic process automation (RPA) implemented for common functions like recurring medication prescriptions, appointments and periodic medical reports • Big data testing for both structured and unstructured data – as large volumes of data are transmitted and received from every device, app and piece of equipment, the data will need to be ingested and stored to derive insights that aid next-gen predictive and preventive healthcare Testing on cloud In addition to the above test focus areas, cloud testing will be a common test function for the entire healthcare ecosystem. Cloud can support the core network through network functions virtualization (NFV). It can also enable software-defined networking (SDN), a must-have capability for 5G. Moreover, cloud will host the various healthcare devices and equipment in the healthcare ecosystem. Thus, cloud testing will be critical to ensure smooth performance, covering functional, non-functional and network testing. These use cases provide an overview of the impact of 5G in healthcare with a focus on hospitalization, preventive healthcare, device monitoring, and patient wellbeing. Other areas, such as pharmaceuticals, insurance and compliance, will also leverage 5G to deliver their services, creating a connected healthcare ecosystem.
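To make the device-level test considerations above more tangible, here is a minimal, self-contained sketch of a latency/reliability acceptance check for a remote-monitoring alert path. The latencies are simulated so the example runs anywhere; in a real 5G test bed they would be measured end to end, from sensor to the doctor's device. The 50 ms budget and 99.9% reliability target are illustrative assumptions, not figures taken from a 3GPP specification or from this paper.

```python
# Minimal sketch of a latency/reliability acceptance check for a
# remote-monitoring alert path. Latencies are *simulated* so the example
# is self-contained; in a real 5G test bed they would be measured end to
# end. The 50 ms budget and 99.9% target are illustrative assumptions.
import random

LATENCY_BUDGET_MS = 50.0
RELIABILITY_TARGET = 0.999

def simulated_alert_latency_ms() -> float:
    """Stand-in for one measured alert round trip."""
    base = random.gauss(20.0, 5.0)      # typical network + processing time
    spike = random.random() < 0.0005    # rare congestion event
    return max(base, 1.0) + (200.0 if spike else 0.0)

samples = [simulated_alert_latency_ms() for _ in range(100_000)]
within_budget = sum(1 for s in samples if s <= LATENCY_BUDGET_MS)
reliability = within_budget / len(samples)

print(f"observed reliability at {LATENCY_BUDGET_MS} ms budget: {reliability:.4%}")
assert reliability >= RELIABILITY_TARGET, "alert path misses the reliability target"
```

The same pattern extends naturally to app-level NFR testing, where the measurements would come from the application's real-time data path rather than a simulator.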
---
Page: 4 / 4
---
© 2020 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may
be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected Conclusion 5G is emerging as the backbone of future healthcare, catering to a wide range of healthcare services from ambulances to operating theatres, apps to diagnostic equipment, physical illnesses to mental wellbeing, and more. Technologies like cloud, AR/VR and AI/ML will play a key role in driving a technological revolution within healthcare. As technologies and equipment for monitoring and diagnostics become more sophisticated, 5G will act as the high-speed expressway, allowing devices to exchange data at much faster speeds. 5G will support a variety of remote healthcare use cases such as early disease detection, at-home patient monitoring, precision surgeries, and distant medical training, to name a few. Thus, 5G testing will be crucial to ensure that it unlocks the potential of modern technologies in healthcare. About the Author
Sumanth Dakshinamurthy Principal Consultant, Infosys Validation Solutions
***
# Infosys Whitepaper
Title: Testing IoT Applications - A Perspective
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 4
---
VIEW POINT TESTING IOT APPLICATIONS - A PERSPECTIVE - Manjunatha Gurulingaiah Kukkuru, Principal Research Analyst
---
Page: 2 / 4
---
Introduction The Internet of Things (IoT) is a network of physical objects (devices, vehicles, buildings, and other items) that are embedded with electronics, software, sensors, and network connectivity to collect and exchange data. According to a recent report by McKinsey, around 30 billion objects may be connected through IoT by 2020. Enterprises are adopting IoT solutions for the benefits they offer, such as optimized operations, reduced costs, and improved efficiency. The development and adoption of IoT is being driven by multiple factors, including easily available low-cost sensors, increases in bandwidth and processing power, widespread usage of smartphones, availability of big data analysis tools, and the scalability of Internet Protocol version 6 (IPv6). Organizations are now starting to focus on external benefits such as generating revenue from IoT-enabled products, services, and customer experiences. IoT: A web of interconnected layers The following figure shows a reference architecture for IoT, comprising multiple layers built on top of each other to create industry-specific solutions. The components in each layer include devices, protocols, and modules that need to work in sync to effectively convert data to information, and subsequently to insights. {{ figure: IoT reference architecture – Device layer; Data Ingestion & Transformation layer; Data Processing layer; Applications layer; spanning domains such as smart grid, manufacturing, logistics, agriculture, buildings, healthcare, retail, residences, oil, mining and pharma }} • Device layer: Consists of various devices like sensors, wearables, smart meters, radio frequency identification (RFID) tags, smartphones, drones, etc. With such a diverse set of devices, a large set of standard and custom communication protocols — including ZigBee, BACnet, LLRP, and Modbus — is implemented. • Data ingestion and transformation layer: Data from the device layer is transformed from different protocols into a standard format for further processing by the data processing layer. This data could come from sensors, actuators, wearables, RFID tags, etc., received via TCP/IP socket communication, messaging protocols and queues like MQTT, AMQP, CoAP, DDS, and Kafka, or HTTP/HTTPS over REST APIs (a minimal ingestion sketch follows this list). • Data processing layer: With data available from millions of devices, performing image, preventive, and predictive analytics on batch data provides meaningful insights. Modules like a ‘complex event processor’ enable the analysis of transformed data by performing real-time streaming analytics — such as filtering, correlation, pattern-matching, etc. Additionally, multiple APIs for geo-maps, reporting, ticketing, device provisioning, communication, and various other modules aid in the quick creation of dashboards.
• Applications layer: With the availability of such rich datasets from a multitude of devices, a gamut of applications can be developed for resource efficiency, tracking and tracing, remote monitoring, predictive analytics, process visibility and automation, etc., and these can be applied across different industries and segments.
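As a concrete illustration of the data ingestion and transformation layer described above, the sketch below subscribes to device telemetry over MQTT and normalises each payload into a common schema before handing it to the processing layer. It is a minimal sketch, not part of the Infosys IoT Test Framework: it assumes the open-source paho-mqtt client library (v1.x callback style), and the broker address, topic and field names are hypothetical.

```python
# Minimal sketch of the data ingestion and transformation layer:
# subscribe to device telemetry over MQTT and normalise each reading
# into a standard format for the data processing layer.
# Assumes the paho-mqtt client library (v1.x callback style); the broker
# address, topic pattern and field names are hypothetical.
import json
import paho.mqtt.client as mqtt

BROKER = "iot-broker.example.com"   # hypothetical broker
TOPIC = "devices/+/telemetry"       # hypothetical topic pattern

def normalise(raw: dict) -> dict:
    """Map a device-specific payload onto a common schema."""
    return {
        "device_id": raw.get("id"),
        "metric": raw.get("type", "unknown"),
        "value": float(raw.get("value", 0.0)),
        "timestamp": raw.get("ts"),
    }

def on_message(client, userdata, msg):
    reading = normalise(json.loads(msg.payload))
    # In a real pipeline this would be pushed to a queue / stream processor.
    print(f"{msg.topic}: {reading}")

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=60)
client.subscribe(TOPIC)
client.loop_forever()
```

The same adapter pattern would be repeated for each supported protocol (AMQP, CoAP, Kafka, REST), which is one reason protocol and device interoperability testing matters for IoT systems.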
---
Page: 3 / 4
---
Unique characteristics and requirements of IoT systems Compared to other applications, IoT applications are characterized by several unique factors, such as: • A combination of hardware, sensors, connectors, gateways, and application software in a single system • Real-time stream analytics / complex event processing • Support for data volume, velocity, variety, and veracity • Visualization of large-scale data Challenges that thwart IoT testing These characteristics consequently present a unique set of challenges when it comes to testing IoT applications. The primary challenges include: • Dynamic environment: Unlike application testing performed in a defined environment, IoT has a very dynamic environment with millions of sensors and different devices working in conjunction with intelligent software • Real-time complexity: IoT applications can have multiple real-time scenarios, and their use cases are extremely complex • Scalability of the system: Creating a test environment to assess functionality along with scalability and reliability is challenging Apart from the above, several factors present operational challenges: • Related subsystems and components owned by third-party units • A complex set of use cases from which to create test cases and data • Hardware quality and accuracy • Security and privacy issues • Safety concerns Types of IoT testing The complex architecture of IoT systems and their unique characteristics mandate various types of tests across all system components. To ensure that the scalability, performance, and security of IoT applications are up to the mark, the following types of tests are recommended: Edge testing Several emerging industrial IoT applications require coordinated, real-time analytics at the ’edge’ of a network, using algorithms that demand significant computation and handle high data volume and velocity. However, the networks connecting these edge devices often fail to provide sufficient capability, bandwidth, and reliability. Thus, edge testing is essential for any IoT application. Protocol and device interoperability testing This involves assessing the ability of protocols and devices to interoperate seamlessly across different standards and specifications. Security and privacy testing This includes security aspects like data protection, device identity authentication, encryption/decryption, and trust in cloud computing. Network impact testing This involves measuring the qualitative and quantitative performance of a deployed IoT application in real network conditions, which can include testing IoT devices across combinations of network size, topology, and environmental conditions (a minimal measurement sketch follows this section). Performance and real-time testing This covers complex aspects like timing analysis, load testing, real-time stream analytics, and time-bound outputs under the extremes of data volume, velocity, variety, and veracity. End user application testing This includes the testing of all functional and non-functional use cases of an IoT application, including user experience and usability testing.
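To illustrate the network impact and performance testing types described above, here is a minimal sketch that measures round-trip latency for a batch of simulated telemetry messages sent to a device gateway and reports simple statistics. It is only an assumption-laden sketch, not part of the Infosys IoT Validation solution: the gateway address, port, payload format, and the expectation that the gateway acknowledges each message are all hypothetical.

```python
# Minimal sketch of a network impact / performance check: measure
# round-trip latency for a batch of simulated telemetry messages sent
# to a device gateway, then report simple percentile statistics.
# The gateway address, port and message format are hypothetical.
import socket
import statistics
import time

GATEWAY = ("iot-gateway.example.com", 9000)  # hypothetical endpoint
MESSAGES = 200
PAYLOAD = b'{"device_id": "sensor-001", "value": 23.5}\n'

latencies_ms = []
for _ in range(MESSAGES):
    start = time.perf_counter()
    with socket.create_connection(GATEWAY, timeout=5) as sock:
        sock.sendall(PAYLOAD)
        sock.recv(1024)  # wait for the gateway's acknowledgement (assumed)
    latencies_ms.append((time.perf_counter() - start) * 1000)

latencies_ms.sort()
p95 = latencies_ms[int(0.95 * len(latencies_ms)) - 1]
print(f"mean latency: {statistics.mean(latencies_ms):.1f} ms")
print(f"95th percentile: {p95:.1f} ms")
# A fuller network impact test would repeat this across network sizes,
# topologies and impairment profiles, and assert against agreed thresholds.
```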
---
Page: 4 / 4
---
Infosys IoT Validation solution Infosys has developed a comprehensive quality assurance (QA) strategy to handle the unique requirements and challenges associated with validating IoT applications. The Infosys IoT Validation solution enables testing with a combination of actual devices, tools, and frameworks. In addition, the Infosys IoT Test Framework provides all the capabilities required to perform functional validation, load simulation, and security verification. It can easily integrate with various IoT protocols and platforms, thus providing interoperability. This is just a glimpse of our capabilities, as we have various tools and solutions that can be leveraged to perform end-to-end testing of IoT solutions. To find out more about our IoT services, download the IoT testing flyer here. © 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
***
# Infosys POV
Title: PowerPoint Presentation
Author: Chloe Hibbert
Format: PDF 1.7
---
Page: 1 / 30
---
An Infosys Consulting Perspective Led by Olu Adegoke Consulting@Infosys.com | InfosysConsultingInsights.com The Future of Telco a four-part series
---
Page: 2 / 30
---
THE FUTURE OF TELCO In this series, experts outline the four key trends influencing and driving telecom companies toward the future. Contents Part 1: Beyond connectivity: How communication service providers can monetize emerging B2B growth opportunities By Ravi Jayanthi Ravi Jayanthi explains how telcos can best monetize emerging growth opportunities in the enterprise segment to remain competitive. Part 2: The new telecom operating model: Exponential growth opportunities By Alastair Birt Alastair Birt outlines why telecom companies need to creatively dismantle the old ways and rise from the ashes stronger, wiser, and ready to seize the future within their grasp. Part 3: Telecoms: Why talent is the next frontier for competitive advantage By Mick Burn and Stanislava Gaspar (née Stoyanova) Mick Burn and Stanislava Gaspar explain how radical transformation will affect HR strategy and why betting on people is essential for success. Part 4: Getting value from AI&A in the telecom industry By James Thornhill It’s clear that the digital future will be driven by Artificial Intelligence and Automation (AI&A) – especially in the telecom industry. James Thornhill outlines AI&A opportunities and why CSPs should ensure they have the right strategic initiatives to grow. Contributors: Olu Adegoke and Gaurav Kapoor Editor: Vica Granville
---
Page: 3 / 30
---
---
Page: 4 / 30
---
BEYOND CONNECTIVITY: HOW COMMUNICATION SERVICE PROVIDERS CAN MONETIZE B2B GROWTH OPPORTUNITIES Companies looking to operate normally today face formidable obstacles. High geopolitical risks, economic uncertainty, labor shortages, and the realignment of supply chains have forced industries to adapt and readjust almost immediately post-pandemic. The digital transformation and workforce reinvention this requires are reshaping demand for information and communications technology (ICT) solutions and drawing on the rapid evolution of advanced technologies. According to Ericsson and Arthur D. Little, these new digitalization opportunities give telecom companies a significant opportunity in the B2B market, with a topline growth potential of 35%. But what is needed to grow the B2B telecommunication market, and what hurdles do telcos need to overcome to grasp this potential?
---
Page: 5 / 30
---
Understanding the potential Before outlining how telcos can succeed in the B2B market, it’s important to understand where the opportunity is coming from. Due to turbulent market forces, many organizations are already at various stages of: • Developing new technological solutions • Improving service delivery • Increasing operational efficiency • Reducing cost • Gaining competitive advantage • Meeting rising customer expectations Such digital transformations require ICT solutions that provide integrated connectivity and security for digitally connected devices, data, and applications. In addition, the rapid evolution of cloud, artificial intelligence (AI), machine learning (ML), and automation technologies is bringing incredible value to businesses. Companies can now aim to provide employees and customers with secure, high-speed, reliable, low-latency mobile networks with edge computing capabilities. However, many telecom companies are failing to seize the opportunity before them. They’re losing out to forward-thinking hyperscale cloud providers (HCPs) by relying on outdated systems, processes, and operating models. What hurdles will telcos need to overcome? To ensure future growth, leaders will need to repackage the intrinsic value of the network with innovative ways to bring extrinsic value to customers. As such, organizations must expand the communication service provider (CSP) operating model from network provider to flexible end-to-end business platform provider. This will expand their role in the value chain.
---
Page: 6 / 30
---
But to do that, telecom companies will need to overcome several obstacles: • The lack of industry domain knowledge that’s essential for innovation at the edge and value-based selling • The lack of the relationships necessary to influence enterprises on their digitalization journey; many telcos are therefore missing out on business-level (vs. connectivity-level) conversations that create relevancy • The absence of the skills necessary to implement and adopt use cases for enterprises; adoption is made more complex by the lack of subject matter experts focused on domains and customer ecosystems • Many telcos aren’t replicating use cases across enterprises, and therefore fail to amortize the transformation investment • An uneven distribution of spectrum assets among CSPs, which hinders large-scale deployment of 5G and thereby hampers use case adoption • A failure to embrace open-source technologies, which prevents telcos from accelerating innovation and reducing costs • Competition from hyperscalers, whose ability to spend as much on telecom infrastructure services as Tier-1 CSPs has enabled them to expand their involvement in the telecom industry and value chain, including edge computing and private wireless networks How can telecoms capitalize on the B2B market opportunity and overcome these challenges? To cater to new segments and opportunity areas, telcos must expand their role. Organizations must transform from simply being a network provider to becoming a service enabler and creator. This is a complex undertaking to attempt alone. To be successful, CSPs need to show considerable flexibility in how they deploy innovative business models, including working with HCPs and service integrators (SIs) to secure a stake in the B2B market. Technology works better when it’s built together with partners, especially when dealing with a complex ecosystem across a gamut of customer segments and growth areas with varying business outcomes, capabilities, and systems of differentiation. Establishing a rich partnership ecosystem will help both market leaders and aspirants build a complete portfolio and confidently embark on a journey of scale.
---
Page: 7 / 30
---
Overcoming the various challenges to transform current ways of working isn’t an easy undertaking, but there are four crucial factors that should be considered in your service strategy and go-to-market (GTM) playbook. Go-to-market playbook for telecoms to capitalize on the B2B market opportunity: • B2B2X revenue – Create extrinsic value with an outside-in approach to enterprise customers. • Innovation/GTM – Build a strong partner ecosystem and focus on value-based selling. • Adoption – Simplify use cases for customer adoption and integrate their ecosystem end to end (E2E) within your business. • Scale – Industrialize network engineering and operations – essential to scale at large.
---
Page: 8 / 30
---
Key takeaways Telcos can no longer justify network investments for cost efficiency and competitive parity alone. They must focus on creating a path for growth, leveraging technology to tackle problems in ways that drive innovation and differentiation. They need to bring what’s next to life and start thinking outside the box, building a marketplace that will serve growth. That’s the story of scale that telecom companies need to prioritize, in partnership with SIs and HCPs. With boots on the ground, such partners are key to driving growth, both through GTM and through building out intellectual property. This will be a significant undertaking. As such, it must be executed in waves to align with market readiness, and it will require CXOs to commit to innovation, industry collaboration, and long-term investments. With a topline growth potential of 35% through network-enabled digitalization, the potential returns are worth the risk.
---
Page: 9 / 30
---
---
Page: 10 / 30
---
THE NEW TELECOM OPERATING MODEL: EXPONENTIAL GROWTH OPPORTUNITIES Most telecom companies claim to be well on their way to making the shift