# Infosys Whitepaper
Title: Achieve complete automation with artificial intelligence and machine learning
Author: Infosys Limited
Format: PDF 1.6
---
Page: 1 / 8
---
WHITE PAPER

ACHIEVE COMPLETE AUTOMATION WITH ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING

Abstract

As agile models become more prominent in software development, testing teams must shift from slow manual testing to automated validation models that accelerate time to market. Currently, automation test suites remain valid only for a few releases, placing greater pressure on testing teams to revamp test suites so they can keep pace with frequent change requests. To address these challenges, artificial intelligence and machine learning (AI/ML) are emerging as viable alternatives to traditional automation test suites. This paper examines the existing challenges of traditional test automation. It also discusses five use cases and solutions to explain how AI/ML can resolve these challenges while providing complete and intelligent automation with little or no human intervention, enabling testing teams to become truly agile.
---
Page: 2 / 8
---
Introduction

Industry reports reveal that many enterprise initiatives aiming to completely automate quality assurance (QA) fail for various reasons, resulting in low motivation to adopt automation. It is, in fact, the challenges involved in automating QA that have prevented its evolution into a complete automation model. Despite these challenges, automation continues to be a popular initiative in today's digital world. Testing communities agree that a majority of validation processes are repetitive. While traditional automation typically checks whether things work as they are supposed to, new technologies like artificial intelligence and machine learning (AI/ML) can support the evolution of QA into a completely automated model that requires minimal or no human intervention.

Pain points of complete automation

Let us look at the most pertinent problems that lead to low automation adoption:

• Frequent requirement changes – While most applications are fluid and revised constantly, the corresponding automation test suite is not. Keeping up with changing requirements manually impedes complete automation. Moreover, maintaining automated test suites becomes increasingly complicated over time, particularly if there are frequent changes in the application under test (AUT).
• Mere scripting is not automation – Testing teams must evolve beyond traditional test automation that involves frequent manual script-writing.
• Inability to utilize reusable assets – It is possible to identify reusable components only after a few iterations of test release cycles. However, modularizing these so that they can be reused everywhere is a grueling task.
• Talent scarcity – Finding software development engineers in test (SDETs) with the right mix of technical skills and a QA mindset is a significant challenge.

QA teams today are looking for alternatives to the current slow and manual process of creating test scripts using existing methodologies. It is evident that intelligent automation (automation leveraging AI/ML) is the need of the hour.
---
Page: 3 / 8
---
How can enterprises achieve complete automation in testing? Use cases and solutions for intelligent test automation

The key to achieving complete automation lies in using AI/ML as an automation lever instead of relegating it to scripting. Optimizing manual test cases using AI/ML is a good start; helping the application self-learn and identifying test suites with reusable assets are more advanced utilities for automated test suite creation. Leveraging AI in test suite automation falls into two main categories – 'automation test suite creation using various inputs' and 'automation test suite repository maintenance'. The following sections discuss use cases for intelligent automation solutions under these two categories, which address the challenges of end-to-end automation.

A. Automation test suite creation

Use case 1: Optimizing a manual test suite
Testing teams typically have a large set of manual test cases for regression testing, written by many people over a period of time. Consequently, test cases overlap, which increases the burden on automation experts when creating the automation test suite. Moreover, as the test case suite grows larger, it becomes difficult to find unique test cases, leading to increased execution effort and cost.

Solution 1: Use a clustering approach to reduce effort and duplication
A clustering approach can be used to group similar manual test cases. This helps teams easily recognize near-identical test cases, thereby reducing the size of the regression suite without the risk of missed coverage. During automation, only the most optimized test cases are considered, eliminating duplicates and significantly reducing effort. (A small clustering sketch appears at the end of this section.)

Use case 2: Converting manual test cases into automated test scripts
Test cases are recorded or written manually in different formats based on the software test lifecycle (STLC) model, which can be either agile or waterfall. Sometimes testers record audio test cases instead of typing them out. They also use browser-based recorders to capture screen actions while testing. The tester then interprets the test cases, designs an automation framework and writes automation scripts. This entire process consumes an enormous amount of time and effort.

Solution 2: Use natural language processing (NLP)
In the above use case, the execution steps and scenarios are clearly defined. With natural language processing (NLP) and pattern identification, manual test cases can be transformed into ready-to-execute automation scripts and, furthermore, reusable business process components can be easily identified. This occurs in three simple steps:
• Read – Use NLP to convert text into the automation suite.
• Review – Review the automation suite that is generated.
• Reuse – Use partially supervised ML techniques and pattern discovery algorithms to identify and modularize reusable components that can be plugged in anywhere, anytime and for any relevant scenario.

All these steps can be implemented in a tool-agnostic manner until the penultimate stage. Testers can review the steps and add data and verification points that are learnt by the system. Only at the final step are these steps and verification points used to generate automation scripts for the tool chosen by the test engineer (Selenium, UFT or Protractor). A rough sketch of the 'Read' step follows.
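To make the 'Read' step concrete, here is a minimal, rule-based sketch of turning plain-English manual steps into tool-agnostic actions. It uses simple regular-expression patterns rather than a trained NLP model, and the step phrasings and action names are hypothetical; a production solution would combine NLP with learnt patterns as described above.

```python
import re

# Hypothetical mapping from plain-English step patterns to tool-agnostic actions.
# A production system would use trained NLP models; regexes keep the sketch simple.
STEP_PATTERNS = [
    (re.compile(r"click (?:on )?(?:the )?['\"]?(?P<target>[\w ]+?)['\"]? (?:button|link)", re.I), "click"),
    (re.compile(r"enter ['\"]?(?P<value>[\w@. ]+?)['\"]? (?:in|into) (?:the )?['\"]?(?P<target>[\w ]+?)['\"]? (?:field|box)", re.I), "type"),
    (re.compile(r"verify (?:that )?(?:the )?['\"]?(?P<target>[\w ]+?)['\"]? (?:page|message) is displayed", re.I), "assert_visible"),
]

def parse_step(text):
    """Convert one manual test step into a tool-agnostic action dictionary."""
    for pattern, action in STEP_PATTERNS:
        match = pattern.search(text)
        if match:
            return {"action": action, **match.groupdict()}
    return {"action": "manual_review", "raw": text}   # flag steps the parser cannot handle

manual_case = [
    "Enter 'testuser' into the Username field",
    "Click the Login button",
    "Verify that the Dashboard page is displayed",
]

for step in manual_case:
    print(parse_step(step))
```

Steps that no pattern recognizes are flagged for manual review, which mirrors the 'Review' step before tool-specific scripts (Selenium, UFT or Protractor) are generated.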
In this use case, AI/ML along with a tool-agnostic framework helps automatically identify tool-agnostic automation steps and reusable business process components (BPCs). The automation test suite thus created is well-structured, with easy maintainability, reusability and traceability for all components. The solution slashes planning and scripting effort when compared to traditional mechanisms.
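Returning to Solution 1, the sketch below illustrates the clustering idea on a handful of invented test case descriptions. It assumes scikit-learn is available; the vectorization choice and distance threshold are illustrative, not prescriptions from the solution itself.

```python
# A minimal sketch of the clustering idea in Solution 1: group manual test cases
# by textual similarity so near-duplicates can be reviewed and merged.
# Requires scikit-learn >= 1.2 (for the `metric` parameter); threshold is illustrative.
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.text import TfidfVectorizer

test_cases = [
    "Login with valid credentials and verify the dashboard loads",
    "Verify dashboard is shown after logging in with a valid user",
    "Attempt login with an invalid password and check the error message",
    "Export the monthly sales report as PDF",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(test_cases).toarray()

# Cosine-distance clustering; cases closer than the threshold land in one cluster.
clusters = AgglomerativeClustering(
    n_clusters=None, distance_threshold=0.8, metric="cosine", linkage="average"
).fit_predict(vectors)

for label in sorted(set(clusters)):
    members = [tc for tc, c in zip(test_cases, clusters) if c == label]
    print(f"Cluster {label}: {members}")
```

Cases that land in the same cluster are candidates for merging, so only one optimized test case per group needs to be automated.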
---
Page: 4 / 8
---
Use case 3: Achieving intelligent DevOps
Software projects following the DevOps model usually have a continuous integration (CI) pipeline. This is often enabled with auto-deploy options, test data, and API logs with their respective request-response data or application logs from the DevOps environment. While unit tests are available by default, building integration test cases requires additional effort.

Solution 3: Use AI/ML in DevOps automation
By leveraging AI/ML, DevOps systems gain analytics capabilities (becoming 'intelligent DevOps') in addition to transforming manual test cases into automated scripts.

Fig 1: Achieving complete automation on a DevOps model with AI/ML (test automation in DevOps – quality at every step)

As shown in Fig 1, the key steps for automating a DevOps model are:
• Create virtual service tests based on request-response data logs that can auto-update/self-heal based on changes in the AUT.
• Continuously use diagnostic analytics to mine the massive data generated, proactively identify failures in infrastructure or code, and suggest recovery techniques.
• Analyze report files to identify failure causes/sequences or reusable business process components through pattern recognition.
• Enable automated script generation in the tool of choice using a tool-agnostic library.
• Analyze past data and patterns to dynamically decide which tests must be run for different teams and products in subsequent application builds (a simple selection sketch follows this list).
• Correlate production log data with past code change data to determine the risk of failure in different application modules, thereby optimizing the DevOps process.
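As referenced in the list above, the following is a rough sketch of history-based test selection. The failure counts, module mappings and scoring weights are hypothetical; a real intelligent DevOps setup would mine them from CI results and version control logs.

```python
# A rough sketch of history-based test selection for a CI pipeline: rank regression
# tests by recent failure rate and by overlap with the modules changed in the build.
# The data below is hypothetical; a real system would mine it from CI and VCS logs.
from collections import Counter

failure_history = Counter({"test_login": 4, "test_checkout": 1, "test_search": 0, "test_payment": 6})
test_modules = {
    "test_login": {"auth"},
    "test_checkout": {"cart", "payment"},
    "test_search": {"catalog"},
    "test_payment": {"payment"},
}
changed_modules = {"payment"}          # modules touched by the current commit
runs_per_test = 10                     # executions considered in the history window

def risk_score(test):
    failure_rate = failure_history[test] / runs_per_test
    change_overlap = 1.0 if test_modules[test] & changed_modules else 0.0
    return 0.6 * change_overlap + 0.4 * failure_rate   # weights are illustrative

ranked = sorted(test_modules, key=risk_score, reverse=True)
budget = 2                             # run only the top-N tests in a short cycle
print("Selected for this build:", ranked[:budget])
```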
---
Page: 5 / 8
---
B. Automation test suite maintenance

Use case 4: Identifying the most critical paths in an AUT
When faced with short regression cycles or ad hoc testing requests, QA teams are expected to cover only the most important scenarios. They rely on system knowledge or instinct to identify critical scenarios, which is neither logical nor prudent from a test coverage perspective. Thus, the inability to logically and accurately identify important test scenarios from the automation test suite is a major challenge, especially when short timelines are involved.

Solution 4: Use reinforcement learning to determine critical application flows
If the AUT is ready for validation (even when no test cases are available in any form), a tester can use a reinforcement learning-based system to identify the critical paths in the AUT (Fig 2: How reinforcement algorithms in testing identify critical flows in an application).
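A minimal sketch of the reinforcement learning idea is shown below: an agent explores a toy page-transition model of an AUT and learns which navigation path leads to the highest-value outcome. The transition graph, reward and hyperparameters are assumptions for illustration; a real system would build the state space by crawling the application.

```python
# A minimal Q-learning sketch of Solution 4: an agent explores page transitions of a
# hypothetical AUT and learns which navigation paths carry the most business value.
import random

transitions = {                       # state -> {action: next_state}
    "home":     {"search": "results", "login": "account"},
    "results":  {"open_item": "product", "back": "home"},
    "product":  {"add_to_cart": "cart", "back": "results"},
    "cart":     {"checkout": "order_placed", "back": "product"},
    "account":  {"back": "home"},
    "order_placed": {},
}
rewards = {"order_placed": 10.0}      # completing an order is treated as most critical

q = {(s, a): 0.0 for s, acts in transitions.items() for a in acts}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(2000):                 # training episodes
    state = "home"
    while transitions[state]:
        actions = list(transitions[state])
        action = random.choice(actions) if random.random() < epsilon else max(actions, key=lambda a: q[(state, a)])
        nxt = transitions[state][action]
        future = max((q[(nxt, a)] for a in transitions[nxt]), default=0.0)
        q[(state, action)] += alpha * (rewards.get(nxt, 0.0) + gamma * future - q[(state, action)])
        state = nxt

# Greedy rollout reveals the learned critical path through the application.
state, path = "home", ["home"]
while transitions[state]:
    action = max(transitions[state], key=lambda a: q[(state, a)])
    state = transitions[state][action]
    path.append(state)
print(" -> ".join(path))
```

The resulting path is a candidate "must cover" scenario for short regression cycles; scaling the idea up means richer state/action models and rewards derived from business telemetry rather than a hand-set value.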
---
Page: 6 / 8
---
Use case 5: Reworking the test regression suite due to frequent changes in the AUT
If a testing team conducts only selective updates and does not update the automation test suite completely, the whole regression suite becomes unwieldy. An AI-based solution to maintain the automation and regression test suite is useful in the face of ambiguous requirements, frequent changes in AUTs and short testing cycles, all of which leave little scope for timely test suite updates.

Solution 5: Deploy a self-healing solution for easier test suite maintenance
Maintaining the automation test suite to keep up with changing requirements, releases and application modifications requires substantial effort and time. A self-healing/self-adjusting test suite maintenance solution addresses this challenge through a series of steps: identifying changes between releases of an AUT, assessing impact, automatically updating test scripts, and publishing regular reports. As shown in Fig 3, such a solution can identify changes in the AUT for the current release, pinpoint the impacted test scripts and recommend changes to be implemented in the automation test suite (a small locator-matching sketch appears after this section).

The five use cases and solutions discussed above can be readily implemented to immediately enhance an enterprise's test suite automation process, whatever its stage of test maturity.
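The following is a rough sketch of the self-healing step in Solution 5: when a locator used by a test script disappears in a new release, the closest surviving element is suggested as a replacement. The element snapshots and the similarity threshold are hypothetical.

```python
# A rough sketch of the self-healing idea in Solution 5: when a locator used by a
# test script disappears in a new release, find the closest surviving element and
# suggest an updated locator. The element snapshots below are hypothetical.
from difflib import SequenceMatcher

old_release = {"btnLogin": {"tag": "button", "text": "Log in", "class": "btn primary"}}
new_release = {
    "loginSubmit": {"tag": "button", "text": "Log in", "class": "btn btn-primary"},
    "btnHelp":     {"tag": "button", "text": "Help", "class": "btn link"},
}

def similarity(a, b):
    """Average textual similarity across the attributes two elements share."""
    keys = set(a) & set(b)
    return sum(SequenceMatcher(None, a[k], b[k]).ratio() for k in keys) / len(keys)

def heal(broken_id):
    old_attrs = old_release[broken_id]
    best_id, best_score = max(
        ((new_id, similarity(old_attrs, attrs)) for new_id, attrs in new_release.items()),
        key=lambda item: item[1],
    )
    return best_id if best_score > 0.6 else None   # threshold is illustrative

print("btnLogin ->", heal("btnLogin"))             # suggests 'loginSubmit'
```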
---
Page: 7 / 8
---
Conclusion

Complete automation, i.e., an end-to-end test automation solution that requires minimal or no human intervention, is the goal of QA organizations. To achieve this, QA teams should stop viewing automation test suites as static entities and start treating them as dynamic ones with a constant influx of changes, designing solutions accordingly. New technologies like AI/ML can help QA organizations adopt end-to-end automation models. For instance, AI can drive core testing, whether this involves maintaining test scripts, creating automation test suites, optimizing test cases, or converting test cases to automated ones. AI can also help identify components for reusability and self-healing when required, thereby slashing cost, time and effort. As agile and DevOps become a mandate for software development, QA teams must move beyond manual testing and traditional automation strategies towards AI/ML-based testing in order to proactively improve software quality and support self-healing test automation.
---
Page: 8 / 8
---
About the Authors

Suman Boddu, Senior Technical Manager, Infosys
Akanksha Rajendra Singh, Consultant, Infosys

© 2020 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com

Infosys.com | NYSE: INFY | Stay Connected
***
# Infosys Whitepaper
Title: Enabling QA through Anaplan Model testing
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 8
---
WHITE PAPER

ENABLING QA THROUGH ANAPLAN MODEL TESTING

Abstract

Anaplan is a cloud-based platform that can create various business models to meet different organizational planning needs. However, the test strategy for Anaplan varies depending on the application platform, cross-track dependencies and the type of testing. This white paper examines the key best practices that will help organizations benefit from seamless planning through successful Anaplan testing.

- Mangala Jagadish Rao
- Harshada Nayan Tendulkar
---
Page: 2 / 8
---
What is Anaplan?

Anaplan is a cloud-based operational planning and business performance platform that allows organizations to analyze, model, plan, forecast, and report on business performance. Once an enterprise customer uploads data into the Anaplan cloud, business users can instantly organize and analyze disparate sets of enterprise data across different business areas such as finance, human resources, sales, and forecasting. The Anaplan platform provides users with familiar Excel-style functionality that they can use to make data-driven decisions, which would otherwise require a data expert. Anaplan also includes modules for workforce planning, quota planning, commission calculation, project planning, demand planning, budgeting, forecasting, financial consolidation, and profitability modelling.
---
Page: 3 / 8
---
7 best practices for efficient Anaplan testing

1. Understand the domain

As Anaplan is a platform used for enterprise-level sales planning across various business functions, its actual users are organizational-level planners with an in-depth understanding of their domains. Thus, to certify the quality of an Anaplan model, QA personnel must adopt the perspective of end users, who may be heads of sales compensation or sales operations departments, sales managers, sales agents, etc.

2. Track Anaplan data entry points

One of the features that makes Anaplan a popular choice is its wide range of in-built and third-party data integration points, which can be used to easily load disparate data sources into a single model (Fig 2: Representation of Anaplan data integration). For most business users, data resides at many granular levels and cannot be handled reliably with traditional Excel spreadsheets. Anaplan offers a scalable option that replaces Excel spreadsheets with a cloud-based platform to extract, load and transform data at any level of granularity from different complex systems while ensuring data integrity.

It is essential for QA teams to understand the various upstream systems from which data is extracted, transformed and loaded into the Anaplan models. Such data also needs to be checked for integrity.
---
Page: 4 / 8
---
3. Ensure quality test data management

The quality of test data is a deciding factor for testing coverage and completeness. Hence, the right combination of test data for QA will optimize testing effort and cost. Since Anaplan models cater to the financial, sales, marketing, and forecasting domains of an organization, it is essential to verify the accuracy of the underlying data. Failure to ensure this could result in steep losses amounting to millions of dollars for the organization. Thus, it is recommended that QA teams dedicate a complete sprint/cycle to testing the accuracy of data being ingested by the models.

The optimal test data management strategy for testing data in an Anaplan model involves two steps (a reconciliation sketch follows this list):

• Reconciling data between the database and data hub – Data from the source database or data files received from the business teams of upstream production systems should be reconciled with the data hub. This ensures that data from the source is loaded correctly into the hub. For hierarchical data, it is important to verify that data is accurately rolled up and that periodic refresh jobs are validated so that only the latest data is sent to the hub according to the schedule.
• Loading correct data from the hub into the model – Once the correct data is loaded into the hub, testing moves on to validate whether the correct data is loaded from the hub into the actual Anaplan model. This ensures that the right modules are referenced from the hub in order to select the right data set. It also helps validate the formulas used on the hub data to generate derived data. For every model being tested, it is important to first identify the core hierarchy list that forms the crux of the model and ensure that data is validated at every level of the hierarchy, in addition to validating the roll-up of numbers through the hierarchy or the cascade of numbers down the hierarchy as needed.
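As referenced above, here is a minimal sketch of the source-to-hub reconciliation check using pandas. The region/amount columns and figures are invented for illustration; a real reconciliation would compare the upstream extract with data exported from the Anaplan data hub.

```python
# A minimal sketch of source-to-hub reconciliation: compare rolled-up amounts
# between the upstream extract and the data hub load. Assumes pandas is available;
# column names and figures are hypothetical.
import pandas as pd

source = pd.DataFrame({
    "region": ["AMER", "AMER", "EMEA", "APAC"],
    "amount": [1200.0, 800.0, 950.0, 400.0],
})
data_hub = pd.DataFrame({
    "region": ["AMER", "EMEA", "APAC"],
    "amount": [2000.0, 950.0, 390.0],          # APAC differs by 10 – a defect to report
})

src_rollup = source.groupby("region", as_index=False)["amount"].sum()
recon = src_rollup.merge(data_hub, on="region", suffixes=("_source", "_hub"))
recon["difference"] = recon["amount_source"] - recon["amount_hub"]

mismatches = recon[recon["difference"].abs() > 0.01]
print(mismatches if not mismatches.empty else "Source and hub are reconciled")
```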
---
Page: 5 / 8
---
4. Validate the cornerstones

It is good practice to optimize test quality by covering cornerstone business scenarios that may otherwise be missed. Some recommendations for testing Anaplan models are listed below:

• Monitor the periodic data refresh schedules for the different list hierarchies used in the model and validate that the data is refreshed on time with the latest production hierarchy.
• As every model may have different user roles with selective access to dashboards, ensure that functional test cases with correctly configured user roles are included. This upholds the security of the model, since some dashboards or hierarchical data should be visible only to certain pre-defined user roles or access levels.
• The Anaplan model involves many data fields, some of which are derived from others based on one or more business conditions. Thus, it is advisable to include test scenarios around such conditions, for example testing warning messages that should be displayed, or testing whether cells are conditionally formatted based on user inputs that either pass or fail the business conditions.

5. Use automation tools for faster validation

IT methodologies are evolving from waterfall to agile to DevOps models, leading to more releases per year, shorter implementation cycle times and faster time-to-market. Thus, the test strategy of QA teams should also evolve beyond the waterfall model. Incorporating automation in the test strategy helps keep pace with shorter cycle times without compromising test quality. Anaplan implementation programs typically follow agile and have a wide range of testing requirements for data, integration and end-to-end testing. However, it can be challenging to deliver maximum test coverage within each sprint because testing Anaplan involves testing each scenario with multiple datasets. Thus, it is useful to explore options for automating Anaplan model testing to minimize delays during sprint testing and ensure timely test delivery. Some of these options include simple Excel-based formula worksheets and Excel macros, along with open-source automation tools such as Selenium integrated with Jenkins. This helps generate automated scripts that can be run periodically to validate certain functionalities in the Anaplan model for multiple datasets. These options are explained below:

• Reusable Excel worksheets – This involves a one-time activity to recreate the dashboard forms as simple tables in Excel worksheets. The data fields in Anaplan can be classified into three types: fields that require user input; fields where data is populated from various data sources within Anaplan or other systems; and fields where data is derived based on defined calculation logic. For derived fields, the formula used to derive the data value is embedded into the Excel cell so that the derived value is automatically calculated after data is entered in the first two kinds of fields. Using such worksheets accelerates test validation and promotes reuse of the same Excel sheet to validate calculation accuracy for multiple data sets, which is important for maintaining test quality (a simple worksheet-style check is sketched after this section).
• Excel macros – To test the relevant formula or calculation logic, replicate the Anaplan dashboard using Excel macros. A macro can be reused for multiple data sets, thereby accelerating testing and enhancing test coverage.
• Open source tools – Open-source tools like Selenium can be used to create automation scripts either for a specific functionality within the model or for a specific dashboard. However, using Selenium for Anaplan automation comes with certain challenges:
  › Automating the end-to-end scenario may not be feasible, since Anaplan requires switching between multiple user roles to complete the end-to-end flow.
  › The Anaplan application undergoes frequent changes from the Anaplan platform, while changes in the model build require constant changes to the scripts.
  › Some data validation in Anaplan may require referencing other data sets and applying calculation logic, which can make the automation code very complex and delay script runs.

6. Ensure thorough data validation

Anaplan provides a secure platform for strategic planning across various domains. It offers flexibility and consistency when handling complex information from distinct data sources across various departments within the same organization. Identifying the correct underlying data is crucial for successful quality assurance of business processes using the Anaplan model. There are two key approaches when testing data in Anaplan:

• User access level – Business process requirements in some Anaplan models allow only end users with certain access levels to view data and use it for planning. For instance, a multi-region sales planning model will include sales planners from different sales regions as end users. However, users should be allowed to view only the sales, revenue and other KPIs pertaining to their own region, as it would be a security breach to disclose the KPIs of other sales regions.
• Accuracy of data integrated from various systems – When the data being tested pertains to dollar amounts, for example sales revenue, it is critical to have a thorough reconciliation of data against the source, because a variation of a few dollars could lead to larger inaccuracies or discrepancies when the same data is rolled up the hierarchy or used to calculate another data field.

Since most Anaplan models contain business-critical numbers for financial reporting, it is important to run thorough tests to ensure accuracy.
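To illustrate the reusable-worksheet idea from practice 5, the sketch below recomputes a derived field from its input fields and compares it with the value shown on the dashboard for several data sets. The field names, commission rule and numbers are entirely hypothetical; in practice the same comparison is embedded as formulas in the Excel worksheet or driven from it (for example via openpyxl).

```python
# A rough sketch of what the reusable worksheet automates: recompute a derived field
# from the input fields and compare it with the value shown in the Anaplan dashboard.
# Field names, the commission formula and the data are hypothetical.
datasets = [
    {"sales": 10000.0, "quota": 8000.0, "rate": 0.05, "dashboard_commission": 550.0},
    {"sales": 20000.0, "quota": 25000.0, "rate": 0.05, "dashboard_commission": 950.0},
]

def expected_commission(row):
    """Illustrative rule: commission on sales, boosted 10% when sales exceed quota."""
    base = row["sales"] * row["rate"]
    return round(base * 1.1, 2) if row["sales"] > row["quota"] else round(base, 2)

for i, row in enumerate(datasets, start=1):
    expected = expected_commission(row)
    actual = row["dashboard_commission"]
    status = "PASS" if abs(expected - actual) < 0.01 else f"FAIL (expected {expected})"
    print(f"Data set {i}: {status}")
```

The same comparison loop can be rerun for every sprint's data sets, which is exactly the reusability benefit the worksheet approach is after.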
---
Page: 6 / 8
---
7. Integration testing

Integration testing should be an integral part of the testing strategy for any Anaplan application. Typically, there are multiple integration points that may have to be tested in an Anaplan implementation program owing to:

• Different data integration points – Disparate data sets are imported into the data hub and then into the Anaplan models using different kinds of integration options such as flat files, Anaplan Connect, Informatica, in-built data import, etc.
• Different Anaplan models – There may be more than one Anaplan model being implemented for a medium to large-scale organization for different kinds of planning. These need to be integrated with each other for smooth data flow. For instance, the output of a model built exclusively for sales forecasting may be an important parameter for another model that deals with sales planning across an organization's regional sales territories. Thus, besides testing the integration points between these models, it is advisable to have dedicated end-to-end cycle/sprint testing with scenarios across all these models and the integrated systems.
• Different data sets – The periodic refresh of data sets used across Anaplan models happens through Informatica, manual refresh, Tidal jobs, etc. QA teams should understand how each data set is refreshed, identify the relevant job names and test these to ensure that the latest active hierarchy is periodically refreshed and loaded into the model. This will eliminate inaccuracies arising from data redundancies owing to inactivation or changes in the data structure in the upstream systems.

Anaplan implementation programs can involve either standalone or inter-linked models. Irrespective of the type of implementation, an approach that follows the 7 best practices outlined in this paper will help QA teams optimize their strategy for Anaplan test projects.
---
Page: 7 / 8
---
Conclusion

Anaplan's cloud-based, enterprise-wide and connected platform can help global organizations improve their planning processes across various business sectors. The Anaplan model is a simple, integrated solution that enables informed decision-making along with accelerated and effective planning. The strategic approach is one that leverages domain knowledge, test data management, automation, data validation, and integration testing, to name a few. The Infosys 7-step approach to effective Anaplan testing is based on our extensive experience in implementing Anaplan programs. It helps testing teams benefit from the best strategy for QA across various business functions, thereby ensuring smooth business operations.
---
Page: 8 / 8
---
© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com

Infosys.com | NYSE: INFY | Stay Connected
***
# Infosys Whitepaper
Title: Self-service Testing - The antidote for stressed testing teams
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 8
---
VIEW POINT

SELF-SERVICE TESTING: THE ANTIDOTE FOR STRESSED TESTING TEAMS

Abstract

Testing is a key element in the software development lifecycle that ensures the delivery of quality products. As it has matured over the years in terms of processes and tools, a recent trend among clients is the need for 'self-service testing.' This point of view provides insights into client expectations and the manner in which testing is being transformed from a 'process and tools' mode to a 'self-service' mode.

Today, organizations and testing partners prefer one-time investments to build self-service platforms, and expect them to be operational throughout the IT journey at the lowest possible maintenance cost.
---
Page: 2 / 8
---
Introduction

New testing processes and tools are developed continuously to improve the quality of software. Today, the IT industry is gaining momentum in agile delivery and development and operations (DevOps), which is creating new possibilities by integrating development, test, and operations teams. To keep pace with these changes, testing processes and tools need to be transformed so that testing platforms are simple and accessible to all stakeholders.
---
Page: 3 / 8
---
Factors driving self-service testing

The diagram below highlights the challenges, opportunities, industry trends, and testing tools that are key to self-service testing.

Challenges
Organizations view testing as an integrated activity of product development and tend to shorten the overall development and deployment duration. This puts pressure on testing teams to work in parallel with design, development, and deployment. At the same time, constant requirement changes bring a significant amount of risk and rework before products go live.

Opportunities
Agile methodologies and DevOps drive synergies across business, development, test, and deployment teams. They provide good opportunities to enhance testing processes, ensuring that there is no information gap and that lead times are reduced. Open-source tools and technologies are another opportunity, providing more options for building automation frameworks.

Industry trends
Agile software development – Agile software development refers to a group of software development methodologies based on iterative development, where requirements and solutions evolve through collaboration between self-organizing, cross-functional teams. Scrum is the most popular agile methodology.
DevOps – DevOps involves coordinating software development, technology operations, and quality assurance to make these three, sometimes disparate, entities work together seamlessly. It can streamline business processes and add value by eliminating redundancies.

Testing tools
JavaScript and Selenium combined with vendor tools feature among the "12 Test Automation Trends for 2016" published on https://www.joecolantonio.com/, and they continue to drive test automation.
---
Page: 4 / 8
---
Self-service testing platform

The testing challenges, opportunities, and industry trends explained above are encouraging test practitioners to innovate in automation. As a result, the industry is headed towards building a platform comprising testing tools, custom frameworks, scripts to connect various applications, and environments. The diagram below illustrates the concept of self-service testing.

Common platform for unit, system integration, and acceptance testing

The platform provides an interface for developers, business, and test teams to provide input data. The input data can be steps or controls that navigate through the user interface (UI) to complete transactions, simple actions and data for a web service call, or a mapping sheet to verify a transformation. The core of the platform comprises an engine with various automation tools, custom scripts based on project needs, and a Selenium/Java framework to expose them through a UI or as API services. This can be further integrated with mechanisms like Jenkins and Maven to automate code deployment and invoke automated test scripts. The platform also includes an environment configuration panel to perform testing in DevOps, quality assurance (QA), and acceptance testing (AT) environments, along with a user-friendly dashboard to verify the status and results of the requests raised. A minimal sketch of such a request follows.
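A minimal sketch of the self-service request flow described above: a request names an environment and a test tag, and the platform composes the tagged run it would hand to the execution engine (for example via Jenkins). The environment URLs, tag names and pytest-based runner are assumptions for illustration.

```python
# A simple sketch of the self-service idea: a request names an environment and a test
# tag, and the platform composes the run it would hand to the execution engine.
# Environment URLs and tag names are hypothetical.
import shlex

ENVIRONMENTS = {
    "dev": "https://dev.example.internal",
    "qa":  "https://qa.example.internal",
    "at":  "https://at.example.internal",
}

def build_run_request(environment: str, test_tag: str):
    """Translate a self-service request into a tagged test run for the engine."""
    command = (
        f"pytest tests/ -m {shlex.quote(test_tag)} "
        f"--junitxml=reports/{environment}.xml"
    )
    env_vars = {"APP_BASE_URL": ENVIRONMENTS[environment]}   # read by the framework's config
    return command, env_vars

# A business user raises a request from the dashboard; the platform runs it unattended
# and publishes the JUnit report back to the dashboard.
command, env_vars = build_run_request("qa", "order_to_cash")
print(command, env_vars)
```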
---
Page: 5 / 8
---
Benefits

Self-service testing brings the following benefits to software delivery:

• Ready-to-use platform: Minimal technical knowledge is required; business knowledge is sufficient. Simple inputs are enough to perform business transactions, data transformation, and validation.
• Single framework to cater to all testing needs: The self-service testing platform can be used and reused by business and development teams, as well as for post-go-live testing. Different environments can be configured and selected during execution.
• Workflow automation: Workflows can be automated to execute batch jobs, perform outcome validation, and create reports.
• Continuous integration: Continuous integration and deployment invoke the validation process as soon as code is checked in to an environment.
---
Page: 6 / 8
---
Success stories

• A self-service testing tool implemented for major retailers: A Selenium framework was used to integrate reports, download data, extract data from the database, and perform validation. The navigation steps in the predefined reports were saved as reusable test scripts.
• Custom SOA testing framework: A user-friendly Excel workbook is used to write actions as test steps, forming the central test case repository, along with a data dictionary sheet to key in the input data. A check-box option allows only the selected test cases to be executed (a keyword-driven sketch of this idea follows).
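A rough sketch of the keyword-driven repository behind the custom SOA framework is shown below. In the real framework the rows come from the Excel test case repository and data dictionary (readable with a library such as openpyxl); here they are inlined, the keywords are hypothetical, and the service call is simulated.

```python
# A rough sketch of a keyword-driven test repository: each row holds a test case,
# an action keyword and its data, plus an 'execute' check-box. Rows are inlined here
# and the SOA service call is simulated.
test_repository = [
    {"case": "TC01", "execute": "Y", "keyword": "call_service", "data": {"order_id": "1001"}},
    {"case": "TC02", "execute": "N", "keyword": "call_service", "data": {"order_id": "1002"}},
    {"case": "TC03", "execute": "Y", "keyword": "validate_field", "data": {"field": "status", "expected": "SHIPPED"}},
]

last_response = {"status": "SHIPPED"}   # stand-in for the SOA response under test

def call_service(data):
    print(f"  calling order service with {data}")
    return True

def validate_field(data):
    return last_response.get(data["field"]) == data["expected"]

KEYWORDS = {"call_service": call_service, "validate_field": validate_field}

for row in test_repository:
    if row["execute"] != "Y":            # honour the check-box selection
        continue
    passed = KEYWORDS[row["keyword"]](row["data"])
    print(f"{row['case']}: {'PASS' if passed else 'FAIL'}")
```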
---
Page: 7 / 8
---
Conclusion

As various stakeholders work together to deliver business value within a reasonable time, a single, common testing platform becomes essential to cater to all phases of testing and to prevent duplication across the various types and phases of testing. A self-service testing platform provides the flexibility to adapt to changes and enhancements, and it fulfills testing needs across the software lifecycle while maintaining the uniqueness of each type of testing.

About the Author

Sri Rama Krishnamurthi
Sri Rama Krishnamurthi is a Senior Project Manager with Infosys, with 16 years of software testing experience in domains such as geographic information systems (GIS), finance, retail, and product testing. He has been practicing business intelligence (BI) testing for more than nine years. Test data management (TDM) and master data management (MDM) testing are other areas of his expertise.

References:
• https://saucelabs.com/resources/webinars/test-automation-trends-for-2016-and-beyond
• https://www.cprime.com/resources/what-is-agile-what-is-scrum/
---
Page: 8 / 8
---
© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com

Infosys.com | NYSE: INFY | Stay Connected
***
# Infosys Whitepaper
Title: Assuring the digital utilities transformation
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 8
---
PERSPECTIVE ASSURING THE DIGITAL UTILITIES TRANSFORMATION Gaurav Kalia Client Solution Manager
---
Page: 2 / 8
---
1. Latest trends in utilities

With a multitude of industries embracing the digital revolution, the utilities industry is also rapidly moving towards this transformation. A few key predictions that indicate where the utilities industry is heading in the coming years are listed below:

• By 2018, 70 percent of the utilities industry will launch major digital transformation initiatives that address at least one of these three areas: omni-experience, operating model, or information
• Data and analytics will play a key role in driving greater results from energy efficiency programs
• By 2019, 75 percent of utilities will deploy a comprehensive, risk-based cyber security strategy, representing a maturation from a compliance focus to a security focus
• Utility IT services spending is expected to grow at a compound annual growth rate (CAGR) of 5.9 percent from 2014 to 2018
• US$65 million will be spent by utilities on gamified applications by 2016 to engage consumers
• In addition, utilities will spend US$57.6 billion on smart grid as-a-service from 2014 to 2023
• About 624 million customers worldwide will use social media to engage with utilities by 2020
• 75 percent of utilities will rely on managed services and industry cloud by 2019 to predict asset failures or recommend solutions; however, they will retain control of asset optimization
• 50 percent of utilities in 2019 will spend five percent or more of their capital expenditure (CapEx) on operational technologies and the Internet of Things (IoT) to optimize distributed energy resources, field services, and asset operations
---
Page: 3 / 8
---
• To compete in redesigned markets and support new business models, 45 percent of utilities will invest in a new customer experience solution and 20 percent in a new billing system by 2017
• Forced by extreme weather events in 2017, 75 percent of utilities will make new IT investments to predict outages, reduce their duration by 5–10 percent, and improve customer communications
• Internet sales are rising by 20 percent; providing reliable service at peak loads is inevitable for businesses
• Over 15 percent of Internet users worldwide are physically challenged, which brings in the need for more interactive websites
• According to the International Data Corporation's (IDC) quarterly mobile phone tracker report, vendors shipped 472 million smartphones in 2011 compared to about 305 million units in 2010, and the figure touched 982 million by the end of 2015. On-demand access to the digital world means that this segment of customers has very different expectations
• They learn through collaboration and networks – the average Facebook user spends 55 minutes on the site daily
• They expect options and make decisions based on peer recommendations – 78 percent of consumers say they trust peer recommendations compared to the 14 percent who trust advertisements
• They constantly share their opinions and reviews of products, services, and brands online. There are over 1,500 blog posts every 60 seconds and 34 percent of bloggers post opinions about different products and brands
---
Page: 4 / 8
---
2. What do the above trends suggest?

From the above trends, we can understand why digital transformations have become critical and why the IT landscape must adapt accordingly:
• To move with technological advancements
• To sustain a competitive environment
• To enhance the customer service experience
• To enable better business progression

The key business drivers pushing the utility market towards digital transformation are:

Competitive environment
We understand from the trends that technology is advancing and there is an implicit need to use new-age and innovative knowledge. For example:
• Electric utility companies are using smart meters to enable two-way communication and reduce human intervention (removal of call centers)
• Water and wastewater utility companies are looking towards deploying geospatial information systems (GIS) to increase efficiency during the sampling, routing, and analyzing phases of their supply chain
• Business intelligence (BI) analytics reporting solutions are being implemented to enable enterprises to make positive decisions
• Other key trends include the adoption of smart grid technology and intelligent devices in the grid – machine-to-machine (M2M), home area network (HAN), smart home, EV, and cyber security

Customer experience
Digital transformation signifies an enhanced customer experience. Evidently, with technology advancing constantly, we see faster and more reliable websites with better user experiences, driven by the popularity of e-commerce applications and packages across all the leading sectors. Moreover, social media is changing the way consumers buy utility products, which points utility companies towards adding new value services and social networking tools to their websites.

Regulatory obligations
As utility companies comply with various regulatory obligations (environmental and non-environmental), new policies and standards keep emerging. The lack of tangible systems is driving clients towards digital transformation.

Operational efficiency
Utility companies will be looking towards better asset management and seamlessly integrated systems so that real-time information on assets is available at all times. This applies to both energy and water utility companies. Lastly, since most utility companies use older systems built on outdated platforms and technologies, the problems of limited support and high maintenance costs can be addressed using newer technologies. This clearly explains why digital transformation has become the need of the hour for the utility sector as well.
---
Page: 5 / 8
---
---
Page: 6 / 8
---
3. Key quality assurance (QA) focus areas in digital transformation programs

After understanding future trends, the utility market must comprehend the urgency of digital transformation to keep up with advancing developments. Let us look at the aspects that require special attention during QA in digital transformation programs. The challenges and the corresponding QA focus areas are summarized below:

• Agility – With the digital revolution and constantly evolving technology, we see a myriad of changes and the need to pilot newer, adaptable strategies that cater to a fast-growing and agile atmosphere. The necessity is to encompass the entire testing life cycle, right from requirement analysis to reporting. QA focus: a well-defined, agile QA strategy will be required during such digital transformation programs to cope with changing requirements.
• Exploding data and reporting – In our grandparents' generation, inputs were manageable; in this century, there are terabytes of customer data to manage, and with the very real possibility of further growth, maintaining and securing consumer databases is an unstated challenge. With a change in infrastructure, we also have to deal with data migration and, more importantly, impact-less data migration. QA focus: robust and proven data migration testing services will be required during such migration activities.
• Performance and security – The increase in online transactions, the advent of big data and the multitude of interconnected applications and devices pose challenges for application performance and data privacy. When a customer performs any kind of transaction on a web-based e-commerce site, he or she expects convenience and, most importantly, security; the performance and speed of a website matter because nobody wants a slow site in these fast times. QA focus: dedicated performance and security testing of these applications.
• Mobility – Given the anytime-anywhere nature of today's customer, flexible testing must be devised for different device configurations. Mobility brings challenges similar to the performance and security challenges generally seen in web-based applications. QA focus: mobility testing tools and solutions will be of great importance during such initiatives.
• Seamless customer experience – The ultimate goal of a seamless customer experience for any company is to make business easy for its customers. It is about finding new ways to engage with the audience in the right way. Today's customer is looking for real-time and expert interaction, which is possible with interactive websites. QA focus: usability and functional validation will be key focus areas while testing a website with new features such as live chat, gamification, and online surveys.
---
Page: 7 / 8
---
• Need for skilled resources – The increase in challenges demands skilled testing staff who are innovative and experienced in the utilities domain with niche skills. The focus is on a continuous learning attitude and out-of-the-box thinking. QA focus: experienced QA resources with domain expertise will be required to support such large digital transformation programs.
• Limited budget – Capital is the prime requirement once the necessity of modernization is understood, and everyone wants to gain more with less investment. Therefore, the need of the hour is test automation and innovative ideas to ensure cost effectiveness. QA focus: innovative test automation solutions will be the answer to this most important and silent question of capital.
---
Page: 8 / 8
---
4. Conclusion

Digital transformation is urgent and extremely significant for utility companies if they are to move with the fast-changing times and always be a step ahead of the competition. Nevertheless, while doing so, quality assurance (QA) is a major aspect and it is important to devise solutions to help the constantly changing environment. This can be achieved by modernizing the infrastructure and having a continuous improvement attitude.

© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com

Infosys.com | NYSE: INFY | Stay Connected
***
# Infosys Whitepaper
Title: Best practices to ensure seamless cyber security testing
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 4
---
VIEW POINT BEST PRACTICES TO ENSURE SEAMLESS CYBER SECURITY TESTING Abstract In a post COVID-19 world, the need to become digitally-enabled is more pressing than ever before. Enterprises are accelerating digital strategies and omni-channel transformation projects. But while they expand their digital footprint to serve customers and gain competitive advantage, the number and extent of exposure to external threats also increases exponentially. This is due to the many moving parts in the technology stack such as cloud, big data, legacy modernization, and microservices. This paper looks at the security vulnerabilities in open systems interconnection (OSI) layers and explains the best practices for embedding cyber security testing seamlessly into organizations.<|endoftext|>
---
Page: 2 / 4
---
Introduction

Open systems interconnection (OSI) comprises many layers, each of which has its own services and protocols. These can be used by hackers and attackers to compromise the system through different types of attacks.

Fig 1: Points of vulnerability across OSI layers (summarized below)
• Application layer (FTP, SMTP, DNS) – SQL injection, cross-site scripting (persistent and non-persistent), cross-site request forgery, cookie poisoning
• Presentation layer (data representation, encryption and decryption) – SSL attacks, HTTP tunnel attacks
• Session layer (establishing session communications) – session hijacking, sequence prediction attacks, authentication attacks
• Transport layer (TCP, UDP, SSL, TLS) – port scanning, ping flood and distributed denial-of-service (DDoS) attacks
• Network and data link layers (IP, ICMP, IPsec, OSPF; Ethernet, 802.11, LANs, fiber optic, Frame protocol) – packet sniffing, ICMP flood attacks, DoS attacks at DHCP, MAC address spoofing and other primarily internal attacks
• Physical layer (transmission media, bit stream and binary transmission) – data theft, hardware theft, physical destruction, unauthorized access to hardware/connections

For some OSI layers like Transport, Session, Presentation, and Application, some amount of exposure can be controlled using robust application-level security practices and cyber security testing. From a quality engineering perspective, it is important for testers to be involved in the digital security landscape. While there is no single approach to handle cyber security testing, the following five best practices can ensure application security by embedding cyber security testing seamlessly into organizations:
1. Defining and executing a digital tester's role in the DevSecOps model
2. Understanding and implementing data security testing practices in non-production environments
3. Security in motion – focusing on dynamic application security testing
4. Understanding the vulnerabilities in infrastructure security testing
5. Understanding roles and responsibilities for cloud security testing

Best practice 1: Defining and executing a digital tester's role in the DevSecOps model

DevSecOps means dealing with security aspects as code ('security as code'). It enables two things: 'secure code' delivered 'at speed'. Here is how security-as-code works:
• Code is delivered in small chunks. Possible changes are submitted in advance to identify vulnerabilities
• The application security team triggers scheduled scans in the build environment. Code checkout happens from SVN or Git (version control systems)
• Code is automatically pushed for scanning after applying UI and server-based pre-scan filters, and is scanned for vulnerabilities
• Results are pushed to the software security center database for verification
• If there are no vulnerabilities, the code is pushed to quality assurance (QA) and production stages. If vulnerabilities are found, these are backlogged for resolution

DevSecOps can be integrated to perform security tests on networks, digital applications and identity access management portals. The tests focus on how to break into the system and expose vulnerable areas. A small scan-and-gate sketch follows.
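To illustrate the scan-and-gate step in the pipeline above, here is a small sketch that runs a static security scanner over the checked-out code and fails the stage on high-severity findings. Bandit (an open-source Python static analysis tool) is used purely as an example; the actual pipeline may use a different scanner, and the severity gate shown is illustrative.

```python
# A small sketch of a scan-and-gate pipeline step, using Bandit as an example static
# scanner. The tool choice and the HIGH-severity gate are assumptions for illustration.
import json
import subprocess
import sys

def run_security_scan(source_dir: str) -> list:
    """Run Bandit over the checked-out code and return HIGH severity findings."""
    result = subprocess.run(
        ["bandit", "-r", source_dir, "-f", "json", "-q"],
        capture_output=True, text=True,
    )
    findings = json.loads(result.stdout or "{}").get("results", [])
    return [f for f in findings if f.get("issue_severity") == "HIGH"]

if __name__ == "__main__":
    high_findings = run_security_scan("src")
    if high_findings:
        # In the DevSecOps flow these would be pushed to the software security
        # center and backlogged for resolution before the build is promoted.
        for f in high_findings:
            print(f"{f['filename']}:{f['line_number']} {f['issue_text']}")
        sys.exit(1)                      # fail the pipeline stage
    print("No high-severity findings; promoting build to QA")
```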
---
Page: 3 / 4
---
External Document © 2021 Infosys Limited Best practice 2: Understanding and implementing data security testing practices in non- production environments With the advent of DevOps and digital transformation, there is a tremendous pressure to provision data quickly to meet development and QA needs. While provisioning data across the developing pipelines is one challenge, another is to ensure security and privacy of data in the non-production environment. There are several techniques to do this as discussed below: • Dynamic data masking, i.e., masking data on the fly and tying database security directly to the data using tools that have database permissions • Deterministic masking, i.e., using algorithm-based data masking of sensitive fields to ensure referential integrity across systems and databases • Synthetically generating test data without relying on the production footprint by ensuring referential integrity across systems and creating a self-service database • Automatic clean-up of the sample data, sample accounts and sample customers created Best practice 3: Security in motion – Focus on dynamic application security testing This test is performed while the application is in use. Its objective is to mimic hackers and break into the system. The focus is to: • Identify abuse scenarios by mapping security policies to application flows based on the top 10 security vulnerabilities for Open Web Application Security Project (OWASP) • Conduct threat modeling by decomposing applications, identifying threats and categorizing/rating threats • Perform a combination of automated testing and black-box security/ penetration testing to identify vulnerabilities Best practice 4: Understanding the vulnerabilities in infrastructure security testing There are infrastructure-level vulnerabilities that cannot be identified with UI testing. Hence, infrastructure-level exploits are created and executed, and reports are published. The following steps give insights to the operations team to minimize/eliminate vulnerabilities at the infrastructure layer: • Reconnaissance and network vulnerability assessment including host fingerprinting, port scanning and network mapping tools • Identification of services and OS details on hosts such as Domain Name System (DNS) and Dynamic Host Configuration Protocol (DHCP) • Manual scans using scripting engine and tool-based automated scans • Configuration reviews for firewalls, routers, etc.<|endoftext|>• Removal of false positives and validation of reported vulnerabilities Best practice 5: Understanding roles and responsibilities for cloud security testing With cloud transformation, cloud security is a shared responsibility. Cloud security testing must involve the following steps: • Define a security validation strategy based on the type of cloud service models: • For Software-as-a-Service (SaaS), the focus should be on risk-based security testing and security audits/ compliance • For Platform-as-a-Service (PaaS), the focus should be on database security and web/mobile/API penetration testing • For Infrastructure-as-a-Service (IaaS), the focus should be on infrastructure and network vulnerability assessment • Conduct Cloud Service Provider (CSP) service integration and cyber security testing. 
The focus is on identifying system vulnerabilities, CSP account hijacking, malicious insiders, identity/access management portal vulnerabilities, insecure APIs, shared technology vulnerabilities, advanced persistent threats, and data breaches
• Review the CSP's audits and perform compliance checks
These best practices can help enterprises build secure applications right from the design stage. Infosys has a dedicated Cyber Security Testing Practice that provides trusted application development and maintenance frameworks, security testing automation, security test planning, and consulting for emerging areas. It aims to integrate security into the code development lifecycle through test automation with immediate feedback to development and operations teams on security vulnerabilities. Our approach leverages several open-source and commercial tools for security testing instrumentation and automation.
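To make best practice 2 concrete, here is a minimal sketch of deterministic masking, assuming an HMAC-SHA-256 token derived from a shared masking key is an acceptable tokenization scheme for the organization; the field names, key source and sample records are hypothetical, and a production setup would normally rely on a dedicated masking tool.

```python
"""Minimal sketch of deterministic masking for non-production data.

Assumption: an HMAC-SHA-256 token derived from a shared masking key stands in
for the sensitive value. Because the mapping is deterministic, the same input
always yields the same token, which preserves referential integrity across
systems and databases.
"""
import hashlib
import hmac
import os

# Hypothetical key source; in practice this would come from a secrets manager.
MASKING_KEY = os.environ.get("MASKING_KEY", "change-me").encode()

def mask_value(value: str, length: int = 16) -> str:
    """Return a stable, irreversible token for a sensitive field."""
    digest = hmac.new(MASKING_KEY, value.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:length]

def mask_record(record: dict, sensitive_fields: tuple = ("email", "ssn")) -> dict:
    """Mask only the sensitive fields, leaving the rest of the record intact."""
    return {
        key: mask_value(str(val)) if key in sensitive_fields else val
        for key, val in record.items()
    }

if __name__ == "__main__":
    customer = {"id": 42, "email": "jane.doe@example.com", "ssn": "123-45-6789"}
    order = {"order_id": 7, "email": "jane.doe@example.com"}
    # The same email masks to the same token in both tables.
    print(mask_record(customer))
    print(mask_record(order))
```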
---
Page: 4 / 4
---
© 2021 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names
and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected

Conclusion

The goal of cyber security testing is to anticipate and withstand attacks and recover quickly from security events. In the current pandemic scenario, it should also help companies adapt to short-term change. Infosys recommends the use of best practices for integrating cyber security testing seamlessly. These include building secure applications, ensuring proper privacy controls of data at rest and in motion, conducting automated penetration testing, and having clear security responsibilities identified with cloud service providers.

About the
Authors Arun Kumar Mishra Senior Practice Engagement Manager, Infosys Sundaresasubramanian Gomathi Vallabhan Practice Engagement Manager, Infosys References 1. https://www.marketsandmarkets.com/Market-Reports/security-testing-market-150407261.html 2. https://www.infosys.com/services/validation-solutions/service-offerings/security-testing-validation-services.html
***
# Infosys Whitepaper
Title: Infosys Oracle Package | Independent Validation and Testing Services
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 8
---
WHITE PAPER ASSURING SUCCESS IN BLOCKCHAIN IMPLEMENTATIONS BY ENGINEERING QUALITY IN VALIDATION Arvind Sundarraman, Business Development Executive, Infosys Validation Solutions
---
Page: 2 / 8
---
Introduction to blockchain and smart contracts

A blockchain is a decentralized and distributed database that holds a record of transactions linked to it in blocks, which are added in a linear and chronological order. By design, it provides a secure, transparent, and immutable source of record, with the potential to change and revolutionize the way transactions are managed in a digital world.

Figure 1: Transaction linkage within blocks in a blockchain
Figure 2: Peer-to-peer architecture of blockchain using smart contracts in public and private networks

The implementation of this technology is generally carried out in a decentralized peer-to-peer architecture with a shared ledger that is made available to the authorized participants in the private and public network. This ensures that transactions are captured within blocks of information which are continually updated and securely linked to the chain, thereby ensuring visibility of changes as well as providing a foolproof way of handling transactions that mitigates the possibility of double spending or tampering. Smart contracts are protocols or rule sets embedded within the blockchain which are largely self-executing and enforce a contract condition. They ensure that the terms specified within the contract are executed on their own when transactions fulfil the specified conditions within the contract rules and are made visible to everyone on the network without any intervention, thereby guaranteeing autonomy, safety, and trust. Smart contracts also ensure that transactions are carried out instantaneously once the preconditions set within the contract are met.

Smart contracts are an integral part of the blockchain and go hand in hand with the distributed ledger that is the core of blockchain in ensuring its success.

External Document © 2018 Infosys Limited
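To make the transaction linkage of Figure 1 concrete, here is a minimal, illustrative sketch of hash-linked blocks in Python. It is a toy model of the chaining idea only, not the structure of any particular blockchain platform.

```python
"""Toy illustration of how blocks are linked in a linear, chronological chain.

This is a didactic sketch of hash chaining only; real platforms add consensus,
Merkle trees, signatures, and peer-to-peer replication on top of this idea.
"""
import hashlib
import json
import time
from dataclasses import dataclass, field

def _hash_block(index: int, timestamp: float, transactions: list, prev_hash: str) -> str:
    payload = json.dumps(
        {"index": index, "timestamp": timestamp,
         "transactions": transactions, "prev_hash": prev_hash},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

@dataclass
class Block:
    index: int
    transactions: list
    prev_hash: str
    timestamp: float = field(default_factory=time.time)

    @property
    def hash(self) -> str:
        return _hash_block(self.index, self.timestamp, self.transactions, self.prev_hash)

def append_block(chain: list, transactions: list) -> list:
    prev = chain[-1]
    chain.append(Block(prev.index + 1, transactions, prev.hash))
    return chain

def is_valid(chain: list) -> bool:
    """Any tampering with an earlier block breaks every later link."""
    return all(chain[i].prev_hash == chain[i - 1].hash for i in range(1, len(chain)))

if __name__ == "__main__":
    chain = [Block(0, ["genesis"], "0" * 64)]
    append_block(chain, [{"from": "A", "to": "B", "amount": 10}])
    append_block(chain, [{"from": "B", "to": "C", "amount": 4}])
    print("valid:", is_valid(chain))
    chain[1].transactions[0]["amount"] = 999   # tamper with history
    print("valid after tampering:", is_valid(chain))
```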
---
Page: 3 / 8
---
Source: https://en.wikipedia.org/wiki/Blockchain_(database)
Figure 3: Graph highlighting the rise in the adoption of blockchain in terms of transactions per day

Blockchain and the digital ecosystem

In today's world, digital transactions are growing exponentially, and the phenomenon is predicted to sustain and even outperform its climb in the next decade. Digital transactions are gaining popularity globally due to ease of use, improved security, and faster mechanisms for managing transactions.

Blockchain technology provides a secure way of handling transactions online and hence is of enormous relevance. The most widely known implementation of blockchain technology is in the financial sector, where Bitcoin as a cryptocurrency payment system was the pioneering system developed on this technology.

The value of blockchain technology, though, is not limited to digital wallets and payment systems; its application in a wider context has gained more relevance in recent times. Transactions through blockchain have also seen an impressive surge, similar to the growth in digital transactions. The graph highlights the rise in usage of blockchain in terms of transactions. This also reflects the increase in adoption and usage of this technology across domains, beyond its initially perceived home in the financial sector of payments and transactions.

External Document © 2018 Infosys Limited
---
Page: 4 / 8
---
A number of possible use cases across various domains where blockchain technology can be applied are detailed below; this shows the potential that blockchain holds across various industries and segments.

Financial services
• Post-trade settlement, person-to-person (P2P) payments, trade finance, know your customer (KYC), global remittance, syndicated loans, asset leasing, gifting, and mortgage services
Insurance
• Automated claims processing, fraud detection, P2P insurance, travel insurance, reinsurance, and KYC
Healthcare
• Patient records and automated claims processing through smart contracts
Manufacturing – Supply chain
• Tracking product origin, inventory management, smart contracts for multi-party agreements, and digitization of contracts/documents
Retail
• Retail supply chain, reward points tracking, blockchain-based marketplaces, and inventory management
Energy and utility
• Energy trading and P2P power grids
Media and entertainment
• Anti-piracy/copyright management, royalty management, and crowdfunding of new content
Transportation
• Rider-passenger coordination, review authentication, and Bitcoin payments
Communication
• Billing systems and call detail records (CDR)
• Roaming and network sharing access control
• Provisioning and identity management
• Mobile wallets and money

Challenges in testing blockchain implementations

Validating and verifying an implementation of blockchain presents a number of challenges due to the inherent structure of the technology as well as the distributed nature of the system.

• Technology stack
A primary factor that influences the required level of validation is whether the implementation is on a public platform like Ethereum or Openchain, or on a self-setup or customized platform that is purpose-built for the needs of the organization. The latter needs more setup and effort in testing. Open source and popular platforms like Ethereum have recommendations and guidance on the level of tests and have more mature methods, whereas an in-house implementation needs a detailed test strategy framed around the functionality that is customized or developed.

• Test environment
The availability and utilization of a test platform that provides a replica of the implementation is also a need; if one is not available, considerable time needs to be spent on setting it up or spawning it from the real implementation. Blockchain implementations like Bitcoin (testnet) and Ethereum (Morden) provide test instances that are distinct and separate from the original while providing the means to test advanced transaction functionality in a like-for-like mode.

• Testing for integration
An implementation of blockchain within a company's stack is expected to have interfaces with other applications. Understanding the means of interfacing and ensuring consistency with the existing interfaces is key to assuring that there are no disconnects at launch. A detailed view of the touch points and the application programming interfaces (APIs) that act as points of communication needs to be made available to the testing team in advance so that the appropriate interfaces can be exercised and tested during the validation phases.

• Performance testing
A major problem that could affect an implementation is estimating and managing the level of transactions that are anticipated on the production systems.
One of the key issues with the Bitcoin implementation has been delays in processing transactions due to a surge in usage. This has led to businesses withdrawing from accepting this form of cryptocurrency as well as partners leaving the consortium.

The need within the chain for verification of transactions by miners can lengthen the time taken to process and confirm a transaction; hence, during validation, a clearly chalked-out strategy to handle this needs to be outlined and applied.

• Security
Though blockchain technology has inherent security features that have made the protocol popular and trusted, this also places a lot of importance on the intended areas of application, which are generally high-risk business domains with potentially huge implications if security is compromised.

External Document © 2018 Infosys Limited
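Since the paper points to testnets as like-for-like environments for validation, here is a rough sketch of how a tester might smoke-test connectivity to an Ethereum test network and observe transaction volumes before running heavier suites. It assumes the web3.py client is installed; the RPC endpoint is a placeholder, and attribute names vary somewhat between web3.py versions.

```python
"""Sketch: smoke-testing an Ethereum test network endpoint with web3.py.

Assumptions: the `web3` package is installed and TESTNET_RPC_URL points to a
test-network RPC endpoint (placeholder below). Attribute names follow recent
web3.py releases (e.g. is_connected / eth.block_number) and may differ in
older versions.
"""
from web3 import Web3

TESTNET_RPC_URL = "https://example-testnet-rpc.invalid"  # placeholder endpoint

def testnet_smoke_test(rpc_url: str = TESTNET_RPC_URL) -> dict:
    w3 = Web3(Web3.HTTPProvider(rpc_url))
    if not w3.is_connected():
        raise RuntimeError(f"Cannot reach test network at {rpc_url}")

    latest = w3.eth.get_block("latest")
    return {
        "chain_head": w3.eth.block_number,
        "tx_in_latest_block": len(latest["transactions"]),
        "latest_block_timestamp": latest["timestamp"],
    }

if __name__ == "__main__":
    # Useful as a pre-check before running functional or performance suites.
    print(testnet_smoke_test())
```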
---
Page: 5 / 8
---
Figure 4: Illustration of the management system built on blockchain technology

Validation in a real-world situation

Validating a real-world implementation and ensuring that the testing verifies the intended use case is of paramount importance, as this is more complex and requires specific focus on understanding the needs as well as drafting the coverage required within the test cycle.

For a more detailed view of how the testing effort plays out on the ground, let us consider an example application of this technology: a telecom operator implements it to manage user access, permissions, and handover in roaming within the operations support systems (OSS) / business support systems (BSS) stack of the operator. It is assumed that the communication service provider (CSP) and the partner CSPs providing service to the customer during roaming have an agreement to implement a common blockchain-based platform as a solution to manage transactions in each other's networks as well as to communicate the CDRs of the subscribers. The implementation should ensure that user authentication, service authorization, and billing systems are managed between the CSPs while the user moves from the home mobile network to the visited mobile network of the partner CSP.

The depiction below illustrates the
management system employed in this scenario, which would be built on blockchain technology.

The above model provides a secure medium of authentication for the user entering the partner network, assuming that the identity of the user can be verified over a shared ledger system and signed by the respective base owners of the users. Similarly, the CDRs that relate to the roaming duration can be cleared over a joint clearing house system built on blockchain technology. This would ensure instantaneous management of transactions, and charges can be set up based on contracts to ensure that the information is relayed back to the customer instantaneously, providing far more agility and trust in the management of subscribers in other networks.

With this system design in place, let us look at the key activities that are needed to ensure that the implementation meets the desired objectives.

• System appreciation
The early involvement of the testing team is extremely important, since there should be a view of the applications within the stack that would be modeled on the new technology along with the affected system components. A detailed view of the impacted components needs to be included in the test plan, and a strategy put in place for each of these impacted components. For example, in the above case, the billing systems as well as the user identity management systems are impacted, whereas the customer relationship management (CRM) system may not be directly affected by the change. The testing scope for the impacted components would be higher than for the others, where a functional workflow with regression tests on the CRM systems could suffice.

• Test design assurance
In this phase, with an early system view, the team needs to put together a detailed test strategy and assign a
External Document © 2018 Infosys Limited
---
Page: 6 / 8
---
traceability to the requirements. Specific to blockchain testing, the key components that need to be checked in this phase include:
• Building a model of the structure of the blocks, the transactions, and the contracts that need to be tested
• Outlining a use case for each individual section and the end points to be validated; i.e., if the transaction is to pass the user credentials to the visited network for authentication, then detailing the scenarios under which testing could be required
• Estimating non-functional requirements (NFRs) as well as security testing needs. In the above example, the number of CDRs logged in the clearing house over a month should give a fair idea of the number of transactions expected, and hence performance tests can be considered at this level. For security testing, the APIs that are exposed need to be considered, as well as measuring the inherent level of security in the blockchain technology (private or public)

• Test planning
Test planning needs to consider the full test strategy as well as the methodology and the place to conduct the testing. For the above example, a full test strategy should outline the availability or non-availability of a test environment or a testnet and, if one is not available, a view of how a private one can be set up. A low-level view of how testing is conducted, along with the approach to each phase from unit through integration and functional tests, also needs to be finalized. It is recommended that the volume of tests follow a pyramid structure, with a good amount of lower-level (unit) tests to identify and correct the systems prior to integrating them with the rest of the systems in the stack. For the illustration below, it is assumed that an Ethereum-based blockchain solution has been implemented for managing user handling during roaming, and that it integrates with an existing Amdocs billing system.

Table 1: Test phases with volumes of tests, methodologies, and tools
| Testing phase | Volume of tests | Test methodology and tools |
| Developer / Unit testing | 2500 | Test-driven development (TDD) approach with a suitable framework |
| System testing | 1000 | Verify contracts, blocks, and blockchain updates with auto-triggered roaming use cases set up on contracts through scripts |
| Billing integration testing | 275 | Verify reflection of CDRs from the clearing house on the billing system (Amdocs) |
| Functional verification / UI tests | 50 | Automated tests for front-end verification of bills logged to the customers, e.g., Selenium scripts for web testing |

• Use cases map
The test plan needs to be verified and validated by business for inclusion of the user scenarios, which map to detailed test cases. Complete coverage can be ensured only when functional validation covers testing across the transactions (i.e., the user moving from the home network to the visited or external network), the blocks (i.e., the CDRs transmitted between the network partners) and the contracts (i.e., the rules allowing the user roaming permissions on a partner network) in all the different scenarios that govern the functionality.

• Test execution and result verification
Execution can ideally be automated with scripting, following a TDD approach for unit testing on a suitable framework and a similar approach as outlined in the test plan. The results then need to be consolidated and verified back.
A list of identified defects needs to be reported along with the test execution status and conformance. The focus in test execution should be on unit and system tests, as a higher proportion of defects in the core blockchain adoption can be detected in these phases. The integration and functional-level tests can then be carried out to verify the integration of the core with the existing framework. However, the rule set for the blockchain needs to be verified within the unit and system testing phases.

Guidance on the test strategy across the various test phases is provided below, with a call-out of the key activities to be managed within each phase.

External Document © 2018 Infosys Limited
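Since the approach above puts most of the test volume in TDD-style unit tests, here is an illustrative unit-test sketch for the roaming-contract rule described in this scenario. The RoamingContract class is a hypothetical, plain-Python stand-in for the real smart contract, with invented method and field names; the intent is only to show how contract rules can be exercised at the unit level before system and integration testing.

```python
"""Illustrative TDD-style unit tests for the roaming use case.

RoamingContract below is a hypothetical, plain-Python stand-in for the real
smart contract (its names are invented for this sketch); the goal is only to
show how contract rules can be verified with fast unit tests.
"""
import unittest

class RoamingContract:
    """Toy rule set: roaming is allowed only for authenticated subscribers
    whose home CSP has an agreement with the visited CSP."""

    def __init__(self, agreements: set):
        self.agreements = agreements          # e.g. {("csp-a", "csp-b")}
        self.authenticated = set()

    def authenticate(self, subscriber_id: str) -> None:
        self.authenticated.add(subscriber_id)

    def allow_roaming(self, subscriber_id: str, home: str, visited: str) -> bool:
        return subscriber_id in self.authenticated and (home, visited) in self.agreements

class RoamingContractTest(unittest.TestCase):
    def setUp(self):
        self.contract = RoamingContract(agreements={("csp-a", "csp-b")})

    def test_authenticated_subscriber_with_agreement_can_roam(self):
        self.contract.authenticate("sub-001")
        self.assertTrue(self.contract.allow_roaming("sub-001", "csp-a", "csp-b"))

    def test_unauthenticated_subscriber_is_rejected(self):
        self.assertFalse(self.contract.allow_roaming("sub-002", "csp-a", "csp-b"))

    def test_missing_agreement_is_rejected(self):
        self.contract.authenticate("sub-001")
        self.assertFalse(self.contract.allow_roaming("sub-001", "csp-a", "csp-c"))

if __name__ == "__main__":
    unittest.main()
```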
---
Page: 7 / 8
---
Figure 5: Test strategy across various test phases with a call-out on the key activities External Document © 2018 Infosys Limited
---
Page: 8 / 8
---
Summary

The key to successfully adopting a blockchain methodology rests in a well-balanced, detailed approach to design and validation. While the mechanism of validating and verifying is largely similar to that of any other system, there is a need to focus on specific areas and evolve a test strategy in line with the implementation construct applied for the domain. Infosys Validation Solutions (IVS), a market leader in quality assurance, offers next-generation solutions and services and engages with more than 300 clients globally. With focused offerings across various domains and segments, and with key offerings focused on the digital assurance space, Infosys Validation Solutions is committed to delivering high-quality products by adopting newer methodologies and solutions for partner enterprises across the globe.

© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
***
# Infosys Whitepaper
Title: Choosing the right automation tool
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 8
---
VIEW POINT CHOOSING THE RIGHT AUTOMATION TOOL AND FRAMEWORK IS CRITICAL TO PROJECT SUCCESS Harsh Bajaj, Technical Test Lead ECSIVS, Infosys
---
Page: 2 / 8
---
Organizations have become cognizant of the crucial role of testing in the software development life cycle and in delivering high-quality software products. As the competition in the IT sector grows stiffer, the pressure to deliver a larger number of high-quality products with fewer resources in limited time is increasing in intensity.

During development cycles, software tests need to be repeated to ensure quality. Every time the source code is modified, test cases must be executed. All iterations in the software need to be tested on all browsers and all supported operating systems. Manual execution of test cases is not only a costly and time-consuming exercise, but it is also prone to error.

Automation testing addresses these challenges presented by manual testing. Automation tests can be executed multiple times across iterations much faster than manual test cases, saving time as well as cost. Lengthy tests which are often skipped during manual test execution can be executed unattended on multiple machines with different configurations, thus increasing the test coverage. Automation testing helps find defects or issues which are often overlooked during manual testing or are impossible to detect manually – for example, spelling mistakes or hard coding in the application code. Automation also boosts the confidence of the testing team by automating repetitive tasks and enabling the team to focus on challenging and high-risk projects. Team members can improve their skill sets by learning new tools and technologies and pass on the gains to the organization. The time and effort spent on scientifically choosing a test automation product and framework can go a long way in ensuring successful test execution. Let us take a closer look at the various factors involved in the selection process.

Introduction

External Document © 2018 Infosys Limited
---
Page: 3 / 8
---
Identify the right automation tool

Identification of the right automation tool is critical to ensure the success of the testing project. Detailed analysis must be conducted before selecting a tool. The effort put into the tool evaluation process enables successful execution of the project. The selection of the tool depends on various factors such as:
• The application and its technology stack which is to be tested
• Detailed testing requirements
• Skill sets available in the organization
• License cost of the tool
There are various functional automation tools available in the market for automating web and desktop applications. Some of these are:

| Tool | Description |
| QTP (Quick Test Professional) / UFT (Unified Functional Testing) | Powerful tool from HP to automate web and desktop applications |
| Selenium | Open source automation tool for automating web applications |
| Watir (Web Application Testing in Ruby) | Open source family of Ruby libraries for automating web browsers |
| Geb | Open source automation tool based on Groovy |

Comparison Matrix

While analyzing various automation tools, a comparison of key parameters helps select the right tool for the specific requirements of the project. We have created a comparison chart of the tools listed above based on the most important parameters for automation projects. Organizations can assign values to these parameters as per their automation requirements. The tool with the highest score can be considered for further investigation (a minimal scoring sketch follows below).

External Document © 2018 Infosys Limited
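As an illustration of the scoring idea above, here is a minimal weighted score-card sketch. The weights and ratings are invented placeholders, not recommendations; each organization would plug in its own parameters and values.

```python
"""Minimal weighted score-card for comparing automation tools.

The weights and scores below are invented placeholders for illustration only;
an organization would substitute its own parameters and ratings.
"""

# Relative importance of each parameter (higher = more important).
WEIGHTS = {
    "license_cost": 3,
    "ease_of_support": 2,
    "script_creation_time": 2,
    "execution_speed": 3,
    "browser_support": 2,
}

# Ratings per tool on a 1-5 scale (illustrative values).
SCORES = {
    "UFT":      {"license_cost": 1, "ease_of_support": 5, "script_creation_time": 5, "execution_speed": 5, "browser_support": 2},
    "Selenium": {"license_cost": 5, "ease_of_support": 3, "script_creation_time": 2, "execution_speed": 3, "browser_support": 5},
    "Watir":    {"license_cost": 5, "ease_of_support": 2, "script_creation_time": 3, "execution_speed": 3, "browser_support": 5},
    "Geb":      {"license_cost": 5, "ease_of_support": 2, "script_creation_time": 3, "execution_speed": 3, "browser_support": 5},
}

def weighted_total(ratings: dict) -> int:
    return sum(WEIGHTS[param] * rating for param, rating in ratings.items())

if __name__ == "__main__":
    totals = {tool: weighted_total(ratings) for tool, ratings in SCORES.items()}
    for tool, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{tool:<10} {total}")
    best = max(totals, key=totals.get)
    print(f"Highest score (candidate for further investigation): {best}")
```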
---
Page: 4 / 8
---
The parameters below are grouped in the original chart under Ease of Adoption, Ease of Scripting and Reporting Capabilities, and Tools Usage.

| Parameter | UFT | Selenium | Watir | GEB |
| License cost | QTP is an HP licensed product available through single-seater, floating or concurrent licenses | Selenium is open-source software and is free | Watir is an open-source (BSD) family of Ruby libraries for automating web browsers and is free | GEB is open-source software based on Groovy and is free |
| Ease of support | Dedicated HP support | User and professional community support available | Limited support on open source community | Limited support on open source community |
| Script creation time | Less | Much more | More | More |
| Scripting language | VBScript | Java, C#, Python, Ruby, PHP, Perl, JavaScript | Ruby | Groovy |
| Object recognition | Through Object Spy | Selenium IDE, FireBug, FirePath | OpenTwebst (web recorder) | GEB IDE |
| Learning time | Less | Much more | Much more | Much more |
| Script execution speed | High | Low | Low | Low |
| Framework | In-built capability to build frameworks such as keyword-driven, data-driven and hybrid | JUnit, NUnit, RSpec, Test::Unit, TestNG, unittest | Ruby-supported frameworks: RSpec, Cucumber, Test::Unit | Grails, Gradle, Maven |
| Continuous integration | Can be achieved through Jenkins | Achieved through Jenkins | Achieved using Ruby script | Achieved using Grails/Gradle plug-in along with Jenkins Gradle plug-in |
| Non browser-based app support | Yes | No | No | No |
| Operating system support | Windows 8/8.1/7/XP/Vista (no other OS) | Windows, Mac OS X, Linux, Solaris (OS support depends on web-driver availability) | Windows 8.1, Linux 13.10, Mac OS X 10.9, Solaris 11.1 (needs JSSH compiled) | Windows XP/Vista/7, Linux, DOS (OS support depends on web-driver availability) |
| Browser support | IE (versions 6-11), Firefox (versions 3-24), Chrome (up to version 24) | Firefox, IE, Chrome, Opera, Safari | Firefox, IE, Chrome, Opera, Safari | Firefox, IE, Chrome, Opera, Safari |
| Device support | Supports iOS, Android, Blackberry, and Windows Phone via licensed products such as PerfectoMobile and Experitest | Two major mobile platforms: iOS and Android | Two major mobile platforms: iOS and Android | Driver available for iOS (iPhone and iPad) |

External Document © 2018 Infosys Limited
---
Page: 5 / 8
---
What is a test automation framework?

A test automation framework is a defined, extensible support structure within which the test automation suite is developed and implemented using the selected tool. It includes the physical structures used for test creation and implementation as well as the logical interaction between components such as:
• A set of standards or coding guidelines, for example, guidelines to declare variables and assign them meaningful names
• A well-organized directory structure
• Location of the test data
• Location of the Object Repository (OR)
• Location of common functions
• Location of environment-related information
• Methods of running test scripts and the location of the displayed test results
A well-defined test automation framework helps achieve higher reusability of test components, develop scripts that are easily maintainable and obtain high-quality test automation scripts. If the automation framework is implemented correctly, it can be reused across projects, resulting in savings on effort and a better return on investment (ROI) from automation projects. Let us discuss some key frameworks available in the industry today. After analyzing the pros and cons of these frameworks, most companies opt for the hybrid framework. This allows them to benefit from the best of multiple functional test automation frameworks available in the industry.

Modular Framework
• In the modular approach, reusable code is encapsulated into modular functions in external libraries. These functions can then be called from multiple scripts as required.
• This framework is well suited to situations where the application includes several reusable steps to be performed across test scripts.

Data-driven Framework
• In the data-driven approach, test data is kept outside the scripts (for example, in spreadsheets or files) and read at run time, so the same script can be executed against multiple data sets.

Hybrid Framework
• The hybrid model is a mix of the data-driven and modular frameworks.
• This framework mixes the best practices of different frameworks to suit the automation need.

Keyword-driven Framework
• In the keyword-driven framework, a keyword is identified for every action that needs to be performed and the details of the keyword are given in a spreadsheet (a minimal sketch of this idea follows below).
• This framework is more useful for non-technical users to understand and maintain the test scripts.
• Technical expertise is required to create a complex keyword library, and creating such a framework is a time-consuming task.

External Document © 2018 Infosys Limited
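To make the keyword-driven idea concrete, here is a minimal, illustrative sketch in Python: keywords from a spreadsheet-like table are dispatched to functions. The keywords and test steps are invented for this example; a real framework would read them from Excel and drive a tool such as Selenium instead of printing.

```python
"""Minimal sketch of a keyword-driven test executor.

The keyword table below stands in for rows that would normally come from a
spreadsheet; the keywords and steps are invented for illustration. In a real
framework each keyword would drive the application under test (for example,
through Selenium) instead of printing.
"""

def open_application(url: str) -> None:
    print(f"[open_application] {url}")

def enter_text(field: str, value: str) -> None:
    print(f"[enter_text] {field} = {value}")

def click(element: str) -> None:
    print(f"[click] {element}")

def verify_title(expected: str) -> None:
    print(f"[verify_title] expecting '{expected}'")

# Keyword library: maps spreadsheet keywords to implementation functions.
KEYWORDS = {
    "OpenApplication": open_application,
    "EnterText": enter_text,
    "Click": click,
    "VerifyTitle": verify_title,
}

# Rows as they might appear in a keyword spreadsheet: (keyword, arguments...)
TEST_STEPS = [
    ("OpenApplication", "https://aut.example.com/login"),
    ("EnterText", "username", "qa_user"),
    ("EnterText", "password", "secret"),
    ("Click", "login_button"),
    ("VerifyTitle", "Dashboard"),
]

def run(steps):
    for keyword, *args in steps:
        KEYWORDS[keyword](*args)   # unknown keywords raise KeyError

if __name__ == "__main__":
    run(TEST_STEPS)
```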
---
Page: 6 / 8
---
Implementing a Hybrid Framework with Selenium

Now let us see how to implement a hybrid framework with Selenium as the automation tool. The key points in the implementation are:
• Store the test data (any user input data) in an Excel file.
• Store the environment-related information (for example, QA, UAT, Regression) in a property file.
• Store the various objects in the application on which user action needs to be taken in object repository files.
• The test suite contains the logic to verify the acceptance criteria mentioned in the requirements.
• Execute the script on various browsers as per the need.
• Generate reports capturing screenshots and pass/fail results. To get advanced reports in Selenium, use a testing framework such as TestNG or JUnit.
[Framework diagram: the data layer (test data in Excel, environment property file) and the Object Repository (Object A, B, C) feed the test scripts and test suite, which are executed via an ANT builder against the Application under Test (AUT); a reporting module produces captured screenshots, XML-based logs and HTML-based reports.]
External Document © 2018 Infosys Limited
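The bullets above describe the hybrid (data-driven plus modular) layout; the sketch below shows one way it might look in Python with Selenium, using a CSV file instead of Excel for simplicity. The locators, URLs and file names are invented placeholders, and the snippet assumes the selenium package and a local Firefox/geckodriver installation.

```python
"""Sketch of a hybrid Selenium framework: data layer + object repository + test.

Assumptions: the `selenium` package is installed with a local Firefox/geckodriver;
login.csv, config.ini, the URL and the locators are invented placeholders.
"""
import configparser
import csv
from selenium import webdriver
from selenium.webdriver.common.by import By

# Object repository: logical names mapped to locators (normally a separate file).
OBJECT_REPOSITORY = {
    "username_field": (By.ID, "username"),
    "password_field": (By.ID, "password"),
    "login_button": (By.ID, "login"),
}

def load_environment(path: str = "config.ini", env: str = "QA") -> dict:
    """Data layer: environment-related information from a property file."""
    config = configparser.ConfigParser()
    config.read(path)
    return dict(config[env])

def load_test_data(path: str = "login.csv") -> list:
    """Data layer: user input data (CSV standing in for the Excel sheet)."""
    with open(path, newline="", encoding="utf-8") as fh:
        return list(csv.DictReader(fh))

def run_login_tests():
    env = load_environment()
    driver = webdriver.Firefox()
    try:
        for row in load_test_data():
            driver.get(env["base_url"])
            driver.find_element(*OBJECT_REPOSITORY["username_field"]).send_keys(row["username"])
            driver.find_element(*OBJECT_REPOSITORY["password_field"]).send_keys(row["password"])
            driver.find_element(*OBJECT_REPOSITORY["login_button"]).click()
            passed = row["expected_title"] in driver.title
            driver.save_screenshot(f"login_{row['username']}.png")   # evidence for reports
            print(f"{row['username']}: {'PASS' if passed else 'FAIL'}")
    finally:
        driver.quit()

if __name__ == "__main__":
    run_login_tests()
```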
---
Page: 7 / 8
---
Implementing a Hybrid Framework with QTP/UFT

Let us discuss how to implement a hybrid framework with QTP/UFT as the automation tool. The key points in the implementation are:
• Store the test data (any user input data) in an Excel file.
• Create a run manager sheet to drive the test execution.
• Store the environment-related information in a property file.
• Store the objects in the application on which user actions are taken in object repository files.
• Divide the test cases into modular functions, keeping common functions separate so they can be used across projects.
• The main script includes the common functions, object repository and test data. Generate different types of reports as per the business requirements.
[Framework diagram: the data layer (test data in Excel, environment data) and a multi-application Object Repository (App 1, App 2, App 3) feed reusable functions and the test suite, which is executed against the Application under Test (AUT); an execution summary and reporting module capture test script results and an error log.]
External Document © 2018 Infosys Limited
---
Page: 8 / 8
---
Executing a Proof of Concept for the Selected Tool

The last phase of tool evaluation is the proof of concept (PoC). The tool selected may satisfy your criteria conceptually, but it is advisable to test the tool using a few scenarios. Almost every tool vendor provides an evaluation version of their tool for a limited time period. The following steps need to be considered during the PoC:
• Choose a few scenarios in such a way that they cover different objects and controls in the application.
• Select the tool(s) based on a comparative study.
• Automate the chosen scenarios using the selected tool(s).
• Generate and analyze various reports.
• Analyze the integration of the tool with other tools, such as the available test management tool, for example, QC (Quality Center), and with continuous integration tools such as Jenkins.
While evaluating multiple tools, generate a score-card based on various parameters such as ease of scripting, integration, usage and reports generated, and choose the tool with the maximum score. When the PoC is completed, the team can be more confident about successfully automating the application using the selected tool.

Conclusion

The process of automation framework design and development requires detailed planning and effort. To achieve the desired benefits, the framework must be accurately designed and developed. Such a framework can then be used across projects in an organization and provides substantial ROI. When choosing an automation framework, it is crucial to ensure that it can easily accommodate the various automation testing technologies and changes in the system under test.

One of the key factors contributing to the success of any test automation project is identifying the right automation tool. A detailed analysis in terms of ease of use, reporting and integration with various tools must be performed before selecting a tool. Though such selection processes call for focused effort and time, this investment is worth making because of the great impact it has on the success of the automation project.

About the
Author Harsh Bajaj is a Technical Test Lead with Infosys Independent Validation Services Team. She has led various test automation projects in the telecom domain. She is proficient in various test management and automation tools. About the Reviewer Gautham Halambi is a Project Manager with Infosys and has more than 9 years of experience. He has worked on multiple telecom testing projects and has led and managed large manual and automation testing teams.<|endoftext|>References www.seleniumhq.org www.watir.com, http://www.gebish.org/ http://www.automationrepository.com/ http://en.wikipedia.org/wiki/Keyword-driven_testing © 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
***
# Infosys Whitepaper
Title: Need of a Test Maturity Model
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 8
---
PERSPECTIVE NEED FOR A COMPREHENSIVE TEST MATURITY MODEL - Reghunath Balaraman, Harish Krishnankutty Abstract Constant change and the ever-growing complexity of business have necessitated that IT organizations, and specifically Test/QA organizations, make thorough and periodic introspection of their processes and delivery capabilities. This is necessary to ensure that at all times the Test/QA organization, and its systems and processes, are relevant and available to support business needs. While there are multiple maturity models in the marketplace to help this process, they are not yet comprehensive enough and fail to provide today's dynamic businesses with the much-needed flexibility and power of customization. The need of the hour is a comprehensive Test/QA maturity assessment model which not only answers the requirements of customization and flexibility, but also ensures relevance in today's complex delivery structures of multi-vendor scenarios, multi-location engagements, global delivery models, etc.
---
Page: 2 / 8
---
The global economic crisis and revolutionary technology trends have changed the role of IT organizations in supporting business growth. Though the recession created a scarcity of capital for IT investments, the demands and expectations on the ability of IT to quickly adapt and support business have only increased multifold. In addition, rapid and revolutionary changes in technologies are forcing companies to recast their entire IT landscape. All these factors together have created a complex environment where the demand for change from the business is high, the capital for investment is scarce and time-to-market is critical to success. The constant change and ever-growing complexity of the business environment, and the associated risks, have necessitated that organizations make thorough and periodic introspection of their processes and delivery capabilities to ensure they operate in an efficient, effective and agile manner. A key requirement, above all, is the ability to ensure that there are adequate controls in place to ensure quality in the processes and the outcomes, no matter the extent of change being introduced in the business or technology landscape of the organization. This is where the Test/QA organization's capability comes under the scanner. An organization's ability to assure and control the quality of its IT systems and processes largely determines the success or failure of the business in capturing, servicing and expanding its client base. When quality and reliability play such a significant role in determining the current and future course of business outcomes delivered, it is imperative that the quality assurance function itself is evaluated periodically for the relevance, effectiveness and efficiency of its processes, practices and systems. An objective self-introspection is the ideal first step. However, more often than not, QA organizations fall short of using this process to unearth gaps in their current systems and practices. Also, many organizations may have lost touch with the ever-evolving world of QA and may not be aware of the leading practices and systems available today. This necessitates an independent assessment of the organization's QA practices to benchmark them against the practices prevalent in the industry and to get that all-important question, "where do we stand in comparison with industry standards?", answered. Also, the assessment of maturity in testing processes becomes critical in laying out the blueprint for a QA/testing transformation program that would establish the function as a fit-for-purpose, and often world-leading, one.
External Document © 2018 Infosys Limited
---
Page: 3 / 8
---
Limitations of traditional approaches to Test/QA Maturity Assessment

There are several models, proprietary and otherwise, available for assessing the maturity of IT processes and systems, including quality assurance. Most of these are developed and promoted as models that help an organization certify capabilities in one or more areas of the software development lifecycle. Like all other models and frameworks that lead to certification, these maturity models too have a fixed framework for an organization to operate within, and provide very little flexibility to address specific assessment needs. Further, these models fail to help organizations assess overall process maturity due to the following limitations:

Inability to accommodate and account for heterogeneous delivery structures
Over the last decade or so, most organizations have evolved into a heterogeneous composition of internal staff and service providers, delivering services through global delivery models with diverse talent, disparate processes, etc. All this has made assessing an organization's process maturity increasingly difficult. The existing maturity models in the marketplace are not flexible enough to accommodate these complex delivery structures created through multi-vendor scenarios, multi-location engagements focusing on selective parts of the software development lifecycle, etc. This significantly reduces the overall effectiveness of the output provided by the existing maturity models and their applicability to the client situation.

Focused on comprehensive certification rather than required capabilities
Most conventional models are "certification focused" and can help organizations in assessing their IT process capabilities and getting certified. They are exhaustive in their coverage of process areas and answer the question, "how comprehensive are the processes and practices to service a diverse set of users of the QA services?". Such a certification is often a much-needed qualification for IT service provider organizations to highlight their process capability and maturity to diverse clients and prospects. However, most non-IT businesses maintain IT divisions to support their business and are more interested in selectively developing the required capabilities of their respective IT groups, leading to efficient business processes and better business outcomes. Hence
External Document © 2018 Infosys Limited
---
Page: 4 / 8
---
the focus of maturity assessments in these organizations is not certification, but the ability to deliver specific business outcomes. Since the traditional assessment models are often certification-focused, most non-IT businesses find it an overhead to go through an exhaustive assessment process that does not help them answer the question, "how effective are my organization's QA processes and practices in ensuring the quality of my business outcomes?"

Staged vs. continuous models for growth in maturity
The majority of certification models follow a staged approach, which means that the organization has to satisfy all the requirements of a particular level and get certified at that level before becoming eligible to progress to the next one. However, most organizations are selective in their focus and want to develop those areas that are relevant and necessary to their business, rather than meeting all the requirements just to get certified at a particular level. Because of the staged approach to certification, such maturity models do not present organizations with a good view of where their current capabilities stand with respect to what is needed by the organization.

Lack of focus on QA
The existing maturity models primarily focus on software development and treat testing as a phase in the software development lifecycle. However, today, testing has evolved into a mature and specialized discipline in the software industry, and hence the ability of the traditional models to assess QA/testing processes and practices to the required level of detail is very limited. They fall short for organizations that have realized the need for and importance of an independent testing team and want to manage the QA maturity mapping process as an independent entity. Hence, the various dimensions of the test organization should be given adequate focus in the maturity assessment approach, covering the process, people and technology aspects of testing.

External Document © 2018 Infosys Limited
---
Page: 5 / 8
---
A comprehensive model for assessing an organization's test capabilities and its ability to handle transformational programs

Now that we have looked at the shortcomings of the traditional models of QA assessment, it is time to answer that all-important question, "what should a comprehensive model for assessing QA/Test maturity be like?". The key attributes of a comprehensive QA/Test assessment framework/model can be summed up as follows:

Provide business-comprehensible, decision-aiding results
The model should allow for selective assessment of the parameters relevant to maturity in the context of the business. The results of the assessment should help the business identify and plot possible immaturity in its systems and processes, using lead indicators that have a negative impact on the business. These indicators should help senior management decide whether to go for a detailed assessment of maturity before any adverse effect on the business is felt.

Choice of business-relevant factors and focus areas
The model should be flexible enough to provide the right level of focus on the various factors that the business deems relevant and that contribute to the overall maturity index. For example, an organization which depends on one or a set of service providers for its key
IT services may want strong governance and gating mechanisms, while another organization that does testing in-house and leverages vendors for development will have a much wider focus on maturity in processes and practices. Basically, the model should be flexible enough to account for the intent of the assessment, as outlined by the organization.

Detailed and comprehensive view of areas of improvement and strengths
The model should also be one that helps determine the maturity of the testing organization in a detailed manner. The methods and systems of the model should provide a robust mechanism for objectively calculating the maturity level of the testing organization, based on the behaviors it exhibits. It should provide the members of the QA organization with a detailed view of the areas of strength (and hence to be retained) and the areas of improvement. The model should enable the testing organization to understand the measures that should be implemented at a granular level, rather than at a high level, and thereby help the organization to focus on its key QA dimensions
External Document © 2018 Infosys Limited
---
Page: 6 / 8
---
and strengthen the maturity of these dimensions.

A frame of reference for improvement initiatives
The comprehensive maturity model should provide the organization with a roadmap to move its QA/testing processes and practices to a higher level of maturity and effectiveness. It should provide a reference framework for selective improvement of capabilities, keeping in mind the business context and organizational objectives. This will help the organization design a roadmap for improvement and devise ways to implement it effectively.

Conclusion
In order to meet the needs of a dynamic business environment and a rapidly evolving technology space, IT organizations need to respond quickly and efficiently with high-quality, high-reliability and cost-effective processes and systems. This calls for a robust and scalable QA organization that can guard and ensure the quality of the solutions that are put into operation, and that assesses its own capabilities and maturity periodically to ensure business relevance and effectiveness.

Hence, a comprehensive QA maturity model that assists organizations in this assessment should move away from certification-based models with "generic" and "hard-to-customize" stages to a model that is adaptable to the context in which the business operates. It needs to be a model that evaluates the factors that influence the maturity and quality of processes at a detailed level and helps the organization to embed quality and maturity in its processes, governance and development of key competencies. This would help ensure the maturity of operations and promote continuous improvement and innovation throughout the organization.

External Document © 2018 Infosys Limited
---
Page: 7 / 8
---
About the
Authors

Reghunath Balaraman (Reghunath_Balaraman@infosys.com)
Reghunath Balaraman is a Principal Consultant and has over 16 years of experience. A postgraduate in engineering and management, he has been working closely with several large organizations to assess the maturity of their test and QA organizations and to help them build mature and scalable QA organizations. Reghunath is also well versed in several industry models for assessing the maturity of software testing.

Harish Krishnankutty (Harish_T@infosys.com)
Harish Krishnankutty is an Industry Principal in Infosys' Independent Validation Solutions unit and has over 14 years of experience in the IT industry. His area of specialization is QA consulting and program management, and he has extensive expertise in the design and implementation of Testing Centers of Excellence / Managed QA services. Currently, he focuses on the development of tools, IP and services in different areas of specialized testing, including Test Automation, SOA Testing, Security Testing, User Experience Testing, Data Warehouse Testing and Test Data Management.

External Document © 2018 Infosys Limited
---
Page: 8 / 8
---
© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
***
# Infosys Whitepaper
Title: DAST Automation for Secure, Swift DevSecOps Cloud Releases
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 8
---
Abstract DevSecOps adoption in the cloud goes well beyond merely managing continuous integration and continuous deployment (CI/CD) cycles. Its primary focus is security automation. This white paper examines the barriers organizations face when they begin their DevSecOps journey, and beyond. It highlights one of the crucial stages of security testing known as Dynamic Application Security Testing (DAST). It explores the challenges and advantages of effectively integrating DAST into the CI/CD pipeline, on-premises and in the cloud. The paper delineates the best practices for DAST tool selection and chain set-up, which assist in shift-left testing and cloud security workflows that offer efficient security validation of deployments with risk-based prompt responses.
DAST AUTOMATION FOR SECURE, SWIFT DEVSECOPS CLOUD RELEASES WHITE PAPER
---
Page: 2 / 8
---
Traditional security practices involve security personnel running tests, reviewing findings, and providing developers with recommendations for modifications. This process, including threat modeling, conducting compliance checks, and carrying out architectural risk analysis and management, is time-consuming and incongruous with the speed of DevOps. Some of these practices are challenging to automate, leading to a security and DevOps imbalance. To overcome these challenges, many organizations have shifted to an agile DevOps delivery model. However, this exerts significant pressure on DevOps to achieve speed with security as part of the CI/CD pipeline. As a result, release timelines and quality have been impacted due to the absence of important security checks or the deployment of vulnerable code under time pressure.

Even as DevOps was evolving, the industry concurrently fast-tracked its cloud transformation roadmap. Most organizations shifted their focus to delivering highly scalable applications built on customized modern architectures with 24/7 digital services. These applications include a wide-ranging stack of advanced tiers, technologies, and microservices, backed by leading cloud platforms such as AWS, GCP, and Azure. Despite the accelerated digital transformations, a large number of organizations continue to harbor concerns about security. The year-end cybercrime statistics provide good reason to do so:
1. The global average cost of a data breach is an estimated US $4.35 million, as per IBM's 2022 data breach report1
2. Cybercrime cost the world US $7 trillion in 2022 and is set to reach US $10.5 trillion by 2025, according to Cybersecurity Ventures2
Evidently, security is an important consideration in cloud migration planning. Speed and agility are imperatives while introducing security to DevOps processes. Integrating automated security checks directly into the CI/CD pipeline enables DevOps to evolve into DevSecOps. DevSecOps is a flexible collaboration between development, security, and IT operations. It integrates security principles and practices into the DevOps life cycle to accelerate application releases securely and confidently. Moreover, it adds value to business by reducing cost, improving the scope for innovation, speeding recovery, and implementing security by design. Studies project DevSecOps to reach a market size of between US $20 billion and US $40 billion by the end of 2030.
Background
---
Page: 3 / 8
---
DevSecOps implementation challenges

As enterprises race to get on the DevSecOps bandwagon, IT teams continue to experience issues:
• 60% find DevSecOps technically challenging 3
• 38% report a lack of education and adequate skills around DevSecOps 3
• 94% of security and 93% of development teams report an impact from the talent shortage 1
Some of the typical challenges that IT teams face when integrating security into DevOps, on-premise or in the cloud, are:
People/culture challenges:
• Lack of awareness among developers of secure coding practices and processes
• Want of collaboration and cohesive, skillful teams with development, operations, and security experts
Process challenges:
• Security and compliance remain an afterthought
• Inability to fully automate traditional manual security practices to integrate into DevSecOps
• Continuous security assessments without manual intervention
Tools/technology challenges:
• Tool selection, complexity, and integration problems
• Configuration management issues
• Prolonged code scanning and consumption of resources

Solution

Focusing on each phase of the modern software development life cycle (SDLC) can help strategically resolve DevSecOps implementation challenges arising from people, processes, and technology. Integrating different types of security testing for each stage can help overcome the issues more effectively (Figure 1).

Figure 1: Modern SDLC with DevSecOps and Types of Security Testing
• PLAN (Requirements): Threat Modelling
• CODE (Code Repository): Software Composition Analysis and Secret Management
• BUILD (CI Server): Secure Code Analysis and Docker Linting
• TEST (Integration Testing): Dynamic Application Security Testing
• RELEASE (Artifact Repository): Network Vulnerability Assessments
• DEPLOY (CD Orchestration): System/Cloud Hardening
• OPERATE (Monitor): Cloud Configuration Reviews
---
Page: 4 / 8
---
What is DAST?

DAST is the technique of identifying the vulnerabilities and touchpoints of an application while it is running. DAST is easy even for beginners to get started on without in-depth coding experience. However, DAST requires a subject matter expert (SME) in the area of security to configure and set up the tool. An SME with good spidering techniques can build rules and configure the correct filters to ensure better coverage, improve the effectiveness of the DAST scan, and reduce false positives.

Best practices to integrate DAST with CI/CD
• Integrate the DAST scan in the CI/CD production pipeline after provisioning the essential compute resources, knowing that the scan will take under 15 minutes to complete. If not, create a separate pipeline in a non-production environment
• Create separate jobs for each test in the case of large applications, e.g., SQL injection and XSS, among others
• Consider onboarding an SME with expertise in spidering techniques, as the value created through scans is directly proportional to the skills exhibited
• Roll out security tools in phases based on usage, from elementary to advanced
• Fail builds that report critical or high-severity issues
• Save time building test scripts from scratch by leveraging existing scripts from the functional automation team
• Provide links to knowledge pages in the scan outputs for additional assistance
• Pick tools that provide APIs
• Keep the framework simple and modular
• Control the scope and false positives locally instead of maintaining a central database
• Adopt the everything-as-code strategy, as it is easy to maintain

Besides adopting best practices, the CI/CD environment needs to be test-ready. A basic test set-up includes:
• A developer machine for testing locally
• A code repository for version control
• A CI/CD server for integrations and for running tests with the help of a slave/runner
• A staging environment
There can be several alternatives to the set-up based on the toolset selection. The following diagram depicts a sample (see Figure 2).
Figure 2: DevSecOps Lab Set-up
---
Page: 5 / 8
---
Right tool selection
With its heavy reliance on tools, DevSecOps enables the automation of engineering processes, such as making security testing repeatable, increasing testing speed, and providing early, qualitative feedback on application security. Selecting the appropriate security testing tools for each type of security testing, and applying the correct configuration in the CI/CD pipeline, is therefore critical.
Challenges in tool selection and best practices
Common pitfalls:
• Lack of standards in tool selection
• Security issues arising from tool complexity and integration
• Inadequate training, skills, and documentation
• Configuration challenges
Best practices in tool selection:
• Expert coverage of tool standards
• Essential documentation and security support
• Potential for optimal tool performance, including language coverage, open-source or commercial options, the ability to ignore issues, incident severity categories, failure on issues, and results reporting features
• Cloud technology support
• Availability of customization and integration capabilities with other tools in the toolchain
• Continuous vulnerability assessment capability
Best practices in tool implementation:
• Create an enhanced set of customized rules for tools to ensure optimum scans and reliable outcomes
• Plan incremental scans to reduce the overall time taken
• Use artificial intelligence (AI) capabilities to optimize the analysis of vulnerabilities reported by tools
• Aim for zero-touch automation
• Build in quality through automated gating of the build against the desired security standards
After selecting the CI/CD and DAST tools, the next step is to
set up a pre-production or staging environment and deploy the web application. This set-up enables DAST to run in the CI/CD pipeline as part of integration testing. Let us consider an example using the widely available open-source DAST tool, Zed Attack Proxy (ZAP). Some of the key considerations for integrating DAST in the CI/CD pipeline using ZAP (see Figure 3) are listed below; a minimal scripting sketch follows the list:
• Test on the developer machine before moving the code to the CI/CD server and GitLab CI/CD
• Set up the CI/CD server and GitLab; ensure ZAP container readiness with Selenium on Firefox, along with the custom scripts
• Reuse the functional automation scripts, modifying them only for security testing use cases and data requirements
• Push all the custom scripts to the Git server and pull the latest code; run the pipeline after meeting all prerequisites
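As an illustration of the considerations above, the sketch below drives a ZAP spider and active scan from a CI job using the ZAP Python client (the `zaproxy` package). It assumes a ZAP daemon is already running as a container or service on the CI runner and that the staging deployment is reachable; STAGING_URL, ZAP_ADDRESS, and ZAP_API_KEY are placeholders injected by the pipeline rather than values from this paper.

```python
# Minimal sketch of driving a DAST scan from a CI job with the ZAP Python client.
# A ZAP daemon is assumed to be running (e.g., as a service container) and acting
# as the scan engine; the script only orchestrates the spider and active scan.
import os
import time

from zapv2 import ZAPv2

STAGING_URL = os.environ.get("STAGING_URL", "https://staging.example.internal")  # placeholder
ZAP_ADDRESS = os.environ.get("ZAP_ADDRESS", "http://127.0.0.1:8080")              # placeholder
ZAP_API_KEY = os.environ.get("ZAP_API_KEY", "")                                   # placeholder

zap = ZAPv2(apikey=ZAP_API_KEY,
            proxies={"http": ZAP_ADDRESS, "https": ZAP_ADDRESS})

zap.urlopen(STAGING_URL)                      # seed the ZAP site tree

spider_id = zap.spider.scan(STAGING_URL)      # crawl the staging application
while int(zap.spider.status(spider_id)) < 100:
    time.sleep(5)

ascan_id = zap.ascan.scan(STAGING_URL)        # active scan against staging only
while int(zap.ascan.status(ascan_id)) < 100:
    time.sleep(10)

# Summarise findings; a separate gating step can fail the build on High risk
alerts = zap.core.alerts(baseurl=STAGING_URL)
high = [a for a in alerts if a.get("risk") == "High"]
print(f"{len(alerts)} alerts in total, {len(high)} high-risk")
```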
---
Page: 6 / 8
---
Figure 3: Integrating DAST in the CI/CD Pipeline Using ZAP
---
Page: 7 / 8
---
DevSecOps with DAST in the cloud
Integrating DAST with cloud CI/CD requires a different approach:
• Identify, leverage, and integrate cloud-native CI/CD services, continuous logging and monitoring services, auditing and governance services, and operations services with regular CI/CD tools – mainly DAST
• Control all CI/CD jobs through a server-and-agent architecture, using containers such as Docker to build and deploy applications via cloud orchestration tools
An effective DAST DevSecOps-in-cloud architecture appears as shown in Figure 4.
Best practices:
• Control access to pipeline resources using identity and access management (IAM) roles and security policies
• Always encrypt data in transit and at rest
• Store sensitive information, such as API tokens and passwords, in a secrets manager (a minimal retrieval sketch appears at the end of this section)
Key steps:
1. The user commits the code to a code repository
2. The build tool creates artifacts and uploads them to the artifact repository
3. Integrated tools perform the SCA and SAST tests
4. Reports of critical/high-severity vulnerabilities from the SCA and SAST scans go to the security dashboard for fixing
5. Code is deployed to the staging environment if the reports indicate no vulnerabilities, or only vulnerabilities marked as ignorable
6. Successful deployment triggers a DAST tool, such as OWASP ZAP, for scanning
7. The user repeats steps 4 to 6 if a vulnerability is detected
8. If no vulnerabilities are reported, the workflow triggers an approval email
9. Receipt of approval schedules automatic deployment to production
Figure 4: DAST DevSecOps in Cloud Workflow
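To illustrate the secrets-handling practice, the following sketch retrieves a DAST API token from a cloud secrets store at pipeline runtime instead of hard-coding it. AWS Secrets Manager, the region, and the secret name dast/zap-api-key are assumptions made for the example; any provider's secret store can play the same role.

```python
# Illustrative only: fetch a DAST API token from a cloud secrets store at
# pipeline runtime so that no credential is stored in pipeline code or config.
import json

import boto3

def get_secret(secret_id: str, region: str = "us-east-1") -> dict:
    # AWS Secrets Manager is assumed here; the region is a placeholder.
    client = boto3.client("secretsmanager", region_name=region)
    response = client.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])

# Usage inside a CI job step (hypothetical secret name and key):
# zap_api_key = get_secret("dast/zap-api-key")["api_key"]
```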
---
Page: 8 / 8
---
Conclusion
DevOps is becoming a reality much faster than anticipated. However, security testing must not be compromised, or organizations risk delayed deployments and releasing software with security vulnerabilities. Successful DevSecOps requires integrating security at every stage of DevOps, educating DevOps teams on security practices, strengthening the partnership between DevOps teams and security SMEs, automating security testing to the extent possible, and shifting security left for early feedback. By leveraging the best practices recommended in this paper, organizations can achieve more secure releases and accelerate them by as much as 15%, both on-premises and in the cloud.
References
1. https://www.cobalt.io/blog/cybersecurity-statistics-2023
2. https://cybersecurityventures.com/boardroom-cybersecurity-report/
3. https://strongdm.com/blog/devsecops-statistics
About the authors
Kedar J Mankar
Kedar J Mankar is a global delivery lead for cyber security testing at Infosys. He has extensive experience across different types of software testing and has led large delivery and transformation programs for global Fortune 500 customers, delivering value through various centers of excellence with innovation at the core. He has experience building and leading teams in functional, data, automation, DevOps, performance, and security testing across multiple geographies and verticals.
Amlan Sahoo
Amlan Sahoo has over 27 years of IT industry experience in application development and testing and currently heads the Cyber Security testing division. He has a proven track record of managing and leading transformation programs with large teams for Fortune 50 clients, managing deliveries across multiple geographies and verticals. He has four IEEE and one IASTED publication to his credit on improving efficiency in heterogeneous software architectures.
Vamsi Kishore
Vamsi Kishore Sukla is a security consultant with over 8 years of professional experience in application security testing, cloud security testing, and network vulnerability assessments following OWASP standards and CIS benchmarks. With a deep understanding of the latest security trends and tools, he provides comprehensive security solutions to ensure the safety and integrity of the organization and its clients.
© 2023 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
***
# Infosys Whitepaper
Title: Data Archival Testing
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 4
---
WHITE PAPER DATA ARCHIVAL TESTING Abstract Today, there is an exponential rise in the amount of data being generated by organizations. This explosion of data increases IT infrastructure needs and has an immense impact on some important business decisions that are dependent on proficient data analytics. These challenges have made data archival extremely important from a data management perspective. Data archival testing is becoming increasingly important for businesses as it helps address these challenges, validate the accuracy and quality of archived data and improve the performance of related applications. The paper is aimed at helping readers better understand the space of data archival testing, its implementation and the associated benefits.<|endoftext|>
---
Page: 2 / 4
---
Introduction
One of the most important aspects of managing a business today is managing its data growth. For most organizations, the cost of managing data now outpaces the cost of storing it. Operational analytics and business intelligence reporting usually require active operational data. Data that has no current requirement or usage, known as inactive data, can be archived to safe and secure storage. Data archiving becomes important for companies that want to manage their data growth without compromising the quality of data that resides in their production systems. Many CIOs and CTOs are reworking their data retention policies and their data archival and retrieval strategies because of increased demand for data storage, reduced application performance, and the need to comply with ever-changing legislation and regulations.1
Data Archival Testing – Test Planning
Data archival is the process of moving data that is not required for operational, analytical, or reporting purposes to offline storage. A data retrieval mechanism is developed to restore data from the offline storage. The common challenges faced during data archival are:
• Inaccurate or irrelevant data in data archives
• Difficulty in retrieving data from the archives
Data archival testing helps address these challenges. While devising the data archival test plan, the following factors need to be taken into consideration:
Data Dependencies – There are many intricate data dependencies in an enterprise's architecture. The data that is archived should include complete business objects along with metadata, which helps retain the referential integrity of data across related tables and applications. Data archival testing needs to validate that all related data is archived together for easy interpretation during storage and retrieval (a brief dependency-check sketch appears at the end of this section).
Data Encoding – The encoding of data in the archival database depends on the underlying hardware for certain types of data. For example, data archival testing needs to ensure that the encoding of numerical fields such as integers also archives the related hardware information, for easier future retrieval and display of the data on a different set of hardware.
Data Retrieval – Data needs to be retrieved from archives for regulatory, legal, and business needs. Validating the data retrieval process ensures that archived data is easily accessed, retrieved, and displayed in a format that can be clearly interpreted without time-consuming manual intervention.
Data Archival Testing – Implementation
The data archival testing process includes validating the processes that encompass data archival, data deletion, and data retrieval. Figure 1 describes the different stages of a data archival testing process, the business drivers, the different types of data that can be archived, and the various offline storage modes.
Figure 1: The Data Archival Testing Process
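To make the data-dependency validation concrete, here is a minimal sketch assuming the archive is queryable through standard SQL. The tables archived_orders and archived_customers represent a hypothetical parent-child business object; all table and column names are illustrative, not taken from the paper.

```python
# Minimal referential-integrity check over archived data (illustrative names).
# Flags archived child rows whose parent rows were not archived with them.
import sqlite3  # stand-in for any SQL-accessible archive store

ORPHAN_CHECK = """
SELECT COUNT(*)
FROM archived_orders o
LEFT JOIN archived_customers c ON c.customer_id = o.customer_id
WHERE c.customer_id IS NULL;
"""

def orphaned_order_count(conn) -> int:
    return conn.execute(ORPHAN_CHECK).fetchone()[0]

if __name__ == "__main__":
    conn = sqlite3.connect("archive.db")  # placeholder archive connection
    orphans = orphaned_order_count(conn)
    assert orphans == 0, f"{orphans} archived orders lack their customer records"
```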
---
Page: 3 / 4
---
1. Test the Data Archival process
• Testing the data archival process ensures that the archived business entities include master data, transaction data, metadata, and reference data
• It validates the storage mechanism and that the archived data is stored in the correct format; the data also has to be tested for hardware independence
2. Test the Data Deletion process
• Inactive data needs to be archived and moved to secure storage for retrieval at a later point, and then deleted from all active applications using it. This validation verifies that the data deletion process has not caused errors in any existing applications or dashboards (a brief reconciliation sketch appears at the end of this section)
• When archived data is deleted from systems, verify that the applications and reports still conform to their performance requirements
3. Test the Data Retrieval process
• Data that has been archived needs to be easily identified and accessible in case of any legal or business need
• For scenarios that involve urgent data retrieval, the retrieval process needs to be validated to complete within a defined time period
Benefits of Data Archival Testing
The benefits of data archival testing are often interrelated and have a significant impact on the IT infrastructure costs of a business. Some of the benefits are:
• Reduced storage costs – Only relevant data gets archived, and only for a defined time period, which significantly reduces hardware and maintenance costs.
• Improved application performance – Data is retrieved faster and the network performs better because only relevant data is present in the production environment. All these factors enhance application performance.
• Minimized business outages – Archived data that is deleted from production systems does not impact the related applications' performance and functionality, leading to smooth business operations.
• Data compliance – Easy retrieval and availability of archived data ensures higher compliance with legal and regulatory requirements.
Accomplishing all these benefits determines the success of a data archival test strategy.
Conclusion
Due to critical business needs for data retention, regulatory and compliance requirements, and a cost-effective way to access archived data, many businesses have started realizing the value of and adopting data archival testing. Therefore, an organization's comprehensive test strategy needs to include a data archival test strategy that facilitates smooth business operations, ensures fulfillment of all data requirements, maintains data quality, and reduces infrastructure costs.
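One simple way to validate the deletion step is to reconcile row counts across production, the archive, and a pre-archival baseline. The sketch below assumes hypothetical production and archive connections and an orders table; every name here is illustrative rather than taken from the paper.

```python
# Reconciliation sketch for the data deletion step (illustrative names):
# rows remaining in production + rows moved to the archive should equal the
# baseline captured before the archival job ran.
def row_count(conn, table: str) -> int:
    return conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]

def reconcile(prod_conn, archive_conn, table: str, baseline_count: int) -> None:
    remaining = row_count(prod_conn, table)
    archived = row_count(archive_conn, f"archived_{table}")
    assert remaining + archived == baseline_count, (
        f"{table}: {remaining} remaining + {archived} archived "
        f"!= baseline {baseline_count}"
    )

# Usage (placeholder connections and counts):
# reconcile(prod_conn, archive_conn, "orders", 1_250_000)
```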
---
Page: 4 / 4
---
About the Author
Naju D. Mohan
Naju is a Group Project Manager at Infosys with about 15 years of IT experience. She currently manages specialized testing services such as SOA testing, data warehouse testing, and test data management for many leading clients in the retail sector.
REFERENCES
1. 'Data overload puts UK retail sector under pressure', Continuity Central, February 2009
2. 'Data Archiving, Purging and Retrieval Methods for Enterprises', Database Journal, January 2011
© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
***
# Infosys POV
Title: Data Imperatives in IT MA&D in Life Sciences Industry
Author: Infosys Consulting
Format: PDF 1.7
---
Page: 1 / 10
---
An Infosys Consulting Perspective Consulting@Infosys.com | InfosysConsultingInsights.com DATA IMPERATIVES IN IT MA&D IN LIFE SCIENCES INDUSTRY
---
Page: 2 / 10
---
FOREWORD
Larger macroeconomic headwinds (first the pandemic, then rising interest rates, and now recessionary fears) are pushing organizations to resort to mergers, acquisitions, and divestitures (MA&D) as a strategic lever to achieve higher market share, acquire new capabilities, and/or refocus strategy on the core business to improve financial performance. The average annual global MA&D value was approximately $3.6 trillion in the 2011-20 cycle and increased to $5.9 trillion in 2021, highlighting the growing importance of MA&D in meeting future business needs. The life sciences industry is increasingly looking at MA&D to acquire new specialty / generic drug lines (and related market pipelines) in pharmaceuticals, specialized capabilities in the diagnostics and digital health sector, niche research and development capabilities for effective drug discovery around specialty drugs, and patented IP data around experimental drugs.
There is growing emphasis on antitrust regulations, regulatory reporting, disclosure requirements, and overall deal approval processes. Compliance with these relates directly to the way entity data is managed, before and after the MA&D transaction. Multiple data types, including financial, operational, people, supplier, and customer data, come into remit. This requires organizations to carefully design and execute their data strategy. Multiple examples from the industry show that, despite the growing importance of data strategy in MA&D transactions, just 24% of organizations included CIOs in pre-merger planning4. Abbott Laboratories' acquisition of Alere was delayed due to regulatory concerns on market concentration1 and anti-competition2; Pfizer and Allergan terminated their planned merger due to the change in treasury rules that made tax benefits less attractive3; and there are many more.
Data strategy design and execution start with the definition of business metrics and alignment on a value measurement approach. After metrics are defined and accepted, linkage to source systems, standardization of data element definitions, and management of metadata along with master data ownership are key to accurately measuring and interpreting these metrics. Data qualification, especially in regulated industries, is critical to understanding and managing qualified (GxP) data and the related platforms and applications involved. Finally, a performance-oriented and scalable data integration methodology, followed by an overarching process and governance mechanism, is necessary for ensuring ongoing quality and compliance.
A poorly designed data strategy and execution often leads to an ambiguous understanding of key metrics and underlying data elements, incongruent data standards, and unclear ownership, resulting in faulty data integration, inaccurate transaction records, and ultimately unreliable insights and legal complications. An effective way to overcome these pitfalls is to define a robust data design and execution strategy covering key elements that address the distinctive needs of the life sciences industry.
---
Page: 3 / 10
---
Introduction
Mergers, acquisitions, and divestitures (MA&D) are strategic channels for growth. Multiple benefits can be achieved through an effective MA&D transaction, including exponential growth, entry to new markets, optimized cost savings, and improved competitiveness. The life sciences industry has been experiencing rapid growth and transformation in recent years, fueled by innovations in R&D, regulatory changes, and technological advancements in the provider and payer domains. MA&D transactions have become a vital strategic tool for organizations to expand their portfolios, access new markets, and improve their competitive positions. Given the complex nature of MA&D transactions, they require comprehensive due diligence and planning before, during, and after the transaction. Data, being the fundamental building block of any organization, is a critical factor in this due diligence and planning. It is also one of the most commonly overlooked factors. In this article, we highlight key elements of data strategy and design within a MA&D transaction and typical pitfalls, along with ways to overcome them.
MA&D transactions in the life sciences industry are increasingly subject to higher scrutiny from regulatory bodies to ensure greater transparency and better shareholder and consumer protection. Three key regulation types are in place:
1. Greater financial and operational transparency: A. India – Foreign Exchange Management Act (FEMA), SEBI laws; B. USA – Securities Act, Securities Exchange Act; C. Europe – European Union Merger Law
2. Better intellectual property protection: patents, trademarks, copyrights, trade secrets, designs, data protection
3. Higher fair play and consumer protection: A. India – Competition Act; B. USA – Federal Antitrust Laws; C. Europe – Competition Law
According to an analyst report, the average MA&D failure rate is ~70%4. A key reason for this high failure rate is the difficulty of integrating the two entities,5 especially with respect to culture, operational ways of working, revenue recognition, and performance incentives. All these aspects are directly impacted by the way data is designed and managed. Despite this importance, just 24% of organizations included CIOs in pre-merger planning3. Effective data management is key to adhering to these regulatory requirements and ensuring that data is properly collected, analyzed, and reported throughout the transaction process.
---
Page: 4 / 10
---
Key elements of data strategy and execution
MA&Ds are fundamentally complex transactions that impact the business entities, systems, processes, and data of the organizations involved. Eight elements underpin data strategy and execution.
Fig 1 – Key elements of data strategy within a MA&D transaction
1. Business metrics and measurement: Defining metrics to evaluate the performance of the target entity is critical. It is important that all entities involved in the transaction clearly define and agree upon the metrics that define success; noteworthy metrics in the life sciences industry include clinical trial outcomes, regulatory approval timelines, molecule discovery rates, drug pipeline progress, and GxP compliance metrics. These metrics articulate the objectives and key results of the target entity. Agreement on accurate metrics and their measurement logic improves operational and financial transparency, thereby promoting adoption of the integration / divestiture decision.
2. Data policies and standards: It is essential to establish a common set of data standards and policies to maintain data assets in the target environment. This involves defining standard data formats, structures, and rules for data management, and establishing governance policies to ensure the security, privacy, compliance, and protection of data. In a merger or a divestiture scenario, data policies for the resulting entities are driven by target business needs and operational requirements.
---
Page: 5 / 10
---
3. Metadata management: Metadata helps to classify, manage, and interpret master data. Managing metadata is essential in ensuring the standardization of data elements across systems, e.g., customer IDs, distribution channel codes, clinical trial identifiers, drug classification codes, etc. Effective metadata management promotes improved data consistency, better data quality, governance, compliance, and security. Like data policies and standards, metadata management standards are driven by target business needs and operational requirements.
4. Master data management: Data ownership is crucial in MA&Ds because it determines the accountabilities and responsibilities for maintaining the data assets. Defining master data ownership during the pre/post-close phases is critical in ensuring a smooth transition to integrated operations6. All parties involved must align on clear ownership for accessing, maintaining, and governing the master data assets after the transaction. Establishing data stewardship roles and processes to maintain master data is essential to avoid pitfalls such as delays in integration, legal disputes, and potential regulatory penalties. In addition, clear data ownership contributes to better intellectual property protection in a MA&D transaction. This ownership also means managing data at a product level with a promise of a required level of data quality, making it easier for users to extract valuable insights and intelligence.
5. Data lineage management: MA&D transactions create large data assets, which increasingly become interconnected, complex, and challenging to work with. Data lineage tracks the flow of data from source to destination, noting any changes in its journey across different systems. This allows for tracing data origins, evaluating data accuracy, and pinpointing potential risks, enabling risk management and thus elevating the probability of success of MA&D transactions.
6. Data qualification: A crucial element for consideration is the qualification of data into GxP and non-GxP. GxP data is subject to stringent regulations, while non-GxP data has fewer regulatory constraints. Proper data qualification enables organizations to manage GxP data in compliance with regulatory guidelines and handle non-GxP data as appropriate for its intended use. This helps in the adoption of efficient data management processes, especially from an extract, transform, and load perspective. It also emphasizes the relevance of the systems that will hold the regulatory
data, thus ensuring the required controls are in place when interacting with such systems.
---
Page: 6 / 10
---
7. Data integration: Effective integration of data across systems such as clinical trial databases, product development pipelines, and sales and marketing platforms into a single, unified environment is critical for the new entity to make effective decisions. Integration of data requires a consistent understanding of the data and the minimization of data redundancies. This helps the new entity gain a better and more accurate understanding of its business and operational data, thereby expediting the envisioned synergy realization. It also increases operational efficiency by streamlining internal processes and reducing duplication of effort, thereby improving the risk profile. Effective data integration is essential for achieving information protection and transparency in a MA&D transaction.
8. Data governance: Data governance is a crucial element for managing "data at rest" and "data in motion". Robust data governance establishes the policies, processes, and controls to manage data throughout its life cycle. An effective data governance framework ensures that both "data in motion" and "data at rest" are adequately protected while tracking data health in near real time, thereby fostering trust with regulators, customers, and partners.
Common pitfalls in a MA&D and ways to overcome them
Data design and execution to support an integration / divestiture transaction is often complicated and stressful. However, with the right interventions, organizations can navigate around these complications. An ineffective data strategy can have far-reaching consequences, such as reduced financial and operational transparency, compromised intellectual property protection, decreased fair play among the entities involved, and weakened consumer protection. We have identified six common pitfalls and their impact.
Fig 2 – Critical elements of pitfalls in a MA&D transaction
---
Page: 7 / 10
---
1. Data ownership: One of the most common pitfalls in MA&D transactions is limited clarity around the ownership of data assets in the target state. The issue is particularly pronounced when the organizations involved have multiple focus areas with data stored in a single system but without proper segregation and ownership. For example, an organization may have three focus areas such as BioSimilars, BioPharma, and Med Devices. Data on these focus areas may be stored in one system but not segregated by focus area. A MA&D in any one of these areas will then pose a significant challenge in terms of data segregation and dependency identification. Ambiguities regarding data asset ownership often lead to intellectual property disputes, faulty data integration, and challenges in extracting data specific to the new entity. To avoid such confusion, it is essential to establish data ownership early in the transaction and assign data stewards to manage data at rest as well as in motion.
2. Business metrics: Organizations involved in a MA&D transaction may prioritize select GxP and non-GxP metrics based on their distinctive strategic objectives and market priorities. For example, in a MA&D involving a generic and a specialty drug maker, the generic drug maker might emphasize GxP metrics such as manufacturing quality and regulatory submission timelines, as well as non-GxP metrics such as market share and cost efficiency. On the other hand, the specialty drug maker might focus on GxP metrics such as clinical trial data quality and patient safety, and non-GxP metrics such as R&D pipeline growth and innovative therapy development. Given these diverse priorities, establishing common performance criteria for the new entity might be a challenge. Moreover, a lack of uniformity in the underlying logic for measuring performance may further exacerbate the issue. Organizations must establish uniform metrics and underlying measurement criteria that reflect the strategic priorities of the target entity.
3. Data standards: Organizations also face roadblocks when they fail to establish common definitions for data elements. The resulting inconsistency in data standards increases the risk of inaccurate transaction records. Such inaccuracies can impair decision-making during critical stages of the MA&D and might even jeopardize the overall success of the transaction. Creating a unified data dictionary and standardizing data definitions across all entities involved is essential to mitigate such risks.
4. Data lineage: A common pitfall relates to the replication of source data elements across multiple source systems. Replication of data elements in multiple systems increases the complexity of managing data and leads to additional synchronization overheads. Establishing standardized data lineage practices, along with synchronized replication processes through automated tools, is key to increasing data congruency.
---
Page: 8 / 10
---
5. Data governance: Another common challenge encountered during MA&Ds arises from ineffective and inconsistent data governance processes. Inconsistent data governance processes decrease the accuracy of the inferences and insights that can be derived from datasets. A consistent data governance process ensures data protection and regulatory compliance.
6. Knowledge management: Heavy reliance on individuals makes knowledge retention vulnerable to personnel changes. To overcome this challenge, organizations must develop a knowledge management capability that is not solely dependent on people but facilitated through a set of processes and tools. A robust knowledge management capability enables effective and efficient use of data during the transaction.
A well-designed data strategy is complemented by an effective execution plan. By proactively identifying potential challenges and implementing mitigating solutions, organizations can effectively navigate the complexities and maximize value realization from a MA&D transaction. An effective data strategy and execution can safeguard the success of the transaction and ensure that the resulting entity or entities operate efficiently and effectively.
About the CIO advisory practice at Infosys Consulting
Over the next 5 years, CIOs will lead their organizations towards fundamentally new ways of doing business. The CIO Advisory practice at Infosys Consulting is helping organizations all over the world transform their operating models to succeed in the new normal – scaling up digitization and cloud transformation programs, optimizing costs, and accelerating value realization. Our solutions focus on the big-ticket value items on the C-suite agenda, providing a deep link between business and IT to help you lead with influence.
---
Page: 9 / 10
---
MEET THE AUTHORS
Inder Neel Dua, inder_dua@infosys.com – Inder is a Partner with Infosys Consulting and leads the life sciences practice in India. He has enabled large-scale programs in the areas of digital transformation, process re-engineering, and managed services.
Anurag Sehgal, anurag.sehgal@infosys.com – Anurag is an Associate Partner with Infosys Consulting and leads the CIO advisory practice in India. He has enabled large and medium-scale clients to deliver sustainable results from multiple IT transformation initiatives.
Ayan Saha, ayan.saha@infosys.com – Ayan is a Principal with the CIO advisory practice at Infosys Consulting. He has helped clients on business transformation initiatives focusing on IT M&A, including operating model transformation.
Manu A R, manu.ramaswamy@infosys.com – Manu is a Senior Consultant with the CIO advisory practice at Infosys Consulting. He has assisted clients on technology transformation initiatives in the areas of IT M&A and cloud transformation.
Sambit Choudhury, sambit.choudhury@infosys.com – Sambit is a Senior Consultant with the CIO advisory practice at Infosys Consulting. His primary focus areas include enterprise transformation with IT M&A as a lever. He has helped clients in the areas of IT due diligence, integration, and divestitures.
1. FTC Requires Abbott Laboratories to Divest Two Types of Point-Of-Care Medical Testing Devices as Condition of Acquiring Alere Inc.
2. EU clears Abbott acquisition of Alere subject to divestments | Reuters
3. Pfizer formally abandons $160bn Allergan deal after US tax inversion clampdown | Pharmaceuticals industry | The Guardian
4. Why, and when, CIOs deserve a seat at the M&A negotiating table | CIO
4. The New M&A Playbook - Article - Faculty & Research - Harvard Business School (hbs.edu)
5. Don't Make This Common M&A Mistake (hbr.org)
6. 6 ways to improve data management and interim operational reporting during an M&A transaction
---
Page: 10 / 10
---
consulting@Infosys.com InfosysConsultingInsights.com LinkedIn: /company/infosysconsulting Twitter: @infosysconsltng
About Infosys Consulting
Infosys Consulting is a global management consulting firm helping some of the world's most recognizable brands transform and innovate. Our consultants are industry experts who lead complex change agendas driven by disruptive technology. With offices in 20 countries and backed by the power of the global Infosys brand, our teams help the C-suite navigate today's digital landscape to win market share and create shareholder value for lasting competitive advantage. To see our ideas in action, or to join a new type of
consulting firm, visit us at www.InfosysConsultingInsights.com. For more information, contact consulting@infosys.com © 2023 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names, and other such intellectual property rights mentioned in this document. Except as expressly permitted, neither this document nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printed, photocopied, recorded or otherwise, without the prior permission of Infosys Limited and/or any named intellectual property rights holders under this document.
***
# Infosys Whitepaper
Title: Need for data masking in a data-centric world
Author: Infosys Limited
Format: PDF 1.6
---
Page: 1 / 4
---
VIEW POINT Abstract With data gaining increasing prominence as the foundation of organizational operations and business, ensuring data security is emerging as a main priority. It is critical to safeguard sensitive data and customer privacy, the lack of which can lead to financial and reputational losses. Thus, there is a rising demand to protect personally identifiable information during transfer within organizations as well as across the external ecosystem. This paper highlights the need for data masking solutions. It also explains how customized data masking solutions can be used in today’s data centric world.<|endoftext|>Paromita Shome, Senior Project Manager, Infosys Limited NEED FOR DATA MASKING IN A DATA-CENTRIC WORLD
---
Page: 2 / 4
---
Introduction
The key differentiator for today's businesses is how they leverage data. Thus, ensuring data security is of utmost importance, particularly for organizations that deal with sensitive data. However, this can be challenging because data marked critical and sensitive often needs to be accessed by different departments within an organization. Without a well-defined, enterprise-wide data access management strategy, securing data transfer can be difficult. Failure to properly control the handling of sensitive information can lead to dangerous data breaches with far-reaching negative effects. For instance, a 2017 report by the Ponemon Institute titled 'Cost of a Data Breach Study, 2017'1 found that:
• The average consolidated total cost of a data breach is US $3.62 million
• The average size of a data breach (number of records lost or stolen) increased by 1.8% in the past year
• The average cost of a data breach is US $141 per record
• Any incident – either in-house, through a third party, or a combination of both – can attract penalties of US $19.30 per record; thus, for a mere 100,000 records, the penalties alone can be as high as US $1.9 million
These statistics indicate that the consequences of data breaches go beyond financial losses. They also affect the organization's reputation, leading to loss of customer and stakeholder trust. Thus, it is imperative for organizations to adopt robust solutions that manage sensitive data to avert reputational damage and financial losses.
---
Page: 3 / 4
---
Data masking as a solution
Data masking refers to hiding data such that sensitive information is not revealed. It can be used for various testing or development activities. The most common use cases for data masking are:
• Ensuring compliance with stringent data regulations – Nowadays, many emerging protocols mandate strict security compliance, such as the Health Insurance Portability and Accountability Act (HIPAA) and the General Data Protection Regulation (GDPR). These norms do not allow organizations to transfer personal information such as personally identifiable information (PII), payment card information (PCI), and personal health information (PHI)
• Securely transferring data between project teams – With the increasing popularity of offshore models, project management teams are concerned about how data is shared for execution. For instance, sharing production data raises concerns about the risk of data being misused or mishandled during transition. Thus, project teams need to build an environment that closely mimics production environments and can be used for functionality validation. This requires hiding sensitive information when converting and executing production data
It is important to note that data sensitivity varies across regions. Organizations with global operations are often governed by different laws. Hence, the demand for data security and the potential impact of any breach differ based on the operating regions. Thus, having an overarching data privacy strategy is paramount to ensure that sensitive data remains protected. This calls for a joint data protection strategy that includes vendors in offshore and near-shore models as well.
Types of data masking
There are various masking models or algorithms that can be leveraged to address the above use cases. These ensure data integrity while adhering to masking demands. The most common types are:
• Substitution, or random replacement of data with substitute data
• Shuffling, or randomizing existing values vertically across a data set/column
• Data encryption, by replacing sensitive values with arithmetically formulated data and using an encryption key to view the data
• Deleting the input data for sensitive fields and replacing it with a null value to prevent visibility of the data element
• Replacing the input value with another value from a lookup table
While the above models enable straightforward masking, they cannot be applied to all cases, thus creating the need for customized data masking. Customized data masking uses an indirect masking technique where certain business rules must be adhered to along with encryption, as shown in Fig 1. In the figure, the source data – WBAPD11040WF70037 – is received from a source system such as an RDBMS or flat files. The business rules state that:
1. There should be no change in the first 10 characters post masking
2. The next 2 letters should be substituted with letters post masking
3. The last 5 numerals should be substituted with numerals only post masking
A minimal sketch of these rules follows the figure description.
Fig 1: A technical approach to customized data masking – the source data (WBAPD11040WF70037) from an RDBMS or flat-file input source is passed through a data splitter into the splits WBAPD11040, WF, and 70037; each split is masked by the TDM tool (PL/SQL), and the masked splits (WBAPD11040, AG, 81726) are concatenated to produce the masked output WBAPD11040AG81726, published to an RDBMS or flat-file output.
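To make the rule-driven approach concrete, here is a minimal sketch of the three business rules above. The random substitution merely stands in for the masking or encryption algorithm configured in a TDM tool; in practice a deterministic, key-based transformation would usually be used so the masking is repeatable.

```python
# Minimal sketch of the customized masking rules described above:
# keep the first 10 characters, substitute the next 2 letters with letters,
# and substitute the last 5 numerals with numerals.
import random
import string

def mask_identifier(source: str) -> str:
    assert len(source) == 17, "expected a 17-character identifier"
    fixed, letters, digits = source[:10], source[10:12], source[12:]   # data splitter
    masked_letters = "".join(random.choice(string.ascii_uppercase) for _ in letters)
    masked_digits = "".join(random.choice(string.digits) for _ in digits)
    return fixed + masked_letters + masked_digits                      # concatenation

print(mask_identifier("WBAPD11040WF70037"))  # e.g. WBAPD11040AG81726
```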
---
Page: 4 / 4
---
As part of customized masking, the source data is passed through a data splitter. The single source data is then split into individual source data elements based on the business rules, as shown in Fig 1. After this, the individually split data is run through the data masking tool, and the masking algorithm defined in the tool is executed for each item, yielding masked data as output. In each section, the masking type is selected based on the business rule, and then the encryption is applied. The three individual sections of masked data are finally concatenated before being published at the output, which can be a database or a flat file. The masked data for the reference source data now reads WBAPD11040AG81726. This output data still holds the validity of the source input data, but it is substituted with values that do not exist, in line with the business rules. Hence, it can be used in any non-production environment.
Customized masking can be used in various other scenarios, such as:
• To randomly generate numbers that still satisfy Luhn's algorithm, so that the masked data remains valid wherever Luhn checks are applied (a brief sketch follows this list)
• To apply number variance in a range between 'x' and 'y', where the input values are replaced with a random value between the border values and the decimal points are changed
• To apply number variance of around +/- x%, where a random percentage value between defined borders is added to the input value
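The Luhn scenario can be illustrated with a short sketch: randomize the middle digits of a card-like number and recompute the check digit so the masked value still passes a Luhn validation. Keeping the first six (issuer) digits is a design choice for the example, not a rule from this paper.

```python
# Illustrative Luhn-preserving masking: randomize the middle digits of a
# 16-digit number, keep the first 6 digits, and recompute the final check
# digit so the masked number still satisfies Luhn's algorithm.
import random

def luhn_check_digit(partial: str) -> str:
    digits = [int(d) for d in partial][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 0:          # these positions are doubled once a check digit is appended
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

def mask_card_number(pan: str) -> str:
    masked_middle = "".join(random.choice("0123456789") for _ in pan[6:-1])
    partial = pan[:6] + masked_middle
    return partial + luhn_check_digit(partial)

print(mask_card_number("4111111111111111"))  # masked value, still Luhn-valid
```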
Conclusion
As the demand for safeguarding sensitive data increases, organizations need effective solutions that support data masking capabilities. Two key areas where data masking is of prime importance are ensuring compliance with data regulations and protecting data while it is transferred to different environments during testing. While there are several readily available tools for data masking, some datasets require specialized solutions. Customized data masking tools can help organizations hide source data using encryption and business rules, allowing safe transfer while adhering to various global regulatory norms. This not only saves manual effort during testing but also averts huge losses from financial penalties and reputational damage arising from data breaches.
References
1. https://securityintelligence.com
© 2019 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
***
# Infosys Whitepaper
Title: Infosys Test Automation Accelerator
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 4
---
WHITE PAPER HOW TO ENSURE DATA QUALITY DURING DATA MIGRATION Naju D Mohan, Delivery Manager
---
Page: 2 / 4
---
Introduction
In today's business world, change is the only constant, and changes make their appearance in various forms, some of which are:
• Mergers and acquisitions
• New compliance requirements
• New package implementations
• Migration to new technologies such as the cloud
• Big data programs
Being data driven, the business has to upgrade and keep its intelligence up to date to realize the benefits of these changes. In short, all these changes result in data migrations.
Who is the Primary Owner?
Most of the time, it is assumed that data migration is an IT problem2. All the visible changes and the actions lie with the IT team, so the business moves on, putting the entire burden of data migration management on the IT team. Mergers and acquisitions and compliance requirements clearly have their origin with the business team. So does the decision to implement a CRM, loyalty, or HR package, which begins in the business department. The need to optimize operating costs and to make intelligent decisions and act in real time leads the business to migrate to the cloud and embark on big data programs. But the onus of migration management often lies with the IT team. It must be clearly understood that any data migration without the business leading the program has a high rate of failure. Business has to not just care about data migration but command it.
Why Such a High Failure Rate for Data Migration Programs?
According to Gartner, 83% of data migration programs fail to meet expectations, running over time and budget1. Some key reasons for this are:
1. Poor Understanding of Data Migration Complexity
• The focus on data migration is lost in the excitement of the new package implementation, migration to the cloud, or big data initiative
• Most often, it is assumed that data fits one-to-one into the new system
• The whole attention is on the implementation of the new business processes, with little or almost no focus on data migration
2. Lack of Proper Attention to Data
• Lack of data governance and proper tools for data migration can impact the quality of data loaded into the new system
• Mergers and acquisitions can introduce new data sources and diverse data formats
• Huge volumes of data may force teams to overlook whether the data is still relevant for the business
3. Late Identification of Risks
• Poor data quality in the source systems and a lack of documentation or inaccurate data models are identified late in the migration cycle
• Lack of clarity on the job flows and the data integrity relationships across source systems causes data load failures
---
Page: 3 / 4
---
Is There a Right Validation Strategy for Data Migration?
An innovative data migration test strategy is critical to the success of the change initiatives undertaken by the business. The test strategy should be prepared in close collaboration with the business team, as they are the vital stakeholder who initiated the change resulting in data migration. The two principal components that should be considered as part of the test strategy are:
1. Risk-Based Testing – The data volumes involved in data migration projects emphasize the need for risk-based testing to provide optimum test coverage with the least risk of failure. A master test strategy can be created through proactive analysis with the business and third parties. Tables can be prioritized and bucketed based on the business criticality and sensitivity of data. A composite key agreed with the business can be used to select sample rows for validation in tables with billions of rows.
2. Data Compliance Testing – It is very important that the quality assurance (QA) team is aware of the business requirements that necessitated the data migration, because the change may have been made to meet new government regulations or compliance requirements. The test strategy must have a separate section to validate that the data meets all compliance regulations and standards, such as Basel II, Sarbanes-Oxley (SOX), etc.
A Test Approach Giving Proper Attention to Data
Data migration, as mentioned earlier, is often a by-product of a major initiative undertaken by the company. So, in a majority of scenarios, there is an existing application performing the same functionality. It is suitable to adopt a parallel testing approach, which saves the effort spent understanding the system functionality. The testing can be done in parallel with development in sprints, following an agile approach to avoid the risk of failure at the last moment.
1. Metadata Validation – Data migration testing treats information that describes the location of each source, such as the database name, filename, table name, field or column name, and the characteristics of each column, such as its length and type, as metadata. Metadata validation must be done before the actual data content is validated, which helps in the early identification of defects that could be repeated across several rows of data.
2. Data Reconciliation – Use automated data comparison techniques and tools for column-to-column data comparison (a brief comparison sketch appears at the end of this section). There could be duplicate data in legacy systems, and it has to be validated that this is merged and exists as a single entity in the migrated system. Sometimes the destination data stores do not support the data types of the source, and hence the storage of data in such columns has to be validated for truncation and precision. There could be new fields in the destination data store, and it has to be validated that these fields are filled with values as per the business rule for the entity.
Benefits
A well thought-out data migration validation strategy helps make the data migration highly predictable and paves the way for a first-time-right release. Regular business involvement helps to keep the testing focus on critical business requirements. A successful implementation of the shift-left approach in the migration test strategy helps identify defects early and saves cost.
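As an illustration of automated column-to-column comparison, the sketch below reconciles row counts and simple per-column aggregates between a source and a target table. The connections, table, and column names are placeholders; real migrations would typically rely on a dedicated comparison tool or framework.

```python
# Reconciliation sketch: compare row counts and per-column aggregates between
# the legacy (source) and migrated (target) tables. Names and connections are
# placeholders for illustration only.
import pandas as pd

def fetch_profile(conn, table: str, numeric_cols: list[str]) -> pd.Series:
    agg = ", ".join(f"SUM({c}) AS sum_{c}" for c in numeric_cols)
    query = f"SELECT COUNT(*) AS row_count, {agg} FROM {table}"
    return pd.read_sql(query, conn).iloc[0]

def reconcile(source_conn, target_conn, table: str, numeric_cols: list[str]) -> None:
    src = fetch_profile(source_conn, table, numeric_cols)
    tgt = fetch_profile(target_conn, table, numeric_cols)
    mismatches = src.compare(tgt)          # empty when the profiles match
    assert mismatches.empty, f"{table} reconciliation failed:\n{mismatches}"

# Usage (hypothetical connections and columns):
# reconcile(source_conn, target_conn, "sales_fact", ["quantity", "net_amount"])
```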
---
Page: 4 / 4
---
Case Study: Re-Platforming an Existing HP Neoview Data Warehouse to Teradata
The Client
One of the largest supermarket chains in the United Kingdom, which offers online shopping, DVD rentals, financial services, and multiple store locations.
The Objectives
• To complete the re-platform of HP Neoview to Teradata and re-platform associated services before HP discontinued support for Neoview
• To migrate the existing IT business services currently operating against a Neoview data warehouse onto a Teradata warehouse with minimal disruption
• To improve the performance of the current Ab Initio ETL batch processes and of reporting services using MicroStrategy, SAS, Pentaho, and Touchpoint
The QA Solution
The validation strategy was devised to ensure that the project delivered a like-for-like 'lift-and-shift'. The project had environmental challenges and dependencies throughout the entire execution cycle. The SIT phase overcame these challenges by devising strategies that departed from the traditional testing approach in terms of flexibility and agility. The testing team maintained close collaboration with the development and infrastructure teams while retaining its independent reporting structure. The approach was to maximize defect capture within the constraints placed on test execution. The plan was to test individual tracks independently in a static environment and then run an end-to-end SIT where all the applications / tracks are integrated. Testing always prioritized migrating key business functions such as sales transaction management, merchandise and range planning, demand management, inventory management, price and promotion management, etc.
The Benefits
• 15% reduction in effort through automation using in-house tools
• 100% satisfaction in test output through flexibility and transparency in every testing activity, achieved through statistical models to define the acceptance baseline
End Notes
1. Gartner, "Risks and Challenges in Data Migrations and Conversions," February 2009
2. https://www.hds.com/go/cost-efficiency/pdf/white-paper-reducing-costs-and-risks-for-data-migrations.pdf
© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document.
Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected
***
# Infosys Whitepaper
Title: End-to-end test automation – A behavior-driven and tool-agnostic approach
Author: Infosys Limited
Format: PDF 1.7
---
Page: 1 / 8
---
PERSPECTIVE END-TO-END TEST AUTOMATION – A BEHAVIOR-DRIVEN AND TOOL-AGNOSTIC APPROACH Anand Avinash Tambey, Product Technical Architect, Infosys
Abstract
In today's fast-changing world, IT is under constant pressure to deliver new applications faster and cheaper. The expectation from the quality assurance (QA) organization is to ensure that all applications are tuned to meet every rising user expectation across devices and locations, and typically at no additional cost. And with exponential growth in the diversity and number of end users across almost all sectors, requirements are constantly fluctuating and increasingly demanding.
---
Page: 2 / 8
---
Let us discuss these approaches in detail to face the above challenges. How to face these challenges Challenges l Technical complexity l Infrastructure, licensing, and training costs l Late involvement of users l Late involvement of testers Approaches l De-skilling l Using open source stack l Using behavior-driven techniques l Utilizing the testers and users effectively This fast-paced software engineering advancement is also posing challenges to software engineers to build an ecosystem that enables rapid prototyping and design, agile development and testing, and fully automated deployment. For the QA community, this translates to a need to maximize automation across all stages of the software engineering and development life cycle and more importantly, do it in an integrated fashion. Consequently, `extreme automation’ is the new mantra for success. And there is more. According to the 2014 State of DevOps report, high-performing organizations are still deploying code 30 times more frequently, with 50 percent fewer failures than their lower-performing counterparts. High IT performance leads to strong business performance, helping boost productivity, profitability, and market share. The report counts automated testing as one of the top practices correlated with reducing the lead time for changes.<|endoftext|>However, automated testing puts together another set of challenges. The latest technologies stack advocates multiple choices of test automation tools, platform- specific add-ins, and scripting languages. There is no inherent support available for generic, tool-agnostic, and scriptless approach with easy migration from one tool to another. Therefore, a significant investment in training, building expertise, and script development is required to utilize these tools effectively. The cost and associated challenges inadvertently affect the time-to-market, profitability, and productivity although it also creates an opportunity to resolve the issues using a combination of an innovative tool-agnostic approach and latest industry practices such as behavior-driven development (BDD) and behavioral-driven test (BDT).<|endoftext|>Business challenges A persistent need of businesses is to reduce the time between development and deployment. QA needs to evolve and transform to facilitate this. And this transformation requires a paradigm shift from conventional QA in terms of automation achieved in each life cycle stage and across multiple layers of architecture. Technical complexity The technology and platform stack is not limited to traditional desktop and the web for current application portfolios. It extends to multiple OS (platforms), mobile devices, and the newest responsive web applications.<|endoftext|>Infrastructure, licensing, and training costs To test diverse applications, multiple test automation tools need to be procured (license cost), testing environment needs to be set up (infrastructure), and the technical skills of the team need to be brought to speed with training and self- learning / experimentation (efforts). Late involvement of users The end user is not involved in the development process until acceptance testing and is totally unaware of whether the implemented system meets her requirements. There is no direct traceability between the requirements and implemented system features. 
Late involvement of testers
Testing and automation also need to start much earlier in the life cycle (Shift-Left), with agility achieved through the amalgamation of the technical and domain skills of the team as well as the end user.

How to face these challenges

Challenges
• Technical complexity
• Infrastructure, licensing, and training costs
• Late involvement of users
• Late involvement of testers

Approaches
• De-skilling
• Using an open-source stack
• Using behavior-driven techniques
• Utilizing testers and users effectively

Let us discuss these approaches in detail.
---
Page: 3 / 8
---
Using an open-source stack
To reduce the cost of commercial tool licenses and infrastructure, utilize open-source tools and platforms.

De-skilling
Easy modeling of requirements and system behaviors, an accelerated framework, and automated script generators reduce the learning curve and the dependency on expert technical skills.

Using behavior-driven techniques
Behavior-driven development and testing (BDD and BDT) is a way to reduce the gap between the end user and the actual software being built. Also called `specification by example', it uses natural language to describe the `desired behavior' of the system in a common notation that can be understood by domain experts, developers, testers, and the client alike, improving communication (see the sketch at the end of this section). It is a refinement of practices such as test-driven development (TDD) and acceptance test-driven development (ATDD).

The idea behind this approach is to describe the behaviors of the system being built and tested. The main advantage is that the tests verifying those behaviors reflect the actual business requirements / user stories and generate live documentation of the requirements, that is, successful stories and features appear as test results. The test results can therefore be read, understood, and validated by a non-technical person such as a project sponsor, a domain expert, or a business analyst.

Utilizing testers and users effectively
Our accelerated automation approach provides a simple modeling interface for a scriptless experience and thereby utilizes non-technical staff effectively. It introduces the `outside-in' software development methodology along with BDT, which has changed the tester's role dramatically in recent years and bridges the communication gap between business and technology. It focuses on implementing and verifying only those behaviors that contribute most directly to business outcomes.

Solution approach
Our solution approach is threefold, resolving the challenges in a holistic way. It applies a behavior-driven testing approach with a tool-agnostic automation framework, while following an integrated test life cycle vision. It ensures that business users and analysts are involved.

It provides the flexibility of using any chosen tool and helps save cost and effort. It also provides a simple migration path to switch between tools and platforms if the need arises. With an integrated test life cycle approach, it ensures seamless communication between multiple stakeholders and leverages industry-standard tools.
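To make `specification by example' concrete, here is a minimal sketch using Cucumber-JVM, one possible implementation of the BDD/BDT techniques described above (our accelerator itself is tool-agnostic, so this is illustrative only); the feature wording, package, class, and method names are hypothetical:

```java
// Hypothetical feature file (src/test/resources/features/cash_withdrawal.feature):
//
//   Feature: Account holder withdraws cash
//     Scenario: Successful withdrawal within the available balance
//       Given the account balance is 200
//       When the account holder withdraws 80
//       Then the remaining balance should be 120
//
// The step definitions below bind that plain-language specification to executable checks.
package steps;

import static org.junit.Assert.assertEquals;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

public class CashWithdrawalSteps {

    private int balance;

    @Given("the account balance is {int}")
    public void theAccountBalanceIs(int amount) {
        balance = amount;                 // arrange the business pre-condition
    }

    @When("the account holder withdraws {int}")
    public void theAccountHolderWithdraws(int amount) {
        balance -= amount;                // a real suite would call the system under test here
    }

    @Then("the remaining balance should be {int}")
    public void theRemainingBalanceShouldBe(int expected) {
        assertEquals(expected, balance);  // the acceptance criterion the business user signed off
    }
}
```

The Gherkin text in the leading comment is what the business user reads, signs off, and later sees again in the feature reports; the Java methods are the executable binding that the tester maintains.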
---
Page: 4 / 8
---
Finally, it introduces automation in the early stages to realize the benefit of Shift-Left.

Behavior-driven test (BDT)
To facilitate real-time traceability of user stories / requirements and to aid live documentation, we have implemented a BDT approach across multiple Infosys projects, as described below. This approach acts as a single point of continuous interaction between the tester and the business users.

• The testers divide the user story into various scenarios in a feature file.
• These scenarios are written in the Gherkin language and include the business situation, the pre-conditions, the data to be used, and the acceptance criteria.
• The end user signs off the features / scenarios. This gives the user control to execute and validate the scenarios with the data of their choosing and to bring up feature reports / dashboards.
• The underlying technical implementation is abstracted from the business user.
• The tester creates the underlying test scripts for the scenarios, which could be business-layer test scripts, service test scripts, or UI automated test scripts.
• The tool then converts the scenarios into `step definitions', which act as the binder between the test scripts and the scenarios. This ensures that a single point / interface is used to execute any type of test. A sketch of such a step definition follows the figure below.

{{ img-description: BDT workflow – the tester turns user stories into feature files and step definitions for the UI testing framework (BDT), giving the end user zero-distance visibility of feature success / failure in the user's own language and enabling Shift-Left test automation from day one; the accompanying Cucumber-JVM Jenkins report plugin screenshot shows a feature overview for a build, with roughly 89 percent of steps and 92 percent of scenarios passed for the sample `Account holder withdraws cash' features }}
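As a hedged illustration of the last bullet above, the sketch below shows a step definition acting as the binder between a signed-off scenario and a UI-layer script driven by Selenium WebDriver; the application URL, element locators, and class names are assumptions, and a service-layer or business-layer implementation could sit behind the same step without the business user noticing:

```java
package steps;

import io.cucumber.java.After;
import io.cucumber.java.en.When;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class WithdrawalUiSteps {

    // Could equally be a service-layer client; the scenario wording does not change.
    private final WebDriver driver = new ChromeDriver();

    @When("the account holder withdraws {int} through the web UI")
    public void withdrawThroughUi(int amount) {
        driver.get("https://bank.example.com/withdraw");                      // hypothetical AUT URL
        driver.findElement(By.id("amount")).sendKeys(String.valueOf(amount)); // hypothetical locator
        driver.findElement(By.id("submit")).click();                          // hidden from the business user
    }

    @After
    public void tearDown() {
        driver.quit();  // release the browser after each scenario
    }
}
```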
---
Page: 5 / 8
---
Tool- / platform-agnostic solution (BDD release)

Tool-agnostic approach
To remove dependencies on technical skills, tools, and platforms, our solution proposes modeling system behaviors in a generic, English-like language via an intuitive user interface. This model is agnostic to any specific tool or UI platform and is reusable.

The model is translated into a widely acknowledged format, XML, which acts as the input for generating automation scripts for specific tools and platforms. Finally, it integrates with a continuous integration platform to achieve end-to-end automation of the build-test-deploy cycle. A sketch of this hand-off appears after the figure below.

{{ img-description: BDD modeling UI for a scriptless, tool- / platform-agnostic implementation – the business analyst / end user and tester model user stories, features, fields, and step definitions; a script generator produces automation scripts for QTP, Selenium, SAHI, or Protractor, which execute against the AUT within a continuous integration build-test-deploy cycle }}
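The XML hand-off between the modeling UI and the script generator might look like the sketch below, assuming a hypothetical <behavior>/<step> schema and Selenium as the generation target; a production generator would emit scripts for whichever tool the model selects (QTP, Selenium, SAHI, or Protractor):

```java
// Reads a hypothetical behavior model such as:
//   <behavior name="Login">
//     <step action="open"  target="https://app.example.com/login"/>
//     <step action="type"  target="username" value="jdoe"/>
//     <step action="click" target="submit"/>
//   </behavior>
// and emits equivalent Selenium WebDriver statements as plain text.
import java.nio.file.Files;
import java.nio.file.Path;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class SeleniumScriptGenerator {

    public static void main(String[] args) throws Exception {
        Document model = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(Path.of("behavior-model.xml").toFile());

        StringBuilder script = new StringBuilder("WebDriver driver = new ChromeDriver();\n");
        NodeList steps = model.getElementsByTagName("step");
        for (int i = 0; i < steps.getLength(); i++) {
            Element step = (Element) steps.item(i);
            String action = step.getAttribute("action");
            String target = step.getAttribute("target");
            switch (action) {
                case "open"  -> script.append("driver.get(\"").append(target).append("\");\n");
                case "type"  -> script.append("driver.findElement(By.id(\"").append(target)
                                      .append("\")).sendKeys(\"").append(step.getAttribute("value"))
                                      .append("\");\n");
                case "click" -> script.append("driver.findElement(By.id(\"").append(target)
                                      .append("\")).click();\n");
                default      -> throw new IllegalArgumentException("Unsupported action: " + action);
            }
        }

        // A production generator would emit a complete, compilable test class for the chosen tool.
        Files.writeString(Path.of("GeneratedScript.txt"), script);
    }
}
```

Because the model is plain XML, swapping the Selenium-oriented templates for QTP, SAHI, or Protractor equivalents changes only the generator, not the model, which is what keeps the migration path between tools simple.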
---
Page: 6 / 8
---
Integrated test life cycle
The role of the traditional tester and end user is changing in the era of DevOps and Shift-Left. The integrated-solution approach enables a larger stakeholder base to contribute towards the quality of the system under development. It also ensures stakeholder satisfaction through early validation and early feedback on system progress, while using industry-standard toolsets seamlessly (a sketch of the corresponding CI runner follows the benefits summary below).

Benefits

Reduced time-to-market
• Shift-Left, early automation, and early life cycle validation
• Single-click generation and execution of automated scripts

Reduced cost
• 40–60 percent reduction in effort for automated test case generation over manual testing
• Detailed error reporting considerably reduces defect reporting effort
• Easy maintenance of requirements, stories, features, and the automated test suite
• No additional cost involved in building integration components for test management tools (HP ALM)
• The agnostic approach works for a broad range of applications irrespective of tool, technology, and platform

Improved quality
• Enhanced business user participation and satisfaction due to the live documentation of features and user stories, available at their fingertips
• Developer, tester, and client collaboration enabled by a common language
• High defect detection rates (95–99 percent) due to high test coverage

{{ img-description: Test life cycle management view – across requirement analysis, test design, test execution, and test reporting, the BA designs business processes and generates requirements and features in a BDD & UML modeling tool; the tester and automation expert enhance automation libraries, generate automation scripts, and execute them through the tool-agnostic, platform-agnostic automation accelerator; the QA manager / end user receives feature and metrics reports; the accelerator integrates with popular tools such as HP ALM/QC, IBM RQM, JIRA, and Jenkins }}
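As one way to picture the `single-click generation and execution' benefit, the sketch below shows a JUnit runner that a Jenkins job could trigger to execute all signed-off features and publish the JSON/HTML output consumed by the Cucumber reporting plugin; the paths and package names are assumptions:

```java
package runners;

import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;
import org.junit.runner.RunWith;

// A CI job (for example `mvn test` triggered by Jenkins) runs this class, so every build
// re-executes the signed-off features and refreshes the live documentation dashboard.
@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/features",   // feature files written with the business user
        glue = "steps",                              // package containing the step definitions
        plugin = {
                "json:target/cucumber.json",         // consumed by the Jenkins Cucumber Reports plugin
                "html:target/cucumber-report.html"   // human-readable feature dashboard
        }
)
public class RegressionSuiteRunner {
}
```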
---
Page: 7 / 8
---
Conclusion
Our solution approach is the first step towards reducing the complexity of test automation and making it more useful for the end user by providing early and continuous feedback on the incremental system development. Moreover, it advances automation at every level to achieve rapid development and faster time-to-market objectives.

With the advent of multiple technologies and high-end devices knocking at the door, using a tool- and platform-agnostic approach will increase overall productivity while reducing the cost of ownership.

References
• https://puppetlabs.com/sites/default/files/2014-state-of-devops-report.pdf
• http://www.ibm.com/developerworks/library/a-automating-ria/
• http://guide.agilealliance.org/guide/bdd.html
• http://www.infosysblogs.com/testing-services/2015/08/extreme_automation_the_need_fo.html
---
Page: 8 / 8
---
© 2018 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document. For more information, contact askus@infosys.com Infosys.com | NYSE: INFY Stay Connected