Friday, July 31, 2015



1. Requirements Gathering / Analysis
  • The performance team interacts with the client to identify and gather requirements – technical and business.
  • This includes getting information on the application's architecture, the technologies and database used, the intended users, functionality, application usage, test requirements, hardware & software requirements, etc.

2. POC/Tool Selection
  • Once the key functionality is identified, a POC (proof of concept – a limited demonstration of the real-time activity) is done with the available tools.
  • The choice of performance tool depends on the cost of the tool, the protocol the application uses, the technologies used to build the application, the number of users to be simulated in the test, etc. During the POC, scripts are created for the identified key functionality and executed with 10-15 virtual users.

3. Performance Test Plan & Design:
  • Based on the information collected in the preceding stages, test planning and design are carried out.
  • Test planning covers how the performance test will take place: the test environment for the application, the workload, hardware, etc.
  • Test design covers the type of test to be conducted, the metrics to be measured, metadata, scripts, the number of users and the execution plan.
  • During this activity, a Performance Test Plan is created. It serves as an agreement before moving ahead and as a road map for the entire activity. Once created, this document is shared with the client to establish transparency on the type of the application, test objectives, prerequisites, deliverables, entry and exit criteria, acceptance criteria, etc.

Briefly, a performance test plan includes:

a) Introduction (Objective and Scope)
b) Application Overview
c) Performance (Objectives & Goals)
d) Test Approach (User Distribution, Test data requirements, Workload criteria, Entry & Exit criteria, Deliverable, etc.)
e) In-Scope and Out-of-Scope
f) Test Environment (Configuration, Tool, Hardware, Server Monitoring, Database, test configuration, etc.)
g) Reporting & Communication
h) Test Metrics
i) Roles & Responsibilities
j) Risk & Mitigation
k) Configuration Management


4. Performance Test Development:

  • Use cases are created for the functionality identified in the test plan as the scope of PT.
  • These use cases are shared with the client for approval, to make sure the scripts will be recorded with the correct steps.
  • Once approved, script development starts: the steps in the use cases are recorded with the performance test tool selected during the POC, and the scripts are enhanced through correlation (handling dynamic values), parameterization (value substitution) and custom functions as the situation requires. More on these techniques in our video tutorials.
  • The scripts are then validated with different users.
  • In parallel with script creation, the performance team also works on setting up the test environment (software and hardware).

5. Performance Test Modeling
  • A Performance Load Model is created for the test execution.
  • The main aim of this step is to validate whether the performance metrics provided by the client are achieved during the test.
  • There are different approaches to creating a load model; Little's Law is used in most cases.
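As a minimal sketch of how Little's Law sizes a load model: the number of concurrent VUsers N equals the target throughput X times the time each user spends per iteration (response time R plus think time Z). The throughput, response-time and think-time numbers below are illustrative assumptions, not values from any real requirement.

```python
# Hedged sketch: sizing a load model with Little's Law, N = X * (R + Z).
# All numbers here are illustrative assumptions.

def concurrent_users(throughput_tps, response_time_s, think_time_s):
    """Little's Law: users in the system = arrival rate * time each spends in it."""
    return throughput_tps * (response_time_s + think_time_s)

# Assumed target: 10 transactions/second, 2 s average response, 28 s think time.
users = concurrent_users(10, 2, 28)
print(users)  # 300.0 VUsers needed to sustain the target throughput
```

If the client's metrics specify throughput and response-time goals, a calculation like this gives the VUser count the scenario must reach.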


6. Test Execution
  • The scenario is designed according to the Load Model in the Controller or Performance Center, but the initial tests are not executed with the maximum number of users in the Load Model.
  • Test execution is done incrementally. For example, if the maximum number of users is 100, the scenario is first run with 10, 25, 50 users and so on, eventually moving up to 100 users.
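The incremental execution described above can be sketched as a simple step schedule; the 10/25/50/100 split mirrors the example, and the fractions are an assumption rather than a fixed rule.

```python
# Sketch: generate stepped user levels up to the load-model maximum.
# The fractions are illustrative; teams pick steps that suit the test.

def ramp_steps(max_users, fractions=(0.10, 0.25, 0.50, 1.00)):
    """Return the user counts for each incremental test run."""
    return [round(max_users * f) for f in fractions]

print(ramp_steps(100))  # [10, 25, 50, 100]
```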


7. Test Results Analysis
  • Test results are the most important deliverable for the performance tester. This is where we can prove the ROI (Return on Investment) and productivity that a performance testing effort can provide.

Load Testing Entry and Exit Criteria

Entry Criteria

It is important to define and validate the load testing entry criteria before testing starts. The following points need to be considered:
  • Regression testing should be completed and the environment should be stable without any defects
  • The Test Strategy & Plan should be reviewed and signed off by the architects and clients
  • The database set-up should be completed
  • The testing and monitoring tools required should be installed before the script development and test execution phases
  • The testing team should have access to the testing tools and server machines
  • All project teams, such as the development team, environment team etc., should be aware of the performance testing cycles so that the servers are kept in quiet mode during testing
  • All user processes that consume significant memory and CPU should be turned off on all load generator machines


Exit Criteria

Before testing is declared successfully complete, the following points should be verified:
  • All the performance test cases defined by the non-functional requirements document should be validated
  • All the test scenarios defined in the test strategy document should have been executed successfully
  • The test scenarios should simulate real-business mix of transactions
  • Response time should be captured for all transactions from each test run
  • The test results should include all necessary graphs for system resources and application metrics specified in the strategy document
  • All the test types defined in the test strategy document should have been executed successfully
  • A transaction failure rate of 2-5% is acceptable at peak load
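The peak-load failure-rate criterion above can be checked mechanically. The 5% ceiling follows the range given, while the transaction counts below are assumptions for illustration.

```python
# Sketch: evaluating the peak-load exit criterion.
# Counts are illustrative; the 5% ceiling is the upper bound cited above.

def failure_rate(failed, total):
    return failed / total

def meets_exit_criteria(failed, total, max_rate=0.05):
    return failure_rate(failed, total) <= max_rate

print(meets_exit_criteria(180, 10_000))  # True: 1.8% failures at peak load
print(meets_exit_criteria(700, 10_000))  # False: 7% exceeds the ceiling
```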

Introduction to Load Runner

LoadRunner is an industry-leading performance and load testing product from Hewlett-Packard (which acquired it along with Mercury Interactive in November 2006) for examining system behavior and performance while generating actual load.

LoadRunner supports various development tools, technologies and communication protocols. In fact, it supports one of the widest ranges of protocols of any performance testing tool on the market.

Broadly, LoadRunner supports RIA (Rich Internet Applications), Web 2.0 (HTTP/HTML, Ajax, Flex, Silverlight, etc.), Mobile, SAP, Oracle, MS SQL Server, Citrix, RTE, Mail and, above all, Windows Sockets. Few competing tools offer such a wide variety of protocols in a single product.

What makes LoadRunner even more compelling is its credibility. LoadRunner has a long-established reputation, and you will often find clients cross-verifying performance benchmarks using it, which works in your favor if you are already using LoadRunner for your performance testing needs.
LoadRunner is tightly integrated with other HP tools such as Unified Functional Testing (QTP) and ALM (Application Lifecycle Management), which empowers you to perform end-to-end testing processes.
LoadRunner works on the principle of simulating Virtual Users on the subject application. These Virtual Users, also termed VUsers, replicate a client's requests and expect a corresponding response to pass a transaction.


Why do you need Performance Testing?
An estimated $4.4 billion in revenue is lost annually due to poor web performance.
In today's age of Web 2.0, users click away if a website doesn't respond within 8 seconds. Imagine waiting 5 seconds when searching on Google or making a friend request on Facebook. The repercussions of performance downtime are often more devastating than ever imagined. Recent examples include the outages that hit Bank of America Online Banking, Amazon Web Services, Intuit and BlackBerry.
According to Dun & Bradstreet, 59% of Fortune 500 companies experience an estimated 1.6 hours of downtime every week. Considering that the average Fortune 500 company has a minimum of 10,000 employees paid $56 per hour, the labor portion of downtime costs alone would be $896,000 weekly, translating into more than $46 million per year.
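The downtime-cost arithmetic above can be reproduced directly from the figures cited (labor cost only):

```python
# Reproducing the downtime-cost arithmetic cited above (labor cost only).

employees = 10_000       # minimum headcount of an average Fortune 500 company
hourly_rate = 56         # dollars per employee-hour
downtime_hours = 1.6     # estimated downtime per week

weekly_cost = employees * hourly_rate * downtime_hours
yearly_cost = weekly_cost * 52

print(weekly_cost)  # 896000.0
print(yearly_cost)  # 46592000.0 (over $46 million per year)
```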
A mere 5-minute downtime of Google.com (19-Aug-13) was estimated to cost the search giant as much as $545,000.
It is estimated that companies lost sales worth $1,100 per second during a recent Amazon Web Services outage.
When a software system is deployed by an organization, it may encounter many scenarios that result in performance latency. A number of factors can slow performance; a few examples include:
·         An increased number of records present in the database
·         An increased number of simultaneous requests made to the system
·         A larger number of users accessing the system at a time compared to the past

What is LoadRunner Architecture?

Broadly speaking, the architecture of LoadRunner is complex, yet easy to understand.

Suppose you are assigned to check the performance of Amazon.com for 5,000 users.
   In a real-life situation, these 5,000 users will not all be on the homepage but in different sections of the website. How can we simulate them differently? The following components answer that question.
VUGen:
VUGen, or Virtual User Generator, is an IDE (Integrated Development Environment), or a rich coding editor. VUGen is used to replicate the behavior of the System Under Load (SUL). It provides a "recording" feature which records communication to and from the client and server in the form of a coded script - also called a VUser script.
So, considering the above example, VUGen can record scripts to simulate the following business processes:
1.     Surfing the Products Page of Amazon.com
2.     Checkout
3.     Payment Processing
4.     Checking My Account Page

Controller:
Once a VUser script is finalized, the Controller is the main component that controls the load simulation by managing, for example:
·         How many VUsers to simulate against each business process or VUser Group
·         Behavior of VUsers (ramp up, ramp down, simultaneous or concurrent nature etc.)
·         Nature of Load scenario e.g. Real Life or Goal Oriented or verifying SLA
·         Which injectors to use, how many VUsers against each injector
·         Collating results periodically
·         IP Spoofing
·         Error reporting
·         Transaction reporting etc.
Taking the analogy from our example, the Controller will add the following parameters to the VUGen script:
1) 3500 Users are Surfing the Products Page of Amazon.com
2) 750 Users are in Checkout
3) 500 Users are performing Payment Processing
4) 250 Users are Checking My Account Page ONLY after 500 users have completed Payment Processing
Even more complex scenarios are possible:
1.     Initiate 5 VUsers every 2 seconds till a load of 3500 VUsers (surfing Amazon product page) is achieved.
2.     Iterate for 30 minutes
3.     Suspend iteration for 25 VUsers
4.     Re-start 20 VUsers
5.     Initiate 2 users (in Checkout, Payment Processing, My Accounts Page) every second.
6.     2500 VUsers will be generated at Machine A
7.     2500 VUsers will be generated at Machine B
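As a rough illustration of the arithmetic behind step 1 (starting 5 VUsers every 2 seconds until 3,500 are active), the time to reach full load can be sketched as:

```python
# Sketch: ramp-up arithmetic for scenario step 1 above.
# 5 VUsers start every 2 seconds until the 3500 surfing VUsers are active.

def ramp_duration_s(target_vusers, batch_size, interval_s):
    """Number of batches times the interval between batches."""
    batches = target_vusers / batch_size
    return batches * interval_s

print(ramp_duration_s(3500, 5, 2))  # 1400.0 seconds (about 23 minutes) to full load
```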

Agents Machine/Load Generators/Injectors
The LoadRunner Controller is responsible for simulating thousands of VUsers. These VUsers consume hardware resources, such as processor and memory, which puts a limit on the machine simulating them. Besides, if the Controller simulated all of these VUsers from the same machine (where the Controller resides), the results might not be accurate. To address this concern, VUsers are spread across various machines, called Load Generators or Load Injectors.
As a general practice, the Controller resides on one machine and load is simulated from other machines. Depending on the protocol of the VUser scripts and the machine specifications, a number of Load Injectors may be required for full simulation. For example, a VUser running an HTTP script may require roughly 2-4 MB of memory, so simulating 10,000 VUsers calls for on the order of 20-40 GB of RAM spread across several Load Generator machines.
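A back-of-the-envelope sizing sketch based on the per-VUser memory figure above; the machine RAM values and the 20% OS/agent headroom are illustrative assumptions, not LoadRunner requirements.

```python
# Sketch: how many Load Injector machines a given VUser load might need.
# Per-VUser memory comes from the HTTP example above; RAM and the 20%
# headroom reserved for the OS and agent process are assumptions.
import math

def injectors_needed(vusers, mb_per_vuser, ram_gb_per_machine, usable_fraction=0.8):
    """Machines required, leaving some RAM headroom for the OS and agent."""
    usable_mb = ram_gb_per_machine * 1024 * usable_fraction
    vusers_per_machine = usable_mb // mb_per_vuser
    return math.ceil(vusers / vusers_per_machine)

print(injectors_needed(10_000, 2, 4))  # 7 machines at 2 MB/VUser on 4 GB boxes
print(injectors_needed(10_000, 4, 4))  # 13 machines if each VUser takes 4 MB
```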
Taking the analogy from our Amazon example, the output of this component is the load itself: for instance, 2,500 VUsers generated from Machine A and 2,500 from Machine B.
Analysis:
Once the load scenarios have been executed, the role of the "Analysis" component comes in.
During execution, the Controller creates a dump of the results in raw form, containing information such as which version of LoadRunner created the results dump and what the configurations were.
All errors and exceptions are logged in a Microsoft Access database named output.mdb. The "Analysis" component reads this database file to perform various types of analysis and generate graphs.
These graphs show various trends that help in understanding the reasoning behind errors and failures under load, and thus help to figure out whether optimization is required in the SUL, the server (e.g. JBoss, Oracle) or the infrastructure.
As an example, bandwidth can create a bottleneck: say the web server has 1 Gbps capacity while the data traffic exceeds it, causing subsequent users to suffer. To determine whether the system caters to such needs, the performance engineer needs to analyze application behavior under abnormal load; LoadRunner can generate a graph eliciting the bandwidth usage.


File Extensions:
VuGen: .usr is the file extension for scripts

Controller: .lrs is the file extension for scenarios; raw results are saved as .lrr

Analysis: .lra is the file extension for analysis sessions



Thursday, July 30, 2015

Common Performance Problems

Most performance problems revolve around speed, response time, load time and poor scalability. Speed is often one of the most important attributes of an application. A slow running application will lose potential users. Performance testing is done to make sure an app runs fast enough to keep a user's attention and interest. Take a look at the following list of common performance problems and notice how speed is a common factor in many of them:
  • Long load time - Load time is normally the initial time it takes an application to start. This should generally be kept to a minimum. While it is impossible to make some applications load in under a minute, load time should be kept under a few seconds if possible.
  • Poor response time - Response time is the time it takes from when a user inputs data into the application until the application outputs a response to that input. Generally this should be very quick. Again if a user has to wait too long, they lose interest.
  • Poor scalability - A software product suffers from poor scalability when it cannot handle the expected number of users or when it does not accommodate a wide enough range of users. Load testing should be done to be certain the application can handle the anticipated number of users.
  • Bottle-necking - Bottlenecks are obstructions in a system which degrade overall system performance. Bottlenecking occurs when either coding errors or hardware issues cause a decrease in throughput under certain loads. It is often caused by one faulty section of code. The key to fixing a bottlenecking issue is to find the section of code that is causing the slowdown and try to fix it there. Bottlenecking is generally fixed either by fixing poorly running processes or by adding additional hardware.

  • Some common performance bottlenecks are:
    • CPU utilization
    • Memory utilization
    • Network utilization
    • Operating System limitations
    • Disk usage

Types of Performance Testing

Below are the types of Performance Testing:
Load Testing
  • Checks the application's ability to perform under anticipated user loads. The objective is to identify performance bottlenecks before the software application goes live.
  • Checks the system's performance by constantly increasing the load on the system until the load reaches its threshold value.
  • Load testing is performed to determine how much load the application under test can withstand.
  • Load testing comes under Non-Functional Testing; it is designed to test the non-functional requirements of a software application.
Simple examples of Load Testing:
  • Testing a printer by sending it a large job.
  • Editing a very large document to test a word processor.
  • Continuously reading from and writing data to a hard disk.
  • Running multiple applications simultaneously on a server.
  • Testing a mail server by accessing thousands of mailboxes.

Stress Testing:
  • Involves testing an application under extreme workloads to see how it handles high traffic or data processing. The objective is to identify the breaking point of an application.
  • Determines or validates an application's behavior when it is pushed beyond normal or peak load conditions.
  • Stress testing is negative testing, where we load the software with a number of concurrent users/processes that cannot be handled by the system's hardware resources.
  • Stress testing comes under Non-Functional Testing; it is designed to test the non-functional requirements of a software application.

Endurance Testing
  • Is done to make sure the software can handle the expected load over a long period of time.
  • Endurance testing involves testing a system with an expected amount of load over a long period of time to observe the behavior of the system.
  • For example, if a system is designed to work for 3 hours at a stretch, the same system is made to endure for 6 hours to check its staying power. Test cases most commonly check for behavior such as memory leaks, system failures or random behavior.
  • Endurance testing is sometimes also referred to as Soak testing.
  • Soak testing involves testing a system with a significant load extended over a significant period of time.

Spike Testing
  • Spike testing is a subset of Stress Testing.
  • Tests the software's reaction to sudden large spikes in the load generated by users.
  • Spike testing is done by suddenly increasing the load generated by a very large number of users, and observing the behavior of the system. The goal is to determine whether performance will suffer, the system will fail, or it will be able to handle dramatic changes in load.

Volume Testing

  • Under Volume Testing, a large amount of data is populated in the database and the overall software system's behavior is monitored. The objective is to check the software application's performance under varying database volumes.
  • The purpose of volume testing is to determine system performance with increasing volumes of data in the database.
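As an illustration of populating test data for a volume test, here is a minimal sketch using an in-memory SQLite database; the table name, columns and row count are assumptions for illustration, not part of any specific tool.

```python
# Sketch: bulk-populating a database for a volume test.
# Table schema and row volume are illustrative assumptions.
import sqlite3

def populate(conn, rows):
    """Create a sample table and insert the requested number of rows."""
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
    conn.executemany(
        "INSERT INTO orders (amount) VALUES (?)",
        ((i * 0.5,) for i in range(rows)),
    )

conn = sqlite3.connect(":memory:")
populate(conn, 100_000)  # grow the volume between runs: 100k, 1M, 10M ...
count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # 100000
```

Between runs, the row count is increased and the same measurements are repeated to see how performance degrades with database volume.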

Scalability Testing:
  • The objective of scalability testing is to determine the software application's effectiveness in "scaling up" to support an increase in user load. It helps plan capacity additions to your software system.
  • It is a type of non-functional testing.
  • It tests the ability of a system, a network or a process to continue to function well when it is changed in size or volume in order to meet a growing need.
  • It measures a software application's capability to scale up in terms of any of its non-functional capabilities, such as the load supported, the number of transactions, the data volume, etc.

Performance Testing

Introduction:
  • How many pounds of weight can you lift in the gym? 200 pounds, 300 pounds, 1,000 pounds? Every person has a breaking point: mental, physical and intellectual. The same principle applies in load testing.
  • How much load can a server handle at a specific point in time? 100 users, 200 users, 10K users? Every server has a breaking point.
  • The main objective of load testing is to study the behavior of the application under test (AUT) under specific load circumstances.
  • Load testing helps engineers understand the application's behavior and availability.


Load testing addresses the following questions:
·         Does the latest code show the desired performance?
·         Does the new hardware configuration perform well?
·         Does the newly implemented business rule or feature have a significant effect?
·         Does the upgraded environment work within limits?

What is Performance Testing?
Performance testing is the process of determining the speed or effectiveness of the System Under Test, and of verifying quality attributes of the system such as responsiveness, speed, scalability and stability under a variety of load conditions.

Performance can be classified into three main categories:
• Speed – Does the application respond quickly enough for the intended users?
• Scalability – Will the application handle the expected user load and beyond?
• Stability – Is the application stable under expected and unexpected user loads?

Why performance testing?


• Does the application respond quickly enough for the intended users?
• Will the application handle the expected user load and beyond?
• Will the application handle the number of transactions required by the business?
• Is the application stable under expected and unexpected user loads?