Performance Testing:
Performance testing is a process by which we validate whether the application meets all its performance-related Non-Functional Requirements. Performance testing also helps to identify potential performance bottlenecks that prevent the application from meeting those requirements.
Non Functional Requirements:
Response time for each user action (e.g. page navigation, search, submit, etc.)
Average and Peak Transactions Per second
Average and Maximum concurrent users
Availability (e.g. 24x7)
The server utilization at peak load in terms of CPU & Memory utilization
When to do Performance Testing:
Capacity assessment for a new application environment
Early performance testing at unit level
Before a new application goes to production
Whenever there is a functional change or enhancement made to the application
Migration or upgrading of existing hardware or software
To identify the reasons for existing performance issues or bottlenecks for a live application
Types of Performance Testing:
Load
Stress
Spike
Volume
Endurance
Load Test:
Determine if an application can meet a desired service level under real world volumes
During this test the Vusers will be gradually ramped up to the desired load to ensure that the application can smoothly scale up to the required load.
Stress Test:
Determine the maximum load (typically the number of concurrent users/transactions) that the application can service, i.e. the application's breaking point
The users will be ramped up to the point where there is degradation in the application performance.
This test helps to identify the breaking point of the application and the maximum users the system can handle without any significant performance degradation
Spike Test:
Simulate a sudden increase in the number of concurrent users performing a specific transaction to determine the server behavior under abnormal traffic conditions
Checking the system stability and availability under these conditions
Volume Test:
Testing the application performance for the current and future database volumes
To ensure the application can handle the maximum size of data values
Measures the pattern of response when the database size is increased
Checks for Disk Space usage and processor times.
Endurance Test:
Subject an application to a pre-defined set of transaction scenarios continuously and repetitively for an extended period of time to find the small problems that grow over time (e.g. memory leaks)
To check the availability of the system over a long period of time
Enables you to find whether memory is properly released and to track memory usage in applications
Common Terms in Performance Testing:
Application Under Test (AUT): The software application(s) being tested
System Under Test (SUT): The hardware & operating environment(s) being tested
Virtual User: Software process that simulates real user interactions with the AUT
Navigation Flow: A user function within the AUT
Scenario: A set of navigation flows defined for a set of virtual users to execute.
Think Time: Time taken by the user between page clicks
Ramp up: Gradual increase of Vusers during controller execution
Throughput: It is the amount of work that a computer can do in a given time period
Transaction: A subsection of the measured workflow; a more granular user event for which response time will be measured
Bottleneck: A load point at which the SUT/AUT suffers significant degradation
Breakpoint: A load point at which the SUT/AUT suffers degradation to the point of malfunction
Scalability: The relative ability or inability of the AUT/SUT to deliver consistent performance regardless of workload size
Response Time: The time elapsed between when a request is made and when that request is fulfilled
Vuser Scripts
• The actions that a Vuser performs during the scenario are described in a Vuser script.
• When you run a scenario, each Vuser executes a Vuser script. Vuser scripts include functions that measure and record the performance of the server during the scenario.
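For illustration, a minimal Web Vuser script might look like the following sketch; the URL, step name, and think-time value are hypothetical placeholders, not taken from any real recording:

    Action()
    {
        // Pause to emulate the time a real user takes between clicks
        lr_think_time(5);

        // Request the application home page (URL is a placeholder)
        web_url("home",
                "URL=http://myserver/home",
                "Resource=0",
                "Mode=HTML",
                LAST);

        return 0;
    }

VuGen generates Web Vuser scripts in C, so the calls above are ordinary C statements using the LoadRunner API.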
Transactions
• To measure the performance of the server, you define transactions.
• Transactions measure the time that it takes for the server to respond to tasks submitted by Vusers.
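As a sketch, a transaction simply brackets the steps whose response time you want to measure; the transaction and step names below are hypothetical:

    // Start timing the business step
    lr_start_transaction("deposit");

    // Web functions return LR_PASS or LR_FAIL, so the result can
    // be used to set the transaction status explicitly
    if (web_url("deposit", "URL=http://myserver/deposit", LAST) == LR_PASS)
        lr_end_transaction("deposit", LR_PASS);
    else
        lr_end_transaction("deposit", LR_FAIL);

Alternatively, passing LR_AUTO to lr_end_transaction lets LoadRunner decide the status automatically.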
Rendezvous Points
• You insert rendezvous points into Vuser scripts to emulate heavy user load on the server.
• Rendezvous points instruct multiple Vusers to perform tasks at exactly the same time.
• For example, to emulate peak load on the bank server, you insert a rendezvous point to instruct 100 Vusers to simultaneously deposit cash into their accounts.
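A sketch of that bank example follows; the rendezvous and transaction names are illustrative. Each Vuser blocks at the rendezvous until the release policy lets the waiting Vusers proceed together:

    // All Vusers wait here until released simultaneously
    lr_rendezvous("deposit_cash");

    lr_start_transaction("deposit_cash");
    web_url("deposit",
            "URL=http://bankserver/deposit",
            LAST);
    lr_end_transaction("deposit_cash", LR_AUTO);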
Controller
• You use the LoadRunner Controller to manage and maintain your scenarios.
• Using the Controller, you control all the Vusers in a scenario from a single workstation.
Hosts
• When you execute a scenario, the LoadRunner Controller distributes each Vuser in the scenario to a host.
• The host is the machine that executes the Vuser script, enabling the Vuser to emulate the actions of a human user.
Performance Analysis
• Vuser scripts include functions that measure and record system performance during load-testing sessions.
• During a scenario run, you can monitor the network and server resources.
• Following a scenario run, you can view performance analysis data in reports and graphs.
VuGen (Virtual User Generator) – records Vuser scripts that emulate the steps of real users using the application
• The Controller is an administrative center for creating, maintaining, and executing scenarios. Starts and stops load tests, and perform other Administrative tasks
LR Analysis uses the load test results to create graphs and reports that are used to correlate system information and identify both bottlenecks and performance issues.
Tuning helps you quickly isolate and resolve performance bottlenecks. By adding a centralized tuning console to LoadRunner, the Mercury Tuning Module ensures that performance bottlenecks are resolved during testing, and helps you determine the optimized configuration settings for production.
• VuGen not only records Vuser scripts, but also runs them. Running scripts from VuGen is useful for debugging
• VuGen records sessions on Windows platforms only. However, a recorded Vuser script can run on both Windows and UNIX platforms
Process of Recording a Script
• Record a basic script
• Enhance the basic script by adding control-flow statements and other Mercury API functions into the script
• Configure the run-time settings
• Verify that the script runs correctly by running it in stand-alone mode
• Integrate the script into your test: a LoadRunner scenario, Performance Center load test, Tuning Module session, or Business Process Monitor profile
What We Can Do:
Ø Set up recording options
Ø Record the scripts
Ø Add Comments
Ø Insert Start and End Transactions
Ø Perform Correlation
Ø Add Checks (a sketch follows this list)
Ø Add C programming statements wherever required
Ø Insert LoadRunner functions if required
Ø Do parameterization
Ø Add rendezvous points
Ø Create multiple actions if required
Ø Configure run-time settings
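As an example of adding a check, a minimal text check on a Web page might look like this sketch (the expected text and URL are hypothetical):

    // Register the check BEFORE the step it applies to;
    // the step fails if the text is not found in the response
    web_reg_find("Text=Welcome to MyBank",
                 "Fail=NotFound",
                 LAST);

    // The registered check runs against this response
    web_url("home",
            "URL=http://myserver/home",
            LAST);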
Enhancing Vuser Script
Ø Inserting Transactions into Vuser Script
Ø Inserting Rendezvous point
Ø Inserting Comments
Ø Obtaining Vuser Information
Ø Sending Messages to Output
- Log messages: lr_log_message
- Debug messages: lr_set_debug_message, lr_debug_message
- Error and output messages: lr_error_message, lr_output_message
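A short sketch of these message functions in use; the message text is illustrative:

    // Written to the Vuser execution log only
    lr_log_message("Starting the deposit flow");

    // Written to the Controller Output window as well as the log
    lr_output_message("Deposit flow started");

    // Enable extended debug logging, emit a debug message, then disable it
    lr_set_debug_message(LR_MSG_CLASS_EXTENDED_LOG, LR_SWITCH_ON);
    lr_debug_message(LR_MSG_CLASS_EXTENDED_LOG, "Extra diagnostic detail");
    lr_set_debug_message(LR_MSG_CLASS_EXTENDED_LOG, LR_SWITCH_OFF);

    // Flagged as an error in the output and logs
    lr_error_message("Deposit failed for the current Vuser");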
Handling errors on Vuser Script during execution (Runtime settings > Miscellaneous > Error handling)
• By default when a Vuser detects an error, the Vuser stops the execution
• You can use the lr_continue_on_error function to override the Continue on Error run-time setting for a specific segment of the script
• To mark the segment, enclose it between lr_continue_on_error(1); and lr_continue_on_error(0); statements
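For example, a non-critical step can be wrapped so that its failure does not stop the Vuser (the banner request below is a hypothetical example):

    // Errors in this segment will not stop the Vuser
    lr_continue_on_error(1);

    web_url("banner",
            "URL=http://myserver/banner",
            LAST);

    // Restore the behavior defined in the run-time settings
    lr_continue_on_error(0);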
Synchronizing Vuser Script
• Synchronize the execution of Vuser script with the output from your application
• Synchronization applies only to RTE Vuser scripts
Parameterizing
• Parameterization involves the following two tasks:
• Replacing the constant values in the Vuser script with parameters
• Setting the properties and data source for the parameters
• Parameterization Limitations
• You can use parameterization only for the arguments within a function
• You can’t parameterize text strings that are not function arguments
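As a sketch of the first task, a recorded constant becomes a {parameter} that VuGen substitutes from the configured data source; the form fields and parameter names below are hypothetical:

    // Before parameterization, the recording contained hard-coded
    // values such as "Value=john". After parameterization, the
    // {pUserName} and {pPassword} parameters are filled from the
    // parameter data source (e.g. a .dat file).
    web_submit_form("login",
        ITEMDATA,
        "Name=username", "Value={pUserName}", ENDITEM,
        "Name=password", "Value={pPassword}", ENDITEM,
        LAST);

    // If the current value is needed in C code, evaluate it:
    lr_log_message("Logging in as %s", lr_eval_string("{pUserName}"));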
Parameterization:
Update Value on
• Each iteration
Ø Instructs the Vuser to use a new value for each script iteration
• Each occurrence
Ø Instructs the Vuser to use a new value for each occurrence of the parameter
• Once
Ø Instructs the Vuser to update the parameter value only once during the execution
• Save dynamic server values using web_reg_save_param (Web protocol) and lrs_save_param (Windows Sockets protocol)
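A sketch of correlation with web_reg_save_param; the parameter name and boundaries are hypothetical and must be taken from the actual server response:

    // Register BEFORE the step whose response contains the dynamic value
    web_reg_save_param("pSessionId",
                       "LB=sessionid=",   // left boundary
                       "RB=\"",           // right boundary
                       "Ord=1",           // first occurrence
                       LAST);

    web_url("login",
            "URL=http://myserver/login",
            LAST);

    // Reuse the captured dynamic value in a later request
    web_url("account",
            "URL=http://myserver/account?session={pSessionId}",
            LAST);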
Controller
What is a Scenario?
A scenario is a file that defines the Vuser scripts to execute, the number of Vusers to run, the goals of the test, the machines that host the Vusers, and the conditions under which to run the load test
• Controller organizes and manages scenario elements
• During scenario execution, the Controller:
• Runs Vuser Groups
• Controls the initialize, run, pause, and stop conditions of each Vuser
• Displays the status of each Vuser
• Displays any messages from Vusers
• Monitors system and network resources
• Types of Scenarios
Ø Manual Scenario
Manage your Load Test by specifying the number of Virtual users to run
Ø Goal-Oriented Scenario
Allow LoadRunner Controller to create a Scenario based on the goals you specify
• Manual Scenario
• You control the number of running Vusers and the times at which they run
• You can specify how many Vusers run simultaneously
• Allows you to run the Vusers in percentage mode
• Goal-Oriented Scenario
• You define the goal you want your test to achieve
• The goal may be the number of hits per second, the number of transactions per second, etc.
• LoadRunner manages the Vusers automatically to achieve and maintain the goal
Vuser Groups
• A scenario consists of groups of Vusers that emulate human users interacting with your application
• Each script you select is assigned a Vuser group
• Each Vuser group is assigned a number of Vusers
• You can assign a different script to each Vuser group, or assign the same script to all of them
• Adding Vuser Group
• Group Name
• Vuser Quantity
• Load Generator name
Load Generator for your Scenario
• Load Generator is a machine that serves as the host for running Vusers
• It is important to know which script needs to be run from which location
• For example, customer activity varies by location, as does the workload each location generates
Adding Load Generator
• Click the Generators button to open the Load Generators dialog box
• Click the Add button to open the Add Load Generator dialog box
• Enter the name and platform of the load generator you want to add
• A machine must have the LoadRunner Agent installed before it can be used as a load generator
Analysis:
Analysis provides graphs and reports to help you analyze the performance of your system. These graphs and reports summarize the scenario execution. Using them, you can easily pinpoint and identify the bottlenecks in your application.
To view a summary of the results after test execution, you can use one or more of the following tools:
• Vuser log files contain a full trace of the scenario run for each Vuser. These files are located in the scenario results directory.
• Controller Output window displays information about the scenario run.
• Analysis graphs help you determine system performance and provide information about transactions and Vusers.
• Graph Data and Raw Data views display the actual data used to generate the graph in a spreadsheet format.
• Report utilities enable you to view a Summary HTML report for each graph or a variety of Performance and Activity reports. You can create a report as a Microsoft Word document, which automatically summarizes and displays the test’s significant data in graphical and tabular format.
Creating Analysis Session
• When you run a scenario, data is stored in a result file with an .lrr extension. Analysis is the utility that processes the gathered result information and generates graphs and reports.
• When you work with the Analysis utility, you work within a session. An Analysis session contains at least one set of scenario results (lrr file). Analysis stores the display information and layout settings for the active graphs in a file with an .lra extension.
Methods of opening LoadRunner Analysis
• Open Analysis directly from the controller (Results > Analyze Results)
• Start > Programs > Mercury LoadRunner > Applications > Analysis
• Start > Programs > Mercury LoadRunner > LoadRunner, select the Load Testing or Tuning tab, and then click Analyze Load Tests or Analyze Tuning Sessions.
• You can also instruct the Controller to open Analysis automatically after scenario execution by selecting Results > Auto Analysis
Collating Execution Results
• When you run a scenario, by default all Vuser information is stored locally on each Vuser host
• After scenario execution the results are automatically collated or consolidated: results from all the hosts are transferred to the results directory
• You can disable automatic collation by clearing Results > Auto Collate Results in the Controller window
• You can collate manually by selecting Results > Collate Results
• If your results are not collated, Analysis will automatically collate them before generating the analysis data
Analysis: Tools > Options
Generate summary data only
View summary data only. If this option is selected, Analysis will not process the data for advanced use with filtering.
Generate complete data only
View only the complete data after it has been processed. Do not display the summary data.
Display summary while generating complete data
View summary data while the complete data is being processed. After the processing, view the complete data. A bar below the graph indicates the progress of complete data generation.
Data Aggregation
Aggregate Data:
Specify the data you want to aggregate in order to reduce the size of the database.
• Select the type of data to aggregate:
Specify the type(s) of graphs for which you want to aggregate data.
• Select the graph properties to aggregate:
Specify the graph properties (Vuser ID, Group Name, and Script Name) that you want to aggregate. If you do not want to aggregate the failed Vuser data, select Do not aggregate failed Vusers.
Setting Database Options
• You can choose the database in which to store Analysis session result data and you can repair and compress your Analysis results and optimize the database that may have become fragmented.
• By default, LoadRunner stores Analysis result data in an Access 2000 database.
• If your Analysis result data exceeds two gigabytes, it is recommended that you store it on an SQL server
Analysis Graphs
• User-Defined Data Point Graphs - Provide information about the custom data points that were gathered by the online monitor.
• System Resource Graphs - Provide statistics relating to the system resources that were monitored during the scenario using the online monitor.
• Network Monitor Graphs - Provide information about the network delays.
• Firewall Server Monitor Graphs - Provide information about firewall server resource usage.
• Web Server Resource Graphs - Provide information about the resource usage for the Apache, iPlanet/Netscape, iPlanet(SNMP), and MS IIS Web servers.
• Web Application Server Resource Graphs - Provide information about the resource usage for various Web application servers.
• Database Server Resource Graphs - Provide information about database resources.
• Streaming Media Graphs - Provide information about resource usage of streaming media.
• ERP/CRM Server Resource Graphs - Provide information about ERP/CRM server resource usage.
• Java Performance Graphs - Provide information about resource usage of Java-based applications.
• Application Component Graphs - Provide information about resource usage of the Microsoft COM+ server and the Microsoft .NET CLR server.
• Application Deployment Solutions Graphs - Provide information about resource usage of the Citrix MetaFrame XP and 1.8 servers.
• Middleware Performance Graphs - Provide information about resource usage of the Tuxedo and IBM WebSphere MQ servers.
• Security Graphs - Provide information about simulated attacks on the server using the Distributed Denial of Service graph.
• Application Traffic Management Graphs - Provide information about resource usage of the F5 BIG-IP server.
• Infrastructure Resources Graphs - Provide information about resource usage of FTP, POP3, SMTP, IMAP, and DNS Vusers on the network client.
• Siebel Diagnostics Graphs - Provide detailed breakdown diagnostics for transactions generated on Siebel Web, Siebel App, and Siebel Database servers.
• Siebel DB Diagnostics Graphs - Provide detailed breakdown diagnostics for SQLs generated by transactions on the Siebel system.
• Oracle Diagnostics Graphs - Provide detailed breakdown diagnostics for SQLs generated by transactions on the Oracle NCA system.
• J2EE Diagnostics Graphs - Provide information to trace, time, and troubleshoot individual transactions through J2EE Web, application, and database servers.
Filtering & Sorting Graph Data
You can filter and sort data that is displayed in a graph. You sort and filter graph data using the same dialog box.
Filtering Graph Data
• You can filter graph data to show fewer transactions for a specific segment of the scenario.
• For example, you can display four transactions beginning five minutes into the scenario and ending three minutes before the end of the scenario.
• You can filter a single graph, all graphs in a scenario, or the summary graph.
Sorting Graph Data
• You can sort graph data to show the data in more relevant ways.
• For example, Transaction graphs can be grouped by the Transaction End Status, and Vuser graphs can be grouped by Scenario Elapsed Time, Vuser End Status, Vuser Status, and VuserID.
Transaction Report
• Transaction reports provide performance information about the transactions defined within the Vuser scripts. These reports give you a statistical breakdown of your results and allow you to print and export the data.
• Transaction Reports are divided into the following categories
• Activity
• Performance
Ø Data Point, Detailed Transaction, Transaction Performance by Vuser
• Activity reports provide information about the number of Vusers and the number of transactions executed during the scenario run. The available Activity reports are Scenario Execution, Failed Transaction, and Failed Vusers.
• Performance reports analyze Vuser performance and transaction times. The available Performance reports are Data Point, Detailed Transaction, and Transaction Performance by Vuser.
Technology Specific Monitors
Performance Details
• Web Server Metrics
• SQL Server Metrics
• Throughput versus user load
• Response time versus user load
• Resource utilization versus user load
• Potential Bottlenecks
Best Practices for Performance Testing - Do
• Clear the application and database logs after each performance test run. Excessively large log files may artificially skew the performance results.
• Identify the correct server software and hardware to mirror your production environment.
• Use a single graphical user interface (GUI) client to capture end-user response time while a load is generated on the system. You may need to generate load by using different client computers, but to make sense of client-side data, such as response time or requests per second, you should consolidate data at a single client and generate results based on the average values.
• Include a buffer time between the incremental increases of users during a load test.
• Use different data parameters for each simulated user to create a more realistic load simulation.
• Monitor all computers involved in the test, including the client that generates the load. This is important because you should not overly stress the client.
• Prioritize your scenarios according to critical functionality and high-volume transactions.
• Use a zero think time if you need to fire concurrent requests. This can help you identify bottleneck issues.
• Stress test critical components of the system to assess their independent thresholds.
Don’t:
• Do not allow the test system resources to cross resource threshold limits by a significant margin during load testing, because this distorts the data in your results.
• Do not run tests in live production environments that have other network traffic. Use an isolated test environment that is representative of the actual production environment.
• Do not try to break the system during a load test. The intent of the load test is not to break the system. The intent is to observe performance under expected usage conditions. You can stress test to determine the most likely modes of failure so they can be addressed or mitigated.
• Do not place too much stress on the client test computers.
- Run until completion – Runs for the specified number of iterations
- Run for – Runs the scenario for the specified duration of time
- Run indefinitely – Runs the scenario indefinitely, for testing the stability of the application (endurance testing)
- Load Generators at run-time:
- Results are stored locally on each load generator
- After execution, results from all load generators are transferred to the Controller's results directory for analysis
- LoadRunner Controller at run-time:
- Saves transaction and performance monitor data
- Synchronizes Vusers via the rendezvous function (optional)
- Collects error and notification messages generated by Vusers
Hits per Second:
A hit is a request for information made by a virtual client to the application being tested; in other words, an HTTP request.
Hits per Second Graph:
It shows the number of HTTP requests made by Vusers to the Web server during each second of the scenario run. This graph helps us evaluate the amount of load the Vusers generate in terms of the number of hits; we can compare this graph to the Average Transaction Response Time graph to see how the number of hits affects transaction performance.
Throughput: The amount of data received by a client from the server, in KB.
It is measured in bytes and represents the amount of data that the Vusers received from the server at any given second. The throughput takes into account each resource and its size.
Response Time: An important client-side statistic based on which the performance of the application is measured.
Rendezvous Point: Instructs Vusers to wait during test execution for multiple Vusers to arrive at a certain point, so that they can perform a task simultaneously.
Hit: Defined as a request received by the server.
Vuser Limit: Limits the execution of a transaction by the number of Vusers rather than by the number of iterations.
Types of graph:
Web Page Breakdown Graph: shows the download time of each page and its component resources
Vuser Graph: Running Vuser Graphs
Vuser Summary Graphs
Rendezvous Graphs
Web Resource Graph: plotted against Elapsed time
Merge Graphs: view the contents of two graphs that share a common x-axis, either in a tiled layout (one above the other) or as an overlay
Connections per Second Graph: shows the number of new TCP/IP connections opened and the number of connections shut down (y-axis) during each second of the scenario (x-axis)
Steps in the LoadRunner Process
1. Requirements:
The requirement collection phase includes
Initial analysis
Study of the application
The deliverable to the client is the Requirement Collection document
Analysis phase includes
Identifying system components
Describing the System Configuration
Analyzing the Usage Model
Defining Testing Objectives
2. Test Plan:
The test plan phase includes
Laying out of the detailed plan that is based on the Requirements Collection document
The deliverable to the client is a Test Plan document and Transaction traversal document with the following contents:
Objective
Resource Requirements (Hardware, Software, People)
Technical Requirements (Features to be & not to be tested, Test Entry & Exit Criteria, Test data)
Customer Requirements
Test Deliverables
Risk Assessment
Limitations and Assumptions
Task Summary
3. Test Design
Test Design phase includes
- Designing and development of the test scripts
- Monitoring tools required for each component in the architecture of the system are identified
Deliverable for this phase
- Test design document, which is an internal document
- Test scripts
4. Test Execution
Test execution phase includes
- Running the actual tests
- Collecting statistics for analysis
- Smoke Run
- A preliminary run performed at the start of test execution
- Aims at verifying the stability and correctness of load generation
- Aims at verifying that results are received from the monitors
5. Analysis of Results
- Most important phase.
- The result data is analyzed for system performance and bottlenecks are identified at the end of the test run
- Deliverable to the client is the preliminary report
§ Focuses mainly on the key findings
§ Recommendations for the current setup
§ (Note: Steps 4 and 5 are an iterative process and are executed until the performance of the application is satisfactory)
6. Report Submission
- At the end of performance tests
§ All the runs performed are correlated
§ Summarized into an Executive summary report
- The deliverable is Executive Summary Report, with the following contents:
§ Objectives
§ References
§ Test Setup (Hardware Configurations, Software Configurations, Tool Settings, List of Transactions, Scenarios Tested, Client machines configurations, Transaction generation pattern)
§ Test Results (List of Significant Runs, Scalability Statistics, Response Time Statistics, Server Side Statistics)
§ Findings, Recommendations and Test Log
§ Appendix
Supporting Environments
• Application Deployment Solution - The Citrix protocol.
• Client/Server - MS SQL, ODBC, Oracle Web Applications 11i, DB2 CLI, Sybase Ctlib, Sybase Dblib, Windows Sockets, and DNS protocols.
• Custom - C templates, Visual Basic templates, Java templates, JavaScript, and VBScript type scripts.
• Distributed Components - COM/DCOM, CORBA-Java, and RMI-Java protocols.
• E-Business - FTP, LDAP, Palm, Web (HTTP/HTML), Web Services, and the dual Web/WinSocket protocols.
• Enterprise Java Beans - EJB Testing and RMI-Java protocols.
• ERP/CRM - Baan, Oracle NCA, PeopleSoft 8, PeopleSoft-Tuxedo, SAP-Web, SAPGUI, SAPGUI/SAP-Web dual, and Siebel (Siebel-DB2 CLI, Siebel-MSSQL, Siebel-Web, and Siebel-Oracle) protocols.
• Legacy - Terminal Emulation (RTE).
• Mailing Services - Internet Messaging (IMAP), MS Exchange (MAPI), POP3, and SMTP.
• Streaming - MediaPlayer and RealPlayer protocols.
• Wireless - i-Mode, VoiceXML, and WAP protocols.
• Platforms
• Windows NT, 2000, XP
• Sun
• HP
• IBM
• Linux