Performance Test Methodology

By Shane posted 10-29-2015 14:38

  

Jama Performance Testing Methodology

The following sections detail the testing environment, including the hardware specifications and methodology used to perform the performance tests in the Jama Performance Benchmarking 2015.2 article.

Dataset Profile and Size:

Before beginning the performance testing, we surveyed our customers to form a picture of their environments and the difficulties they face when scaling Jama in an enterprise organization. The table below outlines three dataset-size configurations based on customer feedback and our own knowledge of customer data configurations. We performed tests for each dataset size but focused on the large configuration for the results covered in this report. Jama will continue to revisit these configurations to ensure they remain representative of our customers' environments.

 Chart 1.1

* Data profile used to perform tests contained in this report

Usage Patterns and Test Actions:

In addition to the dataset configurations, Jama also established a common set of user actions performed in the product. An "action" in this context is an isolated activity or operation a user can perform, such as "creating a requirement" or "searching for an Item".

The following table details the actions included in the test scripts, the minimum number of times each action is repeated during a single test run, and the wait time set between test actions for each concurrent user. The minimum number of repetitions per test run was established to ensure a measurable run; results must be within 10% of each other between comparison runs of 2015.1 and 2015.2.

Chart 1.2
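The matrix in Chart 1.2 can be expressed in a test harness along the following lines. This is only a minimal sketch: the action names, repetition counts, and wait times are illustrative placeholders, not the values used in the actual benchmark runs.

```python
# A minimal sketch of how an action matrix like Chart 1.2 could be expressed in a
# test harness. The action names, repetition counts, and wait times below are
# illustrative placeholders, not the values used in the actual benchmark runs.
from dataclasses import dataclass


@dataclass
class TestAction:
    name: str             # isolated user operation, e.g. "creating a requirement"
    min_repetitions: int  # minimum executions per test run to get a measurable result
    wait_seconds: int     # wait time between executions for each concurrent user


TEST_ACTIONS = [
    TestAction("create_requirement", min_repetitions=50, wait_seconds=60),
    TestAction("search_item",        min_repetitions=50, wait_seconds=30),
]


def within_tolerance(baseline_ms: float, candidate_ms: float, tolerance: float = 0.10) -> bool:
    """Comparison criterion from the methodology: results between two runs
    (e.g. 2015.1 vs 2015.2) must agree within 10% to be considered valid."""
    return abs(candidate_ms - baseline_ms) <= tolerance * baseline_ms
```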

 

Environment:

The performance testing was conducted on isolated systems in Jama's QA environment in order to control the results of our testing. For each test action, the entire environment was reset, and each test started with a brief idle cycle to load the cache and ramp up concurrent usage. An automation framework was used to exercise each isolated test action through a REST API call. Response times were measured for every test action, and system KPIs (CPU utilization, memory utilization, load averages, throughput, etc.) were monitored throughout the test session to ensure the system stayed within healthy performance thresholds.
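As a rough illustration of the measurement approach, the sketch below times a single isolated test action driven through a REST API call. The base URL, endpoint path, and credentials are assumed placeholders and do not represent Jama's internal automation framework or the endpoints used in these tests.

```python
# A rough sketch of timing one isolated test action driven through a REST API
# call. The base URL, endpoint path, and credentials are placeholders and do not
# represent the internal automation framework or the endpoints used in the tests.
import time

import requests

BASE_URL = "https://jama.example.com/rest/latest"  # placeholder host


def timed_action(session: requests.Session, method: str, path: str, **kwargs) -> float:
    """Execute a single REST call and return its response time in milliseconds."""
    start = time.perf_counter()
    response = session.request(method, BASE_URL + path, **kwargs)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    response.raise_for_status()  # a failed call should not count as a valid sample
    return elapsed_ms


if __name__ == "__main__":
    session = requests.Session()
    session.auth = ("perf_user", "placeholder_password")  # placeholder credentials
    sample_ms = timed_action(session, "GET", "/items/123")  # placeholder item id
    print(f"retrieve item took {sample_ms:.1f} ms")
```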

Each test action is executed per concurrent user according to a wait factor between each test (Chart 1.2). Note that this approach simulates a larger and more consistent load than would be expected in a typical user environment or real-world scenario. This was done to demonstrate performance improvements relative to code changes in specific areas and to achieve reliable, repeatable, deterministic results. For instance, the tests included creating, deleting, retrieving, and updating requirements every 60 seconds for each concurrent user in a given test session, which provides a higher level of confidence that the load placed on the system exceeds what a typical user would generate.
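The per-user load pattern described above could be approximated as in the following sketch, where the user count, action, and wait interval are illustrative assumptions only:

```python
# A simplified sketch of the per-concurrent-user load pattern described above:
# each simulated user repeats the same isolated action on a fixed wait interval
# (e.g. every 60 seconds) for the duration of the session. The user count,
# action, and interval here are illustrative assumptions, not the benchmark setup.
import threading
import time


def simulate_user(user_id: int, action, wait_seconds: float, repetitions: int, results: list) -> None:
    for _ in range(repetitions):
        results.append((user_id, action()))  # record (user, response time) sample
        time.sleep(wait_seconds)


def run_session(concurrent_users: int, action, wait_seconds: float = 60.0, repetitions: int = 10) -> list:
    results: list = []
    threads = [
        threading.Thread(target=simulate_user, args=(uid, action, wait_seconds, repetitions, results))
        for uid in range(concurrent_users)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results  # samples to feed into response-time and KPI analysis


if __name__ == "__main__":
    # 25 concurrent users each running a dummy action once, with no wait for brevity.
    samples = run_session(25, action=lambda: 42.0, wait_seconds=0.0, repetitions=1)
    print(f"collected {len(samples)} samples")
```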

The table below provides the specifications of our test environment:

Chart 1.3

 

* System profile used to perform tests contained in this report

* These headings, although similar to those in our User Guide, are used for different purposes and are not intended to represent the same information.


