Jama Performance Benchmarking 2015.2

By Shane posted 10-28-2015 13:58

  
Jama Performance 2015.2

Performance planning for an on-premises implementation of Jama

Introduction
When Jama administrators think about how to scale Jama across their user base, they often focus on the operating environment, the number of users in the system at any given time, or the projected database size needed to accommodate the requirements, projects, and test cases for a single Jama instance.  While each of these factors is important, it is ultimately the combination of all of them, and many others, that determines overall system performance.

Jama does offer a SaaS[i]-based solution, hosted in an environment that we monitor and maintain for optimal performance on behalf of many of our customers.  If you have chosen to manage your own environment, this article is designed to help you scale your implementation and understand the performance we’ve achieved during our internal testing.

The purpose of this document is to outline Jama’s performance test infrastructure and how Jama performs across different versions, configurations, and usage patterns.  So whether you’re new to Jama or a seasoned Jama administrator interested in taking Jama to the next level, the information outlined in this document is intended to help you maintain your overall system performance.

Determining the scale of a single Jama instance

As mentioned, there are multiple factors that impact Jama's performance within your organization [ii].  We have broken these factors down into the following categories.

1.    Dataset Profile and Size:  This includes the number of projects, requirements, comments, relationships, attachments, etc. that are present in a given implementation.

2.    Usage Patterns and Workflows:  This refers to the actions and activities performed by the people using the system at any given time.

3.    Environment:  This refers to the Jama version and the specifications of the hardware hosting the Jama application and database.  This includes, but is not limited to, the network topology, server specifications, local file storage space, memory, operating system, etc.
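To make the Environment category easier to capture, the short sketch below records the host operating system, CPU count, memory, and free disk space so they can be filed alongside a performance baseline. This is a minimal illustrative example, not part of Jama or its tooling; it assumes Python with the third-party psutil package installed.

```python
# environment_inventory.py -- hypothetical helper, not part of Jama or its tooling.
# Records basic host specifications to file alongside a Jama performance baseline.
import json
import platform

import psutil  # third-party: pip install psutil


def collect_environment():
    """Return a snapshot of the host specifications most relevant to sizing."""
    disk = psutil.disk_usage("/")
    return {
        "os": f"{platform.system()} {platform.release()}",
        "cpu_cores": psutil.cpu_count(logical=True),
        "memory_gb": round(psutil.virtual_memory().total / 1024 ** 3, 1),
        "free_disk_gb": round(disk.free / 1024 ** 3, 1),
    }


if __name__ == "__main__":
    print(json.dumps(collect_environment(), indent=2))
```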

Jama 2015.2 Performance Results
Jama 2015.2 includes significant performance improvements that have a considerable impact on large-scale deployments. The sections below compare the results of Jama’s performance tests run against a large instance of Jama 2015.1, which serves as our baseline, with the results for Jama 2015.2. You can compare these results to your own implementation of Jama to better predict the improvement that may be expected from upgrading to Jama 2015.2.

Baseline Comparison:  The test plan consisted of a series of isolated test actions that established a baseline set of response times for Jama 2015.1, which could then be compared against the response times for the same actions in Jama 2015.2.  Other factors such as environment settings, dataset content, and the number of concurrent users were kept static across both test runs to ensure reliable and measurable results.
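To make the methodology concrete, here is a minimal sketch of the kind of harness that measures average response time for one isolated action under a fixed number of concurrent users. It is an illustrative example only, not the internal tooling used to produce the benchmarks in this article; the server URL, endpoint, and credentials are placeholders.

```python
# load_test_sketch.py -- illustrative only; not Jama's internal test harness.
# Measures average response time for one isolated action under a fixed number
# of concurrent simulated users.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party: pip install requests

BASE_URL = "https://jama.example.com/rest/latest"  # placeholder server URL
CONCURRENT_USERS = 60   # matches the concurrency used in the tests described here
REQUESTS_PER_USER = 10  # number of times each simulated user repeats the action


def timed_action(session):
    """Perform one isolated action (here, fetching a page of items) and time it."""
    start = time.perf_counter()
    response = session.get(f"{BASE_URL}/items", params={"maxResults": 20}, timeout=30)
    response.raise_for_status()
    return time.perf_counter() - start


def simulate_user(_user_index):
    """One simulated user repeating the action and collecting response times."""
    session = requests.Session()
    session.auth = ("username", "password")  # placeholder credentials
    return [timed_action(session) for _ in range(REQUESTS_PER_USER)]


if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        per_user_timings = list(pool.map(simulate_user, range(CONCURRENT_USERS)))
    timings = [t for user_timings in per_user_timings for t in user_timings]
    print(f"average response time: {statistics.mean(timings):.2f}s "
          f"({len(timings)} samples, {CONCURRENT_USERS} concurrent users)")
```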

The following chart represents a comparison of average response times between 2015.1 and 2015.2.  These tests were run with 60 concurrent users performing isolated actions using our representative large dataset and AWS (M3 2XL) test environment.

[Chart: average response time comparison, Jama 2015.1 vs. 2015.2]

The following chart represents the average response times observed while testing the 2015.2 version of Jama.  These tests were run with 60 concurrent users performing isolated actions using the representative large dataset and AWS (M3 2XL) test environment.

[Chart: average response times for common actions in Jama 2015.2]

Note:  The average response time for each of these common actions is within 2 seconds.

Performance Test Conclusions
·  While performing these isolated test actions in Jama 2015.2 against our large dataset and medium system environment, we generally saw average response times for commonly used functionality improve by more than 50% compared to Jama 2015.1.

·  As outlined in the summary tables above, the improvements made in 2015.2 provide significant benefits for the common user actions within Jama.  This includes searching for items, creating items, filtering items, and posting comments.

·  The performance gains achieved with the 2015.2 release are consistent with the performance optimization work done during the release.  These changes included improvements to permission calculations, caching, client-server chattiness, Hibernate listeners, Review Center optimizations, synchronous calls, and other general refactoring.

·  While running these tests we monitored performance KPIs for CPU utilization, memory utilization, throughput, and load averages, all of which remained within the target thresholds for optimal system health.
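For readers who want to watch the same kinds of KPIs during their own test runs, the sketch below samples CPU utilization, memory utilization, and load averages on the application server at a fixed interval. It is a minimal, hypothetical example using Python's psutil package, not the monitoring used in these tests; throughput figures would come from your load-test tool or application logs rather than from this script.

```python
# kpi_monitor.py -- minimal, hypothetical example; not the monitoring used in these tests.
# Samples CPU utilization, memory utilization, and load averages during a test window.
import os
import time

import psutil  # third-party: pip install psutil

SAMPLE_INTERVAL_SECONDS = 5
DURATION_SECONDS = 300  # length of the test window to observe


def sample():
    """Take one reading of host-level KPIs."""
    load_1m, load_5m, load_15m = os.getloadavg()  # Unix-style load averages
    return {
        "cpu_percent": psutil.cpu_percent(interval=None),
        "memory_percent": psutil.virtual_memory().percent,
        "load_1m": load_1m,
        "load_5m": load_5m,
        "load_15m": load_15m,
    }


if __name__ == "__main__":
    end = time.time() + DURATION_SECONDS
    while time.time() < end:
        r = sample()
        print(f"cpu={r['cpu_percent']:.0f}%  mem={r['memory_percent']:.0f}%  "
              f"load={r['load_1m']:.2f}/{r['load_5m']:.2f}/{r['load_15m']:.2f}")
        time.sleep(SAMPLE_INTERVAL_SECONDS)
```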

In Conclusion
When testing the most common user actions in Jama 2015.2, we saw significant gains in average response times compared to Jama 2015.1. Given how configurable Jama is and how diverse our customers are, there is no definitive way to forecast exactly how much or how little improvement an individual customer will experience; however, the results of our performance testing indicate that most environments will likely see performance gains.

Hopefully, this document has provided insight into the performance improvements that can be expected with the 2015.2 release and some of the factors behind them.  It should be noted that this is just a small subset of Jama’s performance testing effort, and overall performance will continue to be a key focus area for Jama. We will continue to expand our capabilities to test our products and report on more dimensions of performance in future releases.

#installation

Comments

01-14-2016 20:38

You should be able to do so! John, our host, is a technical consultant who has been working with the REST API in-depth for the past few months. If you come up with any specific questions, you can leave them on the post: https://community.jamasoftware.com/jama/topics/ask-jama-rest-api

01-14-2016 19:29

A little.  Hopefully I will learn more about the REST API during the webinar on the 21st.

01-14-2016 19:19

Hi Eileen! I checked in with our QA team to further understand how these metrics were derived and why it seems we're missing stats on the review center. According to Jeremy these benchmarks were the results of Swarm testing, which utilizes the REST API. However, the Review Center doesn't use the REST API, so it wasn't possible to do these sorts of automated tests. Does this help?

01-06-2016 17:55

Eileen, I'm looking into the question about response times in review center. Generally speaking, though, you'll see faster times when you create a new review. Revising a review pulls more information from the system, such as review comments, so it will be a longer process.

01-06-2016 14:31

I notice that the list of items tested does not include anything in the Review Center.  Do you have any information on improvements in response times when using Review center?  For example, is it better to revise an existing review or create a new one, with respect to response times?