Test Plan Setup – 1 vs Many For Multi-Release Product
I'd like to ask for some additional guidance related to my project, specifically around the setup and use of Test Plans, much like the original question.
My project consists of a main software application under development that will go through iterative releases, with capability added throughout the development cycle until the product is complete and fully released.
Each release will add capability and may be delivered to the customer as a release milestone toward full acceptance.
Additionally, some items have specific locations where they need to be tested, such as particular labs. As an example, a single release may have X verification procedures that need to occur in LAB 1 and X verification procedures that need to be conducted in LAB 2 (no procedure requires the use of more than one LAB).
Currently I have built Verification procedures that cover all requirements. However, each iterative release will only have a given number of requirements completed and able to be verified.
Example:
Release 1 – X requirements satisfied and able to be verification tested for acceptance
Release 2 – X additional requirements satisfied in this release and able to be verification tested for acceptance
Release 3…..
Final Release – Remaining requirements satisfied and able to be verification tested for acceptance
My points of concern, for clarity:
- What is the best Test Plan approach: a single Test Plan for the whole final release, or a "multi-Test-Plan" approach where each test plan covers only a single release's requirements and verifications?
- Separately, if a multi-plan approach is best, recommendations on test plan allocation based on release, LAB testing location, or both, i.e., Test Plan 1 - Release X - LAB 1, Test Plan 2 - Release X - LAB 2, etc.?
- I will need to provide coverage and traceability to the customer for each iterative release delivered to them. Is one approach better than the other?
- I am taking into consideration the export reports, and looking to simplify the output for requirement traceability and coverage of each release. I can manually combine multiple reports, but if there are easier ways to let Jama do that for me via proper setup of test plans and configured custom export templates, all the better (one scripted alternative is sketched after this list).
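For the export side, one alternative to manually combining reports is a small script against the Jama Connect REST API. Below is a minimal sketch that merges test-run statuses across all of the test plans that make up one release; the instance URL, credentials, plan IDs, and the testRunStatus field name are assumptions to verify against your own instance's /rest/latest documentation.

import requests

BASE = "https://yourcompany.jamacloud.com/rest/latest"  # hypothetical instance URL
AUTH = ("api_user", "api_password")                     # placeholder credentials

def get_all(resource, **params):
    # Page through a Jama REST collection (results arrive in pages of up to 50).
    start, out = 0, []
    while True:
        params.update({"startAt": start, "maxResults": 50})
        resp = requests.get(f"{BASE}/{resource}", params=params, auth=AUTH)
        resp.raise_for_status()
        data = resp.json()["data"]
        out.extend(data)
        if len(data) < 50:
            return out
        start += 50

def release_summary(plan_ids):
    # Merge run statuses across every test plan that belongs to one release.
    counts = {}
    for plan_id in plan_ids:
        for cycle in get_all(f"testplans/{plan_id}/testcycles"):
            for run in get_all("testruns", testCycle=cycle["id"]):
                status = run["fields"].get("testRunStatus", "NOT_RUN")
                counts[status] = counts.get(status, 0) + 1
    return counts

# e.g. Release 1 = its LAB 1 plan plus its LAB 2 plan (IDs are placeholders)
print(release_summary([1101, 1102]))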
------------------------------
Steve Sutton
L3Harris - ForceX
FL
------------------------------
Original Message:
Sent: 10-27-2023 08:27
From: Geoffrey Clemm
Subject: Managing variants with shared requirements / test runs
The approach I usually recommend for sharing test results between multiple variants of a product is to define the verification of a given product variant to be the results from a specified set of test plans. If the results of a particular test plan apply to more than one variant of a product, then that plan would be included in the test plan list of each of those product variants. If a particular set of test cases can be reused in multiple product variants, but they each need to rerun the tests (since the test results do not apply to all the variants), then you would create a separate test plan for each variant with those test cases, so that each variant would have its own (distinct) set of test results.
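As a toy illustration of that bookkeeping (the plan names, test cases, and statuses below are invented for the example, not pulled from Jama): each variant lists the plans whose results verify it, a shared plan appears in more than one list, and test cases that must be rerun live in per-variant plans.

# Each variant's verification = the results of a specified set of test plans.
variant_plans = {
    "Product A": ["TP-Core-Shared", "TP-ProductA-Rerun"],
    "Product B": ["TP-Core-Shared", "TP-ProductB-Rerun"],  # shared plan reused
}

plan_results = {  # plan -> latest run status per test case
    "TP-Core-Shared":    {"TC-1": "PASSED", "TC-2": "PASSED"},
    "TP-ProductA-Rerun": {"TC-3": "PASSED"},
    "TP-ProductB-Rerun": {"TC-3": "FAILED"},  # same test case, variant-specific run
}

def variant_verified(variant):
    # A variant is verified when every run in every one of its listed plans passed.
    return all(status == "PASSED"
               for plan in variant_plans[variant]
               for status in plan_results[plan].values())

print(variant_verified("Product A"))  # True
print(variant_verified("Product B"))  # False: its own rerun of TC-3 failed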
------------------------------
Geoffrey Clemm
Jama Software
Original Message:
Sent: 10-26-2023 23:41
From: Dimitrios Pananakis
Subject: Managing variants with shared requirements / test runs
Hello Saby,
Many thanks for sharing your approach. If I understand it correctly, you have added extra custom fields to the Test Case item type which hold the test run execution details (ID of the latest Test Run, Status of the latest Test Run, Test Plan of the latest Test Run, etc.).
So after you perform test runs on Project A, you are:
- updating those custom field Test Cases on Project A
- syncing those Test Case items back to the Parent(Common) Project
- syncing those changes from the Parent(Common) Project to Project B
Are the above assumptions correct?
Warm regards
Dimitrios
Original Message:
Sent: 10-26-2023 03:09
From: Szabolcs Agai
Subject: Managing variants with shared requirements / test runs
Hello Dimitrios,
Thank you for the feedback and for asking.
There are many ways to do that; one is to copy the Test Run contents over to the Test Case item and carry that forward to newer product development projects. New product projects can then decide whether or not they rely on previous test results, and what process they want to build around reusing test run results.
Either an XLS roundtrip (a bit tedious) or the Jama Connect Interchange XLS function can take us there.
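As a rough sketch of what that roundtrip step could look like with pandas (this is not Saby's attached example, and every file and column name here is hypothetical and must be matched to your own XLS export template):

import pandas as pd

# Hypothetical export of Test Runs from Project A.
runs = pd.read_excel("project_a_test_runs.xlsx")
runs["Executed"] = pd.to_datetime(runs["Executed"])

# Keep only the latest run per test case.
latest = runs.sort_values("Executed").groupby("Test Case ID").tail(1)

# Shape the data into the custom fields carried on the Test Case items.
carry = latest.rename(columns={
    "Run ID": "Latest Test Run",
    "Status": "Latest Run Status",
    "Test Plan": "Latest Run Plan",
})[["Test Case ID", "Latest Test Run", "Latest Run Status", "Latest Run Plan"]]

# Write a sheet suitable for importing back onto the Test Case items.
carry.to_excel("test_case_run_fields.xlsx", index=False)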
Let me share a simple example of this.
Have a great day
Saby

------------------------------
Szabolcs Agai
Original Message:
Sent: 10-26-2023 01:55
From: Dimitrios Pananakis
Subject: Managing variants with shared requirements / test runs
Hello Saby,
Thank you for sharing this. I am struggling to see how this implementation can cover the requirement to have common test runs (if those runs are executed on Project A, they should be available on Project B, or vice versa).
Best regards
Dimitrios
Original Message:
Sent: 10-25-2023 07:40
From: Szabolcs Agai
Subject: Managing variants with shared requirements / test runs
Good afternoon Jacob,
Great topic, thank you for asking.
Your use scenario can be addressed with the Reuse functionality.
Your option c. could be a good starting point for working out a solution.
Let me share a diagram that outlines a potential configuration approach to your problem.

Have a great day
Saby
------------------------------
Szabolcs Agai
Original Message:
Sent: 04-11-2023 02:58
From: Jacob Brodal
Subject: Managing variants with shared requirements / test runs
Hi Jama experts,
Hope you can offer some best practice for the following scenario OR guide me to existing threads with the same scenario.
We currently have a system which consists of a set of modules. This makes up "Product A", with test runs at both module and system level.
We plan to create a new variant of the product (same product family), "Product B", with added capabilities, but reusing a lot of the existing documentation (requirements, test protocols, even test runs). Product B will also be subject to external submissions, so anything created in Jama should be distinguishable and easily exported (from baseline).
Here is the question:
How should Jama be utilized to best distinguish between Product A and Product B?
a. By tagging requirements that are unique to each variant and creating separate baselines for each variant
b. By using the categories option to indicate which variant a given item relates to
c. By using re-use and sync to separate the variants, thus adding distinct items to each "jama project"
d. By distinguishing within test plans, e.g., separate test plans for each variant
e. Something else?
It is important to note that existing test runs from Product A could also be applicable to Product B, i.e., still valid without re-run.
Below I have outlined how it would look having Product A and Product B within the same Jama project:

------------------------------
Jacob Brodal
Systems Engineer
3Shape
------------------------------