Test Case Status evaluation

  • 1.  Test Case Status evaluation

    Posted 07-09-2019 05:00
    Hi Chloe, (I sent this on 17-Jun directly to you, but it seems to have been lost...)

    Recently I read a sentence in an answer from you, specifically in "Creating relationships between Requirements and Test Runs".

    This text said: "In this scenario when all the test plans are utilizing the same test case, and everyone is doing test runs, thus passing and failing those test runs, the outcome is: Test Case Status will reflect the latest status of whatever test run in whatever test plan was last ran (pass or fail)."

    Maybe I am taking those words out of context, but this question has affected us greatly since we began using Jama, so I would just like to clarify it with you.
    Just in case something has changed, in which case we will be very happy.

    When we started with Jama, I sadly discovered that it evaluates the executions of different test runs on different non-archived test plans as 'the worst'.
    Unfortunately, we work with and expected 'the last' as the only criterion.
    This Jama behavior creates a lot of problems for us, because we cannot trust the Test Case Status in our reports, traceability matrices, etc. There is a lot of extra, manual rework to determine the actual status.
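
    To make the difference between the two criteria concrete, here is a rough sketch (this is not Jama's implementation, just how I understand the two evaluation rules, with made-up run data):

        # Illustration only: two ways a Test Case Status could be derived
        # from its runs across non-archived test plans (made-up data).
        from datetime import datetime

        # (test plan, execution date, result) for the same test case
        runs = [
            ("Plan A", datetime(2019, 6, 1), "FAILED"),
            ("Plan B", datetime(2019, 6, 20), "PASSED"),
        ]

        SEVERITY = {"PASSED": 0, "BLOCKED": 1, "FAILED": 2}

        worst = max(runs, key=lambda r: SEVERITY[r[2]])[2]  # 'the worst' -> FAILED (what we observe)
        last = max(runs, key=lambda r: r[1])[2]             # 'the last'  -> PASSED (what we expected)

        print(worst, last)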

    So, has this evaluation criterion been modified recently?
    Please say YES.. :-))

    Thank you!
    Manel Moreno

    ------------------------------
    Manel Moreno
    SW Test Manager
    Systelab Technologies/Werfen Clinical SW
    ------------------------------


  • 2.  RE: Test Case Status evaluation

    Posted 07-12-2019 15:15
    Edited by Chloe Elliott 07-12-2019 15:16
    Manel:

    Hello, so sorry it has taken me this long to get back to you! There is a way to keep the Test Case Status distinctive and unique. You are right, it does take extra manual rework in your workflow to achieve this. Mostly, you need to use the "Reuse" function to duplicate the test case in the Explorer tree. When you are done running the test, record that in the Test Run and directly relate the run to your requirement. Then Reuse the Test Case (when you reuse, you have the option to "relate" the original item to the new one, which you can use for added traceability), and finally lock down the original Test Case, either by not using that particular one again or by placing a user lock on the item.
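
    If you prefer to script the "relate the run to your requirement" step rather than doing it in the UI, a call along these lines against the REST API should do it. This is only a rough sketch: the base URL, credentials, item IDs, and relationshipType ID are placeholders, and the payload fields and relationship direction should be checked against your instance's API documentation (the /relationshiptypes endpoint lists the IDs configured for your project).

        # Rough sketch: create a relationship between a Test Run and a
        # Requirement via the Jama Connect REST API. All values below are
        # placeholders -- adjust them to your instance.
        import requests

        BASE_URL = "https://yourcompany.jamacloud.com/rest/v1"  # placeholder instance URL
        AUTH = ("api.user", "api.password")                      # or OAuth credentials

        payload = {
            "fromItem": 12345,      # Test Run item ID (placeholder)
            "toItem": 67890,        # Requirement item ID (placeholder)
            "relationshipType": 6,  # ID from /relationshiptypes; direction depends on your rules
        }

        resp = requests.post(f"{BASE_URL}/relationships", json=payload, auth=AUTH)
        resp.raise_for_status()
        print(resp.status_code)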

    As for your question about modifying these criteria, Test Center is on our future roadmap, and I know your input here in the Community will help inform those efforts.

    I hope this helps,


    ------------------------------
    Chloe Elliott
    Jama Software
    Portland OR
    ------------------------------