Wednesday, May 15, 2013

My 3 Test Management Dream Features

Having done research on various test management systems for a previous employer, and now having to recommend one to my current employer, I've been thinking about what I'd like to see in a Test Management System.  I figured I'd write a quick post on 3 of the more esoteric features I'd like to see that no tool on the market has implemented yet, and how I imagine they would work.  I think these features would be fairly easy to implement and incorporate into an existing TMS tool, given the motivation.


Risk-based scoring

Often we use % tested as a decision criterion, which is not very accurate because it doesn't factor in total risk.

This one is simple, really.  For any test plan generated, based on the test cases covered and the risks introduced, give me one number expressing how the risk would manifest itself in the real world.  Here's how I imagine this working:

  1. Test cases are organized by the features they test.
  2. Each feature is prioritized by its business value (how much marketing/sales think they would have to lower the price if this feature were missing).
  3. Each software module (package, assuming your devs modularize their code well) is given a diff score based on how many lines of code have changed.
  4. For every feature, multiply items 2 and 3, then add them all up to get a risk profile.  Subtract out the score for the features whose test cases you've finished running, then divide the two to get a percentage.
  5. Multiply this percentage by the total business value, and what you have is roughly how many dollars you have at risk if you release with the current set of untested features.
I think this is a killer feature.  It would give the business a real-world correlation, instead of some % of test cases covered.  In theory, testing should stop when further testing costs more than the profit from the additional quality.  Guess what: with this level of risk-based scoring, you have the numbers to make that comparison.
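
To make steps 2 through 5 concrete, here's a minimal sketch of the arithmetic in Python.  The feature names, business values, and diff scores are made up purely for illustration; a real TMS would pull them from its own database.

    # Each feature: (business value in dollars, diff score, all tests run yet?)
    features = {
        "checkout":  (50_000, 120, True),
        "search":    (30_000,  45, False),
        "reporting": (10_000, 300, False),
    }

    # Step 4: risk profile = sum of business value * diff score across features,
    # then compare what has been tested against the whole.
    total_risk  = sum(value * diff for value, diff, _ in features.values())
    tested_risk = sum(value * diff for value, diff, done in features.values() if done)
    untested_fraction = (total_risk - tested_risk) / total_risk

    # Step 5: scale the untested fraction back into dollars.
    total_value = sum(value for value, _, _ in features.values())
    dollars_at_risk = untested_fraction * total_value

    print(f"{untested_fraction:.0%} of the risk profile is still untested")
    print(f"Roughly ${dollars_at_risk:,.0f} of business value at risk")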

Automatic test case weighting/prioritization

One of the annoying things about being a test lead is coming up with a test plan, getting sign-off from developers who often think it's a waste of time, and then having to write paragraphs to management justifying why X level of test case coverage is necessary.

Using the above risk-based score as the basis, wouldn't it be nice if a test plan was automatically put together from whichever test cases reduce risk the most?  This would use a combination of developer check-ins and bugs filed to keep features weighted accurately.

Here's how the source control integration would work:

  1. The project inside the test management tool can be configured to do daily pulls from Version control.
  2. It will count the number of lines of code changed, and the number of interface changes introduced.
  3. Then for each module that has changed, it'll adjust the component/package diff score (#3 above) automatically.
  4. Features will then be automatically re-weighted and test cases testing those features will be moved to the top of the test plan.
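
Here's a rough sketch of what that nightly job might look like, assuming Git as the version control system and a hypothetical module_for_path() that maps a changed file to its module; both are illustrative assumptions, not part of any existing tool.

    import subprocess
    from collections import defaultdict

    def module_for_path(path):
        # Assumption for this sketch: the top-level directory is the module name.
        return path.split("/")[0]

    def update_diff_scores(diff_scores, since="HEAD~1"):
        # "git diff --numstat" prints "added<TAB>deleted<TAB>path" per file.
        output = subprocess.check_output(
            ["git", "diff", "--numstat", since, "HEAD"], text=True)
        for line in output.splitlines():
            added, deleted, path = line.split("\t")
            if added == "-":          # binary files report "-" for line counts
                continue
            diff_scores[module_for_path(path)] += int(added) + int(deleted)
        return diff_scores

    # The updated diff scores feed straight back into the risk formula above,
    # so features touching the changed modules float to the top of the plan.
    scores = update_diff_scores(defaultdict(int))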

Here's how the bug integration would work:

  1. Each time a bug is filed, the bug will be tagged/labeled with a feature and given a business value (some theoretical number for how much in lost sales or added support cost this bug caused).
  2. Every week, an automatic task will crawl through new bug reports and compare the current business value assigned to the feature against the business value reported by the bug.
  3. If the business value difference between the two exceeds a certain margin, the business value of the Feature will be automatically adjusted up or down.
With both business value and diff score automatically adjusted, test cases can easily be ordered by the features they're associated with.  This means no more time wasted in risk assessment meetings and test planning.
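
As a toy illustration of that weekly pass, here's one possible adjustment rule.  The bug records, the 20% margin, and the averaging step are all invented for the sketch; a real TMS would pull bugs from its tracker integration and let you tune the rule.

    feature_value = {"checkout": 50_000, "search": 30_000}

    new_bugs = [
        {"feature": "search",   "business_value": 45_000},
        {"feature": "checkout", "business_value": 45_000},
    ]

    MARGIN = 0.20  # only adjust when the reported value differs by more than 20%

    for bug in new_bugs:
        feature  = bug["feature"]
        current  = feature_value[feature]
        reported = bug["business_value"]
        if abs(reported - current) / current > MARGIN:
            # Nudge the feature's value toward what the bug actually cost.
            feature_value[feature] = (current + reported) / 2

    print(feature_value)  # {'checkout': 50000, 'search': 37500.0}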

Ability to suggest tests that might be needed

When working on a new feature, you might wonder what types of tests might be needed.  Or you might be in such a rush that some things slip your mind.  Wouldn't it be nice if your TCM could suggest test cases for you?

Here's how I think it would work:
  1. Tests are labeled with various tags/labels like Database Test, Performance Test, Security Test, Usability Test, etc.
  2. As you modify or create code in source control, the TCM will look at your past commits and examine which packages or adjacent packages were altered.
  3. It will then search through the TCM database, look at which features are affected by those same modules, and examine the types of tests involved (based on how they're labeled).
  4. The TCM will then suggest labels and ask you to consider creating test cases for your new feature with those labels.
Imagine, if you will, that you're writing test cases for a new feature which includes a database change.  The TCM tool will notice that you have another feature that changed a different file in the Database module, and that you have a SQL Injection test as one of your test cases for that feature.  The TCM will give you a friendly suggestion: "I've noticed a 'Database Module' change; you may want to consider some 'SQL Injection Tests'."
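
A toy sketch of that suggestion step is below.  The module-to-label history would really be mined from past commits and existing test cases; here it's a hard-coded dictionary just to show the lookup.

    label_history = {
        "database": {"SQL Injection Test", "Database Test", "Performance Test"},
        "web_ui":   {"Usability Test", "Security Test"},
    }

    def suggest_labels(changed_modules, existing_labels):
        # Gather every label ever used against the changed modules, then drop
        # the ones the new feature already has test cases for.
        suggestions = set()
        for module in changed_modules:
            suggestions |= label_history.get(module, set())
        return suggestions - set(existing_labels)

    # You touched the database module but only wrote a Database Test so far:
    print(sorted(suggest_labels(["database"], ["Database Test"])))
    # -> ['Performance Test', 'SQL Injection Test']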


Please comment and let me know what your 3 dream features are.
