Risk-based scoring
Often we use % tested as a decision criterion, which isn't very accurate because it doesn't factor in total risk. The idea is simple really: for any test plan generated, based on the test cases covered and the risks introduced, give me one number expressing how the risk would manifest itself in the real world. Here's how I imagine this working:
- Test cases are organized by the features they test.
- Each feature is prioritized by its business value (how much marketing/sales think they would have to lower the price if this feature were missing).
- Each software module (package, assuming your devs modularize their code well) will be given a diff score, based on how many lines of code have changed.
- For each feature, multiply items 2 and 3 (its business value times the diff scores of the modules it touches), then add it all up to get a total risk profile. Then subtract out the scores of the features whose test cases you've finished running. Divide the two to get the percentage of risk still outstanding.
- Multiply this final risk percentage by the total business value, and what you have is how many dollars you have at risk for releasing with the current state of untested features. (See the sketch after this list.)
I think this is a killer feature. It'll give the business a good real-world correlation, instead of some % of test cases covered. In theory, testing should be finished when the cost of testing exceeds the profit of the additional quality. Guess what: with this level of risk-based scoring, you have the numbers to make that comparison.
Automatic test case weighting/prioritization
One of the annoying things about being a test lead is coming up with a test plan, getting sign-off from developers who often think it's a waste of time, then having to write paragraphs justifying to management why X level of test case coverage is necessary. Using the above risk-based score as the basis, wouldn't it be nice if a test plan were automatically put together based on which test cases reduce risk the most? This would use a combination of developer check-ins and bugs filed to keep features weighted accurately.
Here's how the source control integration will work:
- The project inside the test management tool can be configured to do daily pulls from version control.
- It will count the number of lines of code changed and the number of interface changes introduced.
- Then for each module that has changed, it'll adjust the component/package diff score (#3 above) automatically.
- Features will then be automatically re-weighted, and the test cases testing those features will be moved to the top of the test plan. (A sketch of the daily pull follows this list.)
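As a rough sketch, the daily pull could shell out to git and bucket changed lines by top-level directory; the `last-release` tag and the directory-per-module layout are assumptions, and counting interface changes is left out here:

```python
# A minimal sketch of the daily source-control pull, assuming a git
# repository laid out as one top-level directory per module. The
# 'last-release' tag is hypothetical.
import subprocess
from collections import defaultdict

def diff_scores_since(ref="last-release"):
    """Return {module: lines changed} using `git diff --numstat`."""
    out = subprocess.run(
        ["git", "diff", "--numstat", f"{ref}..HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    scores = defaultdict(int)
    for line in out.splitlines():
        added, deleted, path = line.split("\t")
        if added == "-":                # binary files report "-"; skip them
            continue
        module = path.split("/")[0]     # top-level directory = module
        scores[module] += int(added) + int(deleted)
    return dict(scores)

# Each changed module's score feeds back into item 3 above, and the
# features touching those modules get re-sorted to the top of the plan.
```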
Here's how the bug integration will work:
- Each time a bug is filed, it will be tagged/labeled with a feature and given a business value (some theoretical number for how much in lost sales or added support cost this bug caused).
- Every week, an automatic task will crawl through new bug reports and compare the current business value assigned to the feature against the business value reported by the bug.
- If the difference between the two exceeds a certain margin, the business value of the feature will be automatically adjusted up or down. (A sketch follows this list.)
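A minimal sketch of that weekly task; the Bug/Feature record shapes, the 20% margin, and the half-step adjustment are all hypothetical choices:

```python
# A minimal sketch of the weekly bug-driven re-weighting. The 20% margin,
# the 0.5 damping factor, and the record shapes are all hypothetical.
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    business_value: float  # current dollar estimate

@dataclass
class Bug:
    feature: str
    business_value: float  # lost sales / support cost reported on the bug

MARGIN = 0.20  # only react when the reported value differs by more than 20%

def reweight(features: dict[str, Feature], new_bugs: list[Bug]) -> None:
    for bug in new_bugs:
        feature = features[bug.feature]
        gap = bug.business_value - feature.business_value
        if abs(gap) / feature.business_value > MARGIN:
            # Move halfway toward the reported value rather than jumping,
            # so a single noisy bug report can't swing a feature's weight.
            feature.business_value += 0.5 * gap

features = {"checkout": Feature("checkout", 50_000)}
reweight(features, [Bug("checkout", 80_000)])
print(features["checkout"].business_value)  # 65000.0
```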
With both business value and diff score automatically adjusted, test cases can easily be ordered by the features they're associated with. This means no more time wasted in risk assessment meetings and test planning.
Ability to suggest tests that might be needed
When working on a new feature, you might wonder what types of tests might be needed. Or you might be in such a huge rush that some things slip your mind. Wouldn't it be nice if your TCM could suggest test cases for you?
Here's how I think it would work:
- Tests are labeled with various tags like Database Test, Performance Test, Security Test, Usability Test, etc.
- As you modify or create code in source control, the TCM will look at your past commits and examine which packages or adjacent packages were altered.
- It will then search through the TCM database, look at which features are affected by those same modules, and examine the types of tests attached to them (based on how they're labeled).
- The TCM will then suggest those labels and ask you to consider creating matching test cases for your new feature. (A sketch of the lookup follows this list.)
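Here's a rough sketch of that lookup; the index shape and every feature, module, and label name are hypothetical:

```python
# A minimal sketch of the suggestion lookup. The index shape and all
# feature/module/label names are hypothetical.

def suggest_labels(changed_modules: set[str], tcm_index: dict) -> set[str]:
    """Collect test labels used by features that touch the same modules."""
    suggestions = set()
    for feature, (modules, labels) in tcm_index.items():
        if changed_modules & modules:   # this feature touches what you changed
            suggestions |= labels
    return suggestions

# feature -> (modules it touches, test labels its test cases carry)
tcm_index = {
    "user-search": ({"database", "api"}, {"SQL Injection Test", "Performance Test"}),
    "login":       ({"auth"},            {"Security Test"}),
}

print(suggest_labels({"database"}, tcm_index))
# {'SQL Injection Test', 'Performance Test'}
```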
Imagine, if you will, that you're writing test cases for a new feature which includes a database change. The TCM tool will notice that you have another feature that changed a different file in the Database module, and that you have an SQL Injection test as one of the test cases for that feature. The TCM will give you a friendly suggestion: "I've noticed that there is a 'Database Module' change; you may want to consider some 'SQL Injection Tests'."