Are you doing functional testing and facing these challenges?

  • Does your product have to get to market faster?

  • Is its complexity increasing in every release?

  • Are your release cycles getting even shorter?

  • Do you need to reduce testing costs?  

If you are facing any, or all, of these challenges, then you are reading the right article!

At conferences, in forums, and in discussions we hear a lot of talk about new development practices and methodologies that claim to address these issues. Yet our testing challenges remain.

Agile testing is good, but not enough, since it does not solve the problem of automating testing so it can be performed rapidly enough. So how can you get faster time to market with a product that keeps growing more complex, under constant pressure to reduce cost? The solution is unlikely to be sticking with 20+ year old testing practices.

In fact, automating tests with record and playback is still manual testing. Reducing the test automation backlog is good, but the reality is that you will never be able to code all your regression test scripts; you have neither the time nor the budget to accomplish this. So this isn't the way out.

If we start from this point, then where is the problem? I commonly see people confusing the effectiveness or quality of testing with the number of tests. How many tests do you have? So what? Are you sure they cover every corner case? Based on your tests, do you know the actual coverage of the functionality you are supposed to test? And if, as in many projects, you have hundreds or thousands of tests, do you know what each of those tests actually does? Do you have time to execute them all? I think you are getting my point: what matters is not how many tests you have, but their quality and comprehensiveness.

When you create tests without a model of the system to be tested, the only ways to judge their quality are the number of tests and, perhaps, requirement coverage. Testing with models brings a new dimension to testing, and that new dimension is a new way to measure quality: functional coverage. Functional coverage gives you more information about what you are actually testing. You do not model tests or test scenarios; instead, you capture the functionality you want to test in a model. It is always better to describe a landscape by showing a picture than by describing it in words. By modeling your functionality graphically, you will better understand the functionality to be tested, be able to automatically cover all aspects of it, and therefore get better tests. Let the test design tool describe this landscape for you! A tool that operates this way will test functions that you did not see the first time, or at all, because of the complexity of the application, especially the negative paths.
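To make "functional coverage" concrete, here is a minimal sketch. It assumes a hypothetical login feature modeled as a set of state transitions; a test is a sequence of transitions, and functional coverage is the fraction of modeled transitions a test suite exercises. The model, states, and actions are all illustrative, not from any particular tool.

```python
# Hypothetical login feature modeled as transitions: (state, action, next_state).
MODEL = {
    ("logged_out", "enter_valid_credentials", "logged_in"),
    ("logged_out", "enter_bad_credentials", "error_shown"),
    ("error_shown", "enter_valid_credentials", "logged_in"),
    ("logged_in", "log_out", "logged_out"),
}

def functional_coverage(test_suite):
    """Fraction of modeled transitions exercised by the suite."""
    exercised = {step for test in test_suite for step in test}
    return len(exercised & MODEL) / len(MODEL)

# One happy-path test: log in, then log out.
suite = [
    [("logged_out", "enter_valid_credentials", "logged_in"),
     ("logged_in", "log_out", "logged_out")],
]
print(f"{functional_coverage(suite):.0%}")  # 2 of 4 transitions -> 50%
```

Counting tests alone would report "one test, passing"; measuring against the model immediately shows that half the behavior, including the negative path, is untested.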

Another big time saver and quality improvement: a test design tool that generates test cases directly from a model of the expected correct system operation will also generate the test oracles, i.e., the expected results when you execute the test cases. In the end, you will have more time to test, you will test your application more deeply, and you will KNOW what you have tested and why. As a tester, that confidence is a life saver, or at least a sleep saver. But please be aware, and I will repeat it: I am not talking about modeling tests, but about modeling the functionality to be tested. (If you want to know why, please see this other article: "How can model based testing mbt generate tests that i cannot think of".)
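The point about oracles can be sketched in a few lines. This assumes a hypothetical vending-machine model; walking the model enumerates action paths, and each path's expected end state, the oracle, falls out of the model for free rather than being written by hand.

```python
# Hypothetical vending-machine model: state -> {action: next_state}.
MODEL = {
    "idle":       {"insert_coin": "paid"},
    "paid":       {"select_item": "dispensing", "refund": "idle"},
    "dispensing": {"take_item": "idle"},
}

def generate_tests(start, depth):
    """Enumerate all action paths up to `depth` steps, each paired with
    the expected final state -- the test oracle -- taken from the model."""
    tests = []
    def walk(state, path):
        if path:
            tests.append((path, state))  # the path plus its oracle
        if len(path) < depth:
            for action, nxt in MODEL.get(state, {}).items():
                walk(nxt, path + [action])
    walk(start, [])
    return tests

for actions, expected in generate_tests("idle", 3):
    print(" -> ".join(actions), "| expect:", expected)
```

Every generated case carries its own expected result, so a change in the model changes both the tests and their oracles in one step.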

In addition, models will help you stay on track through every release of your application. When something changes, you can quickly update your model and then update your tests and test scripts automatically, simply by generating tests again. Between application releases you can also evaluate your old test suite: what percentage of your functionality does it still cover? You will be able to generate new tests to increase that coverage, and to see which of the previous test cases have become invalid due to the change in system operation captured in the model. You manage complexity every time you use and update your model. In other words, I recommend you keep your knowledge in a graphical model. Your tool should generate test cases from your modeled functionality and automatically update your test scripts and documentation, giving you an impact analysis. You will save a lot of time, and you will certainly reduce cost and time to market.
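The impact-analysis idea can be sketched too. This hypothetical example reuses a transition-set model of a login feature: after a behavior change, any old test whose steps no longer exist in the updated model is flagged invalid, and the rest are kept, so only the delta needs regenerating.

```python
# Old regression suite: each test is a list of (state, action, next_state) steps.
OLD_SUITE = [
    [("logged_out", "enter_valid_credentials", "logged_in")],
    [("logged_out", "enter_bad_credentials", "error_shown")],
]

# Updated model (hypothetical change): bad credentials now lock the
# account instead of showing an error message.
NEW_MODEL = {
    ("logged_out", "enter_valid_credentials", "logged_in"),
    ("logged_out", "enter_bad_credentials", "account_locked"),
    ("logged_in", "log_out", "logged_out"),
}

def split_suite(suite, model):
    """Partition a suite into tests still valid under the model and tests
    invalidated by the change."""
    valid = [t for t in suite if all(step in model for step in t)]
    invalid = [t for t in suite if t not in valid]
    return valid, invalid

valid, invalid = split_suite(OLD_SUITE, NEW_MODEL)
print(len(valid), "still valid,", len(invalid), "invalidated by the change")
```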

Since the test design tool understands the full system operation to be tested, it can generate test cases with 100% coverage of the system's operation. This is hugely different from creating test cases or scenarios manually and then reaching 100% coverage of what you originally wrote. What did you miss? You won't know until a user trips over it after delivery. Hopefully it won't be a critical bug.

In conclusion, good test coverage does not mean hundreds or thousands of test cases, but a known, good set of test cases based on complete functional coverage. So why not demand 100% functional coverage every time?


Thank you for reading! Please comment!
