Most of you probably know, or have at least heard, that Model Based Testing (MBT) is an innovative technology in which tools help you create better tests through automated generation. With Conformiq the generation is automatic and also provides a direct path to automated test execution, but that is not the topic of this blog. What I want to discuss is a question I get in many customer engagements: “How can Model Based Testing generate tests that I [the tester] cannot think of?”

Usually, when testers start modeling, they first sketch test scenarios. In a second step, they refine these scenarios with some data in their model and generate tests. I find this very understandable. As testers, we have already specified so many tests in our careers that it seems natural that when we model, we model the tests.
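
To make this concrete, here is a minimal sketch, in plain Python rather than any MBT tool’s notation, of what “modeling the tests” amounts to. The scenario names and data are hypothetical and purely illustrative.

```python
# A minimal, hypothetical sketch of "modeling the tests":
# each scenario is written out by hand and only the data varies.

LOGIN_DATA = [
    ("alice", "correct-pw", "logged in"),
    ("alice", "wrong-pw", "rejected"),
]

TRANSFER_DATA = [
    (100, "accepted"),
    (-5, "rejected"),
]

def login_scenario(user, password, expected):
    # The steps are fixed in advance; only the data changes.
    return [
        "open the login page",
        f"enter credentials {user}/{password}",
        f"expect outcome: {expected}",
    ]

def transfer_scenario(amount, expected):
    return [
        "log in as a valid user",
        f"transfer amount {amount}",
        f"expect outcome: {expected}",
    ]

def generate_suite():
    # "Generation" here is just data substitution into fixed flows.
    tests = [login_scenario(*row) for row in LOGIN_DATA]
    tests += [transfer_scenario(*row) for row in TRANSFER_DATA]
    return tests

if __name__ == "__main__":
    for i, test in enumerate(generate_suite(), 1):
        print(f"Test {i}:", " -> ".join(test))
```

Every generated test is one of the hand-written flows with different data; a generator built this way can never produce a flow the tester did not already write down.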

In general, I have no problem with this approach. There is no single right way to create a model for testing; there are many. But in this case it is also obvious that the approach will only generate test cases that you already know, because those are exactly what we modeled in the first place. In the end, you get a number of similar test cases with different data. If this result is acceptable to you, then it is fine with me, but then you need to accept that Model Based Testing will not generate tests that you did not think of. Following this approach you can still get requirement traceability, a direct path to test automation, and many more test cases. You get 100% coverage of your “model of test cases”. But what does 100% coverage mean when it comes to the actual functionality to be tested? 100% of your test model is exactly what percentage of your functionality? Maybe if you spend enough time thinking, analyzing, and modeling more and more tests you could reach full coverage of the functionality, but how much time would that take, and what would such a model look like? In my experience, such models end up big, messy, and difficult for others to understand and reuse. Besides, how can you be sure that you have fully covered the functionality to be tested? How sure are you that even on your bad days you will think of all the corner cases? Do you know which cases you have missed? Probably not, because the tests you missed are the tests you did not think of.

Next, let’s talk instead about modeling the actual functionality described by the application requirements. Let’s talk about automatic test design. At first glance this approach looks more complicated than modeling tests, since it requires us to change our thinking. After all, we are testers; we have been designing and writing tests for years, and because of this experience we also know the critical and difficult parts of the application better than anyone else. In automatic test design that experience is less decisive, because testers simply model the behavior to be tested, e.g., as specified in the requirement specifications, from the viewpoint “I am the application to be tested and this is how I interact with my environment” instead of “I am the tester and this is how I interact with the application to be tested”. In this approach we are not thinking in terms of tests but in terms of interactions and the expected decision making inside the application under test: “How should I [the application] react when I get this type of data on this interface?” This kind of questioning often reveals poor or unclear requirement documentation before we even have a model or have executed a single test.

Once all of our requirements are reflected in the modeled functionality, the automatic test design tool analyzes the model and its data to cover all aspects of the system behavior you [the tester] specified. Then, for the first time, you may find parts of the functionality or data that were previously not covered by any test case; these are exactly the “tests that we could not think of”, now covered and documented. A computer will not have a bad day. A computer will always “remember” to try every corner case of a given piece of functionality. A computer will cover the really complicated behavior, especially the negative paths, which are often too time-consuming to design manually as scenarios. In the end, a computer can trace in which test, and in which step, a requirement or corner case was covered. Traceability is generated automatically, with a mouse click, instead of requiring extra analysis of every test in the test set. In the same way, always-current documentation is generated automatically.
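
To illustrate the difference, here is a minimal sketch, assuming a toy login feature with a three-strikes lockout rule. It is not Conformiq’s engine or notation; it only shows the idea of describing how the application reacts to inputs and letting a simple explorer enumerate the tests, including the negative paths.

```python
# A minimal sketch of "modeling the functionality": we describe how the
# application itself reacts, then derive tests by exploring the model.

from collections import deque

# State: (status, failed_attempts). Inputs: "good_pw", "bad_pw".
def step(state, event):
    status, failures = state
    if status == "locked":
        return ("locked", failures), "reject: account locked"
    if event == "good_pw":
        return ("logged_in", 0), "accept: session opened"
    failures += 1
    if failures >= 3:
        return ("locked", failures), "reject and lock account"
    return ("idle", failures), "reject: wrong password"

def generate_tests(max_depth=4):
    """Enumerate every input sequence up to max_depth and record the
    expected reaction at each step; each sequence becomes a test."""
    tests = []
    queue = deque([(("idle", 0), [])])
    while queue:
        state, trace = queue.popleft()
        if trace:
            tests.append(trace)
        if len(trace) >= max_depth or state[0] == "logged_in":
            continue
        for event in ("good_pw", "bad_pw"):
            nxt, reaction = step(state, event)
            queue.append((nxt, trace + [(event, reaction)]))
    return tests

if __name__ == "__main__":
    for i, test in enumerate(generate_tests(), 1):
        steps = "; ".join(f"{event} -> {reaction}" for event, reaction in test)
        print(f"Test {i}: {steps}")
```

Among the enumerated sequences are exactly the kinds of tests that rarely appear in hand-written scenarios, for example a correct password attempted after the account has already been locked.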

To conclude, not all approaches to Model Based Testing will give you test cases you did not think of, as many of these tools simply automate tests from user-developed scenarios. Only automatic test design based on system models does this. At the same time, it is clear that automatic test design requires testers to model the relevant functionality to be tested. It requires testers to identify the critical parts of that functionality and steer the computer into them. System modeling is where the tester comes in, as a computer cannot think or read your mind (at least today). This approach does not eliminate testers; it makes them better and faster. It delivers higher product quality sooner, and the more data and functionality you add to your model, the more you will leverage the benefits of automatic test design.

Read more at www.conformiq.com/blog/

Thank you.