We live today in a world where we can quickly establish a call from our computer to people on the opposite side of the world. Our applications can be used anywhere, even in places we have never heard of! With the 21st century, we entered an era of big challenges around product internationalization. How can tests deal with so many languages, when support for a new language can become mandatory very quickly?

If you are not using model-based testing, you should really take a look at the technology. You can model the functionality to be tested at a higher level of abstraction than your application under test, and let an engine automatically generate your test cases for you. If you are already using model-based testing, perhaps you are wondering: “How can a model reflect the internationalization of my product?” There is more than one solution to this problem; here I share the one I think is the easiest.

The beauty of model-based testing is that with a high-level model, automated test generation can be done easily, and we can use this to deal with our multi-language challenge. When I model my application, I express the functionality at a higher level of abstraction using a “placeholder” concept. By this I mean that I do not check specific strings in my model, but describe, for example: “In my login page, find the form fields equivalent to ‘login’ and ‘password’, whatever the target language is.” I do not check the specific text “login” and “password” because that is correct only in English. I simply leave the language problem for the automation to deal with.
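To make the placeholder idea concrete, here is a minimal sketch in Python (not the notation of any particular modeling tool): an abstract test step that refers to UI labels by placeholder keys rather than by language-specific text. The class and field names are my own illustration, not part of a real framework.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CheckFormFields:
    """Abstract model step: verify that the named fields exist on a page."""
    page: str
    field_keys: List[str]  # placeholder keys such as "login" and "password"

# The model only ever refers to placeholder keys, never to translated text.
login_check = CheckFormFields(page="login_page", field_keys=["login", "password"])
```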

So, leaving this challenge to the test execution automation, when I generate test cases for this example I follow a schema like this: in my login page, I must check a form with the equivalents of “login” and “password.” Now I can run this test once by replacing the placeholders (for example, based on a mapping described in an Excel spreadsheet with the English equivalents) and generate the English version: “find in my login page form fields named ‘login’ and ‘password’.” When I add another column for French, my test execution tool will generate a test case with the same logic, but checking my login page for “nom d’utilisateur” and “mot de passe”. So here I let the test execution framework deal with the specific language, and my model stays generic!
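The execution side might then look something like the following sketch, assuming the spreadsheet’s columns have already been loaded into a simple dictionary. The `TRANSLATIONS` table and the `resolve_labels` helper are hypothetical names chosen for illustration, not the API of any real tool:

```python
# The spreadsheet columns, loaded into a dictionary: one entry per language,
# one row per placeholder key.
TRANSLATIONS = {
    "en": {"login": "login", "password": "password"},
    "fr": {"login": "nom d'utilisateur", "password": "mot de passe"},
}

def resolve_labels(field_keys, language):
    """Map abstract placeholder keys to the labels expected in the target language."""
    return [TRANSLATIONS[language][key] for key in field_keys]

# One generated, language-agnostic test step, executed once per language column:
abstract_step = {"page": "login_page", "fields": ["login", "password"]}
for lang in TRANSLATIONS:
    labels = resolve_labels(abstract_step["fields"], lang)
    print(f"[{lang}] check {abstract_step['page']} for fields: {labels}")
```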

Now, some applications, such as a TV set-top box or a web application, have menus or parts of menus that are only present with a specific country or language setting, and not with others. Here we can use well-known model-based testing techniques: based on the language selected at the beginning of our flow, we block some branches of the flow and thereby prevent tests from being generated for those settings. It is also worthwhile to know the exact coverage you have achieved for each language.
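As a rough sketch of that branch blocking, one could attach a language guard to each transition in the model and keep only the branches whose guard matches the selected language. The menu names and the guard representation below are purely illustrative, not the syntax of any specific model-based testing tool:

```python
# Each transition carries a guard: the set of languages in which its menu exists.
MENU_TRANSITIONS = [
    {"name": "open_settings_menu",  "languages": {"en", "fr", "de"}},
    {"name": "open_regional_guide", "languages": {"en"}},  # only present in some markets
]

def reachable_transitions(selected_language):
    """Keep only branches whose guard allows the selected language,
    so no test cases are generated for menus that do not exist in that setting."""
    return [t["name"] for t in MENU_TRANSITIONS if selected_language in t["languages"]]

print(reachable_transitions("fr"))  # ['open_settings_menu']
```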

In conclusion, abstraction of data is the key to internationalization of your product. In the long term, and for scalability, it is better to maintain one generic model with decisions driven by the selected language than to maintain one model per language. The lesson is that the model is not necessarily the only place where you need to think about automation: leveraging the interplay between automatic test generation and automatic test execution can also help you increase the automation of your test process. So use this trick in your modeling, and solve your internationalization problem in an easy yet scalable way.

Thank you for reading! Please comment!

Read more at: https://www.conformiq.com/category/blogs/

If you have a specific topic, please let me know!
