Implementing automated acceptance or integration tests as part of a project is very common nowadays. Even though some scripting was already done in “the earlier days” of software development to automate certain testing tasks, this has gained a lot more momentum with the move towards Agile Software Development. On the one hand this has led to much better tool support, as there is a huge variety of – freely available – Test Automation Frameworks nowadays. On the other hand the complexity of the Software under Test (SuT) is increasing, and so are the demands towards the Automated Tests in complexity and number.
The following is my personal top-5 list of problems that pop up quite regularly in the context of Test Automation for “bigger” (read: non-trivial) projects. For sure there is no universally valid solution to these problems, but I have nevertheless tried to give some ideas on how to tackle them from my personal experience. If you do not have these problems – or have finally overcome them – I would be very interested to read more about that in the comments section. The same is of course true for different or additional problems (and potential solutions).
Test data – Preparing the database
Almost any application nowadays depends on some kind of database. And the automated tests rely heavily on the data in that database to produce repeatable results. Now what are some of the problems with setting up this data?
- Often one depends on other systems and their database contents, which are kind of “out of reach” and cannot be influenced from within one’s own project.
- There is – for various reasons – no dedicated database instance available that can be used exclusively for the automated tests. This can be quite annoying, as manual usage might disturb the results of the automated tests, and the automated tests might corrupt the data of “other users” of that system.
- Defining and/or finding a proper set of test data can be difficult and sometimes requires several iterations. Typically the need for test data grows along with the progress of the development of the SuT.
Let’s not even consider the more extreme scenarios, where a legacy backend application might make use of a quite special database system like IMS, for example. That can make it really hard to initialize test data from within a Test Framework or a CI environment.
The ideas!
The single most important thing to do here is probably communication (which, by the way, helps with a lot of problems :-)). Often people outside the project (sharing the environment) do not really know what we are doing and aiming for with the automated tests. This is especially important if there is no chance to get a dedicated database instance for the CI environment, which is quite often the case. If a shared environment is used, it might be possible to define a certain set of test data that is then used exclusively by the automated tests. Of course organisational solutions are never perfect, but this can work quite well.
The bigger problem might be setting up the test data, especially when it can be invalidated by changes in other systems outside the project scope. Trying to find a perfect solution here might sometimes simply be impossible. If the potential changes to the external systems do not happen too frequently, a reproducible approach to generating valid test data might be a good choice. In a recent project we have done this by having reusable scripts that load new data into the system and export it into SQL scripts. Those can then be re-executed in the initialization phase of the test execution. Using the Robot Framework, we have created variables that contain the names/ids of that test data. By using only these variables in the actual tests, it is possible to switch to a new test data set without too many problems. (Of course this will work in a similar way with other Test Frameworks as well.)
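To make this a bit more tangible, here is a minimal sketch of the same idea with JUnit 5 instead of the Robot Framework. All names used here (the SQL script, the customer id, the in-memory H2 database URL) are purely illustrative and not taken from the actual project:

```java
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;

import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

class CustomerSearchIT {

    // Central place for the ids contained in the exported test data.
    // Switching to a new data set only requires changes here and in the SQL script,
    // not in the individual tests.
    static final String CUSTOMER_ID = "CUST-4711";

    @BeforeAll
    static void loadTestData() throws Exception {
        // Re-executable initialization phase: replay the exported SQL script.
        String script = Files.readString(Path.of("src/test/resources/testdata.sql"));
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:sut;DB_CLOSE_DELAY=-1", "sa", "");
             Statement stmt = con.createStatement()) {
            for (String statement : script.split(";")) {
                if (!statement.isBlank()) {
                    stmt.execute(statement);
                }
            }
        }
    }

    @Test
    void findsPreparedCustomer() {
        // The actual tests only ever refer to CUSTOMER_ID,
        // never to hard-coded values from the data set.
    }
}
```

With the Robot Framework the same ids simply live in variables; the important part is that the tests themselves never contain literal test data values.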
One idea that almost always shows up in the context of test data creation is using the application itself for it. Often you want to test creating new data sets through the application anyway. Then why not use that data for further tests and in the end also test the deletion using the SuT? Well, it almost never works out. First of all it means that the SuT must be able to create all the data that is required, which might often enough be the case. But then, for testing with a lot of data, the tests will become slow and potentially unstable. And even worse, all tests will now depend on the tests creating the data. If those fail, there is no way to tell whether any other part of the application is working or not. The tests deleting the data again might also fail, leaving quite some dead data in the system that can grow quickly, as these tests are executed often.
Probably the bottom line here is that it is better to have a reproducible approach to creating test data, even if it means some re-work every now and then, than striving for a perfect solution that might take really a lot of time and might still break in the end.
Web-Applications – Fighting with Selenium & Co.
A lot of Test Automation is done for modern web applications, and Selenium is quite often the tool of choice here. This is not bad in itself, as the problems would be similar with other tools. Nevertheless, Selenium has become a kind of synonym for the testing of web applications – and for the corresponding problems.
What are typical problems?
- Tests are not stable and show different results without any changes in the SuT.
- The SuT is based on web frameworks that are hard to test. Luckily I have even forgotten the name of one web framework that changed the id values of HTML elements every time a page was accessed.
- Typically the cool things (AJAX) are the ones that are hard to test.
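To illustrate the AJAX point: content that is loaded asynchronously is typically not yet in the DOM when the test looks for it, so naive element lookups fail intermittently. The usual remedy – nothing specific to this post, just standard Selenium practice – is an explicit wait. A small Java sketch with a purely made-up locator:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class AjaxWaitExample {

    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("https://example.org/search");
            // Explicit wait: poll for up to 10 seconds until the AJAX result
            // becomes visible instead of reading the DOM right away.
            // (Selenium 2/3 constructor; newer versions take a java.time.Duration.)
            WebElement results = new WebDriverWait(driver, 10)
                    .until(ExpectedConditions.visibilityOfElementLocated(By.id("search-results")));
            System.out.println(results.getText());
        } finally {
            driver.quit();
        }
    }
}
```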
What can be done?
Again this depends of course heavily on the project environment. Any new development should have Test Automation in mind from the very beginning – just do not make your life more complicated than it already is. Of course, working on or extending existing systems might not offer too many choices with respect to the technologies used.
“Insanity: doing the same thing over and over again and expecting different results.” -Albert Einstein
One approach that is often missed is to minimize the number of tests that make use of the GUI. Often we want to test the underlying services anyway, and even though it might seem to be more work to implement specific tests accessing the service layer directly than to reuse the GUI for this, it is often the much better solution in the long run. The tests are more stable this way and run a lot faster.
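As an illustration, such a service-level test could look roughly like the following sketch. It uses JUnit 5 and Java's HttpClient against a hypothetical REST endpoint of the SuT; the URL, the id and the JSON payload are assumptions for this example only:

```java
import org.junit.jupiter.api.Test;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

class CustomerServiceIT {

    @Test
    void customerCanBeReadViaTheServiceLayer() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:8080/api/customers/CUST-4711")).GET().build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // No browser, no rendering, no flaky timing - just the service contract.
        assertEquals(200, response.statusCode());
        assertTrue(response.body().contains("\"id\":\"CUST-4711\""));
    }
}
```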
Of course there is always a need to test the application through its GUI as well. But if certain tests require prerequisites, try to set those up without using the GUI (as mentioned above). Again, this will minimize the number of GUI operations performed via Selenium.
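Combined with the previous idea, a GUI test might then only contain the GUI-specific part, while the prerequisites are created through the service layer in the setup phase. Again a hedged sketch; all URLs, ids and locators are invented for illustration:

```java
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import static org.junit.jupiter.api.Assertions.assertTrue;

class OrderPageIT {

    @BeforeEach
    void createOrderViaService() throws Exception {
        // Prerequisite created via the (hypothetical) service API, not via the GUI.
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost:8080/api/orders"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"customerId\":\"CUST-4711\"}"))
                .build();
        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.discarding());
    }

    @Test
    void orderIsShownInTheOverview() {
        // Only the GUI-specific part is done with Selenium.
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://localhost:8080/orders");
            assertTrue(driver.findElement(By.id("order-table")).getText().contains("CUST-4711"));
        } finally {
            driver.quit();
        }
    }
}
```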
Finally, one should of course evaluate new or different approaches for GUI tests where possible. Using the Robot Framework under Java, for example, it was for a long time not possible to switch to the Selenium2Library, as it only worked with Python. But luckily there is now a port available that supports the Java side of the Robot Framework. This is just an example showing that one should always keep an eye on the development of the Test Framework one is using. In the early phases of a project, the availability of certain test functionality will for sure also affect the choice of the Test Framework.
CI-Server – Accessing the System
Your tests will run – and potentially fail 😉 – on the CI environment most of the time. Having a CI environment is of course an essential part of any test automation solution, but it can also have some stumbling blocks.
- Developers do not have proper access to the system to perform required configuration tasks. This can slow down the whole development to some extent.
- The system is shared among too many projects. This can be a problem especially with the server running the selenium-server. If the tests of one project go mad and produce, for example, tons of open browser windows, this will have a negative impact on all other projects as well.
- There is no possibility to log in to all parts of the system – potentially as admin – to perform certain test steps manually, for example to reproduce error situations that do not occur when testing locally.
Unfortunately these are the kinds of problems where often not too much can be done. Depending on the size of the organisation, these things might simply be hard to tackle. In case of problems, try to get hold of some “system guy” who can help you access the system :-).
Complexity – Lost in Test Automation
The complexity of designing a good test automation suite is often highly underestimated. (This also goes hand in hand with the next point about the project effort.) Writing the tests in a way that they are still maintainable once there are many of them is one challenge. The other challenge is making the tests as independent from changes in the SuT as they can possibly be.
The tool support for refactoring tests is also often not as good as for a programming language like Java, for example. Bigger changes to the structure of the tests might be hard to implement, or at least they will require a lot of effort. Therefore the “design phase” of the test suite is really, really important. The following blog post from my colleague Andreas shows one approach to reach this goal. Of course this always has to be adapted to one’s own needs.
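Just to illustrate the general idea – and not necessarily the approach described in the referenced post – one widely used way to keep GUI tests independent from changes in the SuT is to encapsulate all page details in dedicated classes, known as the Page Object pattern. The locators here are, of course, made up:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class LoginPage {

    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // If the SuT changes its login form, only this class needs to be adapted;
    // the tests themselves keep calling login(...) unchanged.
    public void login(String user, String password) {
        driver.findElement(By.id("username")).sendKeys(user);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("login-button")).click();
    }
}
```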
Another thing to look at is the amount vs. the quality of the tests. Most often the goal is to test important error situations and of course certain “good cases” as well. But is there really a need to have a lot of tests all testing more or less the same functionality in the same way? All these tests have to be maintained, and the worst thing that can happen to any test automation is that failing tests are accepted as normal. Be realistic about the number of tests that can be handled in the project. This will also speed up the overall test execution.
Test Automation – No additional project effort, please
Writing automated tests is effort. And it can even be a lot of effort. Most of the time this effort is very well spent and will pay off by increasing the project quality and saving time over the lifetime of the project. Nevertheless, this is often underestimated, and the bad thing that happens then is that the team tries to implement and maintain these tests kind of “by-the-way”. Sometimes this might work, but more often the project will end up with tests that do not run properly or are hard to maintain. Quite a few projects might also need to build up competences in writing automated tests (using a certain tool), which again takes time that must be planned as part of the project.
In an agile environment we should clearly come to the point where writing the automated tests is part of every sprint, because it is part of the team’s Definition of Done. Trying to implement the tests at the end of the project is not only something that will hardly ever work out, but it also means that a lot of the advantages of having the tests in every sprint are never utilized.