New software products are conceptualized every day, and their designers and architects must decide on the architecture paradigm used to build them. The landscape has been shifting from the monolithic style of architecture to independently deployable, easy-to-scale, and more focused microservice-based architectures. This paradigm shift warrants a shift in testing focus too.
In this article, I will cover the various kinds of tests, some of the gotchas associated with testing a product whose underlying architecture is microservice driven, and some of the things to avoid when many cross-functional teams work on a large-scale product.
The test pyramid can be used as a rule of thumb for categorizing tests by kind, complexity, and level, and for determining the number of tests to target at each level. The pyramid suggests that roughly 70% of the overall number of tests in the system should be unit tests, 20% should be service layer tests or acceptance criteria tests, and 10% should be end-to-end flow tests. We cannot insist on this ratio for every user story, but we should use it as a guideline to ensure there are more unit tests than service layer tests, which in turn outnumber the end-to-end integration tests.
This keeps tests maintainable at each level and isolates them to prevent duplication; it also facilitates easy refactoring and extension of the code when necessary.
The complexity, granularity, cost, and run time of the tests increase as we move toward the top of the pyramid. (Fig 1. Test Pyramid)
Unit testing
A unit test is the smallest, most granular independent test that can exercise a piece of code in isolation. It can cover a single line of code or a few lines together. Unit tests tend to be very fast; thousands can run in a matter of seconds, so they form an integral part of the feedback loop. They report failures when regressions occur. Because they are so granular, identifying and debugging the regressed code is easy; in other words, unit tests pinpoint where the fault lies in the software and what has changed since the code was last updated. Their tight scope ensures the tests do not bleed beyond their boundaries.
Unit tests are the least expensive of all kinds of testing, in both cost and time, when testing a product built on microservices.
Of course, the onus of keeping unit tests light lies with the designer. One must be careful to avoid heavy mocking and stubbing. Where needed and possible, the software's dependencies should be called directly within the tests. Having few or no mocks has a couple of benefits: it avoids a build-up of complexity in the tests and keeps them easy to understand.
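A minimal sketch of such a test, using Python's built-in unittest; the `apply_discount` helper and its rules are hypothetical, chosen only to show a fast, isolated unit test with no mocks:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical pricing helper belonging to a small service."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    # Each test exercises one narrow behavior, so a failure
    # points directly at the regressed code path.
    def test_regular_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Because nothing here touches a network, disk, or mock framework, a suite of thousands of tests like these runs in seconds (for example via `python -m unittest`).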
Service layer testing/Acceptance testing
Service layer testing verifies a specific piece of business logic, an API call, an interface, or a particular behavior provided as a service. Given that a microservice serves a single, unique business case, this layer of testing exercises pretty much all components within a microservice. It sits between the unit tests and the end-to-end tests: service layer tests cover either the interactions between two or more microservices or an end-to-end test of a single microservice encompassing all of its layers. These tests should verify that the communication channels between components are sound and that the interfaces between components within the microservice work as expected.
Unit tests verify the smallest level of functionality; when we want to test functionality that spans two or more units, such as an API, an interface, or a small service, we turn to service layer testing.
Various tools support this kind of testing; one I recommend is Cucumber.
Cucumber enables Behavior-driven development (BDD) testing and can help us translate business logic directly to code, using the same language as used in the acceptance criteria.
BDD prompts the product owner to think about the exact behavior of the system and to identify the exact inputs and outputs, which gives a clear picture to the team and to the designer who will implement the code. It bridges a massive communication gap, and another positive side effect is that BDD enables better story estimation and predictability. It is a great aid in translating user stories into functional tests, eliminating much of the need for documentation.
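To illustrate the shape of such a test: the sketch below mimics Cucumber's Given/When/Then structure in plain Python (in a real Cucumber setup the plain-language steps live in a feature file and bind to step definitions). The `Cart` example and its acceptance criterion are invented for illustration:

```python
# Hypothetical acceptance criterion, as the product owner would state it:
#   Given a customer with an empty cart,
#   When they add two items,
#   Then the cart total reflects both items.

class Cart:
    def __init__(self):
        self.items = []

    def add(self, name: str, price: float) -> None:
        self.items.append((name, price))

    @property
    def total(self) -> float:
        return sum(price for _, price in self.items)

def test_adding_items_updates_cart_total():
    # Given an empty cart
    cart = Cart()
    # When two items are added
    cart.add("book", 12.50)
    cart.add("pen", 2.50)
    # Then the total reflects both
    assert cart.total == 15.0
```

The value of the BDD framing is that the comments above are the same words the acceptance criterion uses, so the product owner can read the test and confirm it captures the intended behavior.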
Service layer testing can be further split into the following (as shown in Fig 2. Test pyramid for microservices):
- Integration tests: verify integrating components such as message queues, caches, and database interactions. The granularity may vary depending on the components under test.
- Component tests: a component is a small subset of the overall functionality, built for a specific purpose with clear boundaries. A component may itself be a microservice, or several components together may form one. These tests verify the functionality of a given component. Testing individual components thoroughly reduces spill-over into the end-to-end testing layer and, as a result, keeps tests from becoming expensive. Component tests give more confidence than unit tests because they cover a bigger chunk of code and functionality.
- Contract tests: verify the interactions at component boundaries, such as APIs and interfaces. They ensure backward compatibility is maintained across versions of the software.
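A minimal consumer-side contract check might look like the sketch below. The `USER_CONTRACT` fields and the version-2 payload are hypothetical; tools such as Pact formalize this idea across provider and consumer teams:

```python
# Hypothetical consumer-side contract: the fields this consumer relies
# on, with their expected types. New provider versions may ADD fields,
# but must not remove or retype these.
USER_CONTRACT = {"id": int, "name": str, "email": str}

def satisfies_contract(payload: dict, contract: dict) -> bool:
    """Check every contracted field is present with the right type."""
    return all(
        field in payload and isinstance(payload[field], expected)
        for field, expected in contract.items()
    )

# A v2 response that added a field but kept the old ones is compatible:
v2_response = {"id": 7, "name": "Ada", "email": "ada@example.com",
               "created_at": "2019-01-01"}
assert satisfies_contract(v2_response, USER_CONTRACT)

# A response that dropped "email" breaks the contract:
broken_response = {"id": 7, "name": "Ada"}
assert not satisfies_contract(broken_response, USER_CONTRACT)
```

Running such checks against every new provider version catches backward-compatibility breaks before they reach the expensive end-to-end layer.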
End to end testing
This is the layer that gives external stakeholders the most assurance, as it demonstrates that the system works end to end as expected. It is also the most over-used testing layer, and when that goes unnoticed, the pyramid can easily become inverted, or take on an ice-cream-cone or hourglass shape. End-to-end testing evaluates whether the system behaves as the customers and stakeholders intended; for instance, channeling a UI request through the various components and the database layer and verifying we get the right response back.
This is the costliest of all kinds of tests, as it requires a test environment set up very similarly to production, and these tests take much longer to run than unit and service layer tests. The feedback loop is slow here, because one has to wait until all layers are developed before the first end-to-end test can run. If there is a long lead time before the components come together, there is a risk that they will make unchecked assumptions about each other's interfaces, APIs, and inputs. To avoid wrong assumptions, it is worth investing in upfront analysis of each component's contract and agreeing on exception and error-handling guidelines to prevent confusion in reporting.
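In miniature, an end-to-end test drives a request through the deployed system the way a real client would and asserts on the final response. The sketch below stands in for that with an in-process HTTP service (the `/orders/42` endpoint and its payload are invented); a real end-to-end test would instead target a production-like environment:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A stand-in "microservice": in a real end-to-end test this would be a
# deployed service backed by its own datastore and upstream components.
class OrderHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"order_id": 42, "status": "shipped"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), OrderHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Drive the request the way a client (or the UI) would, end to end.
url = f"http://127.0.0.1:{server.server_port}/orders/42"
with urllib.request.urlopen(url) as resp:
    payload = json.load(resp)

server.shutdown()
assert payload["status"] == "shipped"
```

Even this toy version shows why the layer is expensive: the service must be up and reachable before the first assertion can run, which is exactly the slow feedback loop described above.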
Apart from the functional verification tests that exercise a feature end to end, other tests are needed: exploratory tests, which are manual; scalability tests to verify the elasticity of the microservices; performance tests; and robustness and availability tests on how the product sustains unplanned incidents such as power outages, clean starts, and upgrades.
Nevertheless, we need to ensure we capture the crucial use cases and that they meet the requirements of the product owners, stakeholders, and customers.
With all kinds of tests, we also need to capture negative test cases, since it is not always a happy scenario at the customer site. The right tests verify the functional and non-functional aspects of the product, with good exception and error handling and, most importantly, reporting. Together, positive and negative tests should make the suite comprehensive and give confidence that the system behaves as expected. A few considerations during test implementation can reduce maintenance cost and make testing faster:
1. Automation of tests
Unit tests are automated quite easily during the development phase. But as we go up the pyramid, both the pace of automation and the number of automated tests drop. Automation fails mainly because of:
- The complexity of bringing two or more services together to enable automation. It is for this very reason that we should automate: it tells us whether we depend on a large number of modules and libraries, and it helps us eliminate transitive dependencies.
- The project and the program not allocating bandwidth to the development teams to write tests, or a lack of understanding within the teams and management of how good tests prevent fault slippage.
- Of course, unlike traditional testing use cases, with microservices not every scenario can be automated, especially when the tests depend on a distributed environment or on frameworks, third-party products, or simulations available only at the production site. In such cases we need tools that can inject faults resembling real-world conditions. Byteman helps with such fault injection: it is a tool that makes it easy to trace, monitor, and test the behavior of Java applications and JDK runtime code. Similar tools exist for other technology stacks, so do explore.
- Tests should be easily understandable by everyone; tests that are hard to follow often lead to rewrites and duplication. Tools like Cucumber help here: their tests are easy to read, understand, and discuss, they promote reusability, and they make it difficult to write a wrong test.
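As a small illustration of the fault-injection idea (in Python rather than Byteman, which targets Java): a mock dependency is forced to raise, and the test verifies the caller degrades gracefully. The `fetch_balance` function and its timeout policy are hypothetical:

```python
from unittest import mock

# Hypothetical client code under test: it should degrade gracefully
# when a downstream dependency times out.
def fetch_balance(client, account_id):
    try:
        return client.get_balance(account_id)
    except TimeoutError:
        return None  # fall back rather than crash on a slow dependency

# Inject the fault: force the dependency to raise, the way Byteman
# would inject a failure into a running Java application.
client = mock.Mock()
client.get_balance.side_effect = TimeoutError("simulated outage")

assert fetch_balance(client, "acct-1") is None
```

The same pattern scales up to injecting outages, slow responses, or malformed payloads that would be hard to trigger on demand in a real distributed environment.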
2. Elimination of non-value tests
With multiple layers of testing comes the bane of repeated tests, and this duplication often costs a lot of time and money, both in test environments and in the time designers spend writing them. A clear test strategy should ensure that the tests produced at each layer of the pyramid add value. We cannot completely avoid overlap, but we should evaluate it prudently and make informed decisions.
From traditional test experience, we might assume that certain kinds of tests belong to a certain layer; for instance, that UI tests can be done only at the end-to-end stage. That is not true: the UI can be tested across all layers.
3. Getting rid of unstable tests
Some tests do not produce consistent results across runs; they are unstable, flaky, or non-deterministic. Maintaining them is a problem for designers and DevOps engineers, because the results, whether positive or negative, mean nothing, yet they consume a lot of time in reproduction attempts that usually fail, since the next run may have overwritten the logs, or the tests depend on particular loads and applications running on the test environments.
4. Impact on Story cycle time
Delaying test planning to a later stage of project implementation puts the project at high risk in terms of cycle time and turnaround time. Tests with a long lead time, whether in setting up the environment or implementing the initial framework, should be planned and started in advance, so the overall project stays on schedule with quality.
5. Keep the tests up-to-date
Let's say that despite having the right tests at the right layers, a fault still slips through. Along with implementing and delivering the code fix, we need to fix the tests too, adding a test at the right layer to prevent the failure from recurring. Typically, for a small code fix, adding a new unit test suffices. Again, judgment should determine the layer at which the test is added, in order to get maximum value.
There will be more learnings if we keep a careful watch, with the aim of constantly improving.
This blog is brought to you in partnership with ASPE.