Component (service) testing

In a microservice context, the service itself can be considered a component, so testing a whole microservice is component testing. By definition, a component is any well-encapsulated, coherent, and independently replaceable part of a larger system.

This testing has to be done against an already deployed microservice, in an environment that is a good representation of production. Consequently, the service or component itself must not contain any test-specific logic.

Because component testing includes real network and database calls, the service can be pointed at a test database through configuration injection. This increases the tester's workload: test databases, stubs, and configurations all have to be started, stopped, and maintained. And because the tests run in a close-to-real environment, they face real-world issues that do not need to be handled at this stage; for example, database interactions can make a test plan time consuming, and network calls are not only slow but can also suffer packet loss.

One option to overcome this is to use a test clone of the original external service. The question then becomes whether these test clones should be injected by the testing environment or loaded into memory by the code itself. One way to handle this is to control it through test configuration. For databases, if you have key-based sharding in place, using a particular keyword such as TEST in the key lets you store test data in a separate database. That way, production and test data are never mixed in the same database while testing in a production environment. Testing in production is not recommended, but there are cases where sanity testing on production cannot be avoided. You may still wonder: is this the correct way, or should an in-memory database be invoked during tests? Should there be a real network call? Should a deployed artifact contain any loophole code for testing in production? You may have more such doubts.
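As a rough sketch of the key-based approach, the routing logic below sends any key carrying a test marker to a separate database. The class name, the TEST prefix, and the JDBC URLs are illustrative assumptions, not part of any specific framework:

```java
// Hypothetical sketch: route data by key so that test data written during
// a production sanity check lands in a dedicated test database.
public class ShardRouter {

    static final String TEST_PREFIX = "TEST";

    // Returns the JDBC URL of the shard that should store this key.
    static String resolveShard(String key) {
        if (key.startsWith(TEST_PREFIX)) {
            // Keys marked as test data go to a separate test database,
            // so production and test rows never mix.
            return "jdbc:postgresql://db-test:5432/orders_test";
        }
        // Normal keys are sharded across the production databases.
        int shard = Math.abs(key.hashCode()) % 2;
        return "jdbc:postgresql://db-" + shard + ":5432/orders";
    }

    public static void main(String[] args) {
        System.out.println(resolveShard("TEST-order-42"));
        System.out.println(resolveShard("order-42"));
    }
}
```

The key prefix acts as the configuration hook: the service code stays unchanged, and only keys deliberately marked as test data take the test path.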

Despite all of this, the core problem remains the same: any call to a database or a third party is time consuming. Suppose instead that we create these test clones and databases in memory. The first and foremost benefit is that there are no network calls, which reduces the time taken to run the test cases. Imagine that you have thousands of test cases or Selenium UI test cases; they can take 5 to 6 hours to run depending on their number and complexity, and the running time grows further if they communicate over a network and write data to a database. Running everything in memory makes the test cases faster and eases the build complexity of the components. To avoid the headache of real network calls, there are prebuilt libraries that solve this problem; for example, there is inproctester for the JVM and plasma for .NET. This brings component testing even closer to reality. The same can be done with the database: an in-memory database such as H2 can help here, and databases that offer embedded versions, such as Elasticsearch and Neo4j, can help in the same way.
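Libraries such as inproctester embed the stubbed service inside the test process itself. The same idea can be sketched with only the JDK's built-in `com.sun.net.httpserver` server, which stands in for the external service on a local port; it still touches the loopback interface, unlike a true in-process stub, but it removes any external dependency. The `/price` endpoint and its payload are invented for illustration:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class InProcessCloneDemo {

    // Starts a tiny HTTP clone of an external service inside the test JVM.
    // Port 0 lets the OS pick a free port, so tests never collide.
    static HttpServer startClone() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/price", exchange -> {
            byte[] body = "{\"price\": 42}".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        HttpServer clone = startClone();
        String base = "http://localhost:" + clone.getAddress().getPort();

        // The service under test would be configured with `base` instead of
        // the real third-party URL.
        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(base + "/price")).build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());   // {"price": 42}

        clone.stop(0);
    }
}
```

Because the clone starts and stops with the test itself, there is no external stub process for the tester to manage, which addresses the start-and-stop burden mentioned earlier.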

Making all these arrangements enables microservice testing in an isolated environment. It also gives us much more control over the testing environment and makes it easier to replicate issues reported in production. Rather than putting a hack in the code, the decisions about which request patterns should be served by an in-memory test clone and which should go to an in-memory database clone should be handled by configuration outside the application.
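A minimal sketch of such externalized routing, assuming a simple properties file injected by the test environment; the path patterns and target names here are hypothetical:

```java
import java.io.StringReader;
import java.util.Properties;

public class TestCloneRouter {

    private final Properties routes;

    TestCloneRouter(Properties routes) {
        this.routes = routes;
    }

    // Decides where a request path should be sent: an in-memory clone or
    // the real backend. The decision lives in configuration, not in code.
    String targetFor(String path) {
        for (String pattern : routes.stringPropertyNames()) {
            if (path.startsWith(pattern)) {
                return routes.getProperty(pattern);
            }
        }
        return "real-backend";
    }

    public static void main(String[] args) throws Exception {
        // In practice this would be a file supplied by the test environment;
        // the entries are loaded here inline purely for illustration.
        Properties config = new Properties();
        config.load(new StringReader(
                "/billing=in-memory-clone\n/search=embedded-elasticsearch\n"));

        TestCloneRouter router = new TestCloneRouter(config);
        System.out.println(router.targetFor("/billing/invoice/9"));  // in-memory-clone
        System.out.println(router.targetFor("/orders/1"));           // real-backend
    }
}
```

Swapping the properties file is enough to retarget traffic, so the deployed artifact needs no test-only branches in its code.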
