Chapter 14. Technologies for Nanoservices

Section 14.1 discusses the advantages of nanoservices and why nanoservices can be useful. Section 14.2 defines nanoservices and distinguishes them from microservices. Section 14.3 focuses on Amazon Lambda, a cloud technology that can be used with Python, JavaScript, or Java; instead of renting virtual machines or application servers, each function call is billed individually. OSGi (section 14.4) modularizes Java applications and also provides services. Another Java technology for nanoservices is Java EE (section 14.5), if used correctly. Vert.x, another option (section 14.6), also runs on the JVM but supports a broad variety of programming languages in addition to Java. Section 14.7 focuses on the programming language Erlang, which is quite old; its architecture enables the implementation of nanoservices. Seneca (section 14.8) takes a similar approach to Erlang but is based on JavaScript and has been specially designed for the development of nanoservices.

The term “microservice” is not uniformly defined. Some people believe microservices should be extremely small services—that is, ten to a hundred lines of code (LoC). This book calls such services “nanoservices.” The distinction between microservices and nanoservices is the focus of this chapter. A suitable technology is an essential prerequisite for the implementation of small services. If the technology, for instance, combines several services into one operating system process, the resource utilization per service can be decreased and the service rollout in production facilitated. This decreases the expenditure per service, which enables support of a large number of small nanoservices.

14.1 Why Nanoservices?

Nanoservices are well in line with the previously discussed size limits of microservices: Their size is below the maximum size, which was defined in section 3.1 and depends, for instance, on the number of team members. In addition, a microservice should be small enough to still be understood by a developer. With suitable technologies the technical limits for the minimal size of a microservice, which were discussed in section 3.1, can be further reduced.

Very small modules are easier to understand and therefore easier to maintain and change. Besides, smaller microservices can be replaced more easily by new implementations or a rewrite. Accordingly, systems consisting of minimally sized nanoservices can more easily be developed further.

There are systems that successfully employ nanoservices. In practice it is rather modules that are too large that cause problems and prevent the successful further development of a system. Each functionality could be implemented in its own microservice—each class or function could become a separate microservice. Section 9.2 demonstrated that for CQRS it can be sensible to implement a microservice that only reads data of a certain type, while writing the same type of data is implemented in another microservice. So microservices can really have a pretty small scope.

Minimum Size of Microservices is Limited

What are reasons against very tiny microservices? Section 3.1 identified factors that render microservices below a certain size not practicable:

• The expenditure for infrastructure increases. When each microservice is a separate process and requires infrastructure, such as an application server and monitoring, the expenditure necessary for running hundreds or even thousands of microservices becomes too large. Therefore, nanoservices require technologies that make it possible to keep the expenditure for infrastructure per individual service as small as possible. In addition, a low resource utilization is desirable. The individual services should consume as little memory and CPU as possible.

• In the case of very small services a lot of communication via the network is required. That has a negative influence on system performance. Consequently, when working with nanoservices, communication between the services should not occur via the network. This might result in less technological freedom: when all nanoservices run in a single process, they usually have to employ the same technology. Such an approach also affects system robustness. When several services run in the same process, it is much more difficult to isolate them from each other. A nanoservice can use up so many resources that other nanoservices no longer operate error free. When two nanoservices run in the same process, the operating system cannot intervene in such situations. In addition, a crash of one nanoservice can cause additional nanoservices to fail: if the shared process crashes, all nanoservices running in it fail with it.

The technical compromises can have a negative effect on the properties of nanoservices. In any case the essential feature of microservices has to be maintained—namely, the independent deployment of the individual services.

Compromises

In the end the main task is to identify technologies that minimize the overhead per nanoservice and at the same time preserve as many advantages of microservices as possible.

In detail the following points need to be achieved:

• The expenditure for infrastructure such as monitoring and deployment has to be kept low. It has to be possible to bring a new nanoservice into production without much effort and to have it immediately displayed in monitoring.

• Resource utilization, for instance in regard to memory, should be as low as possible to enable a large number of nanoservices on little hardware. This not only makes the production environment cheaper but also facilitates the creation of test environments.

• Communication should be possible without the network. This not only improves latency and performance but also increases the reliability of the communication between nanoservices because it is not influenced by network failures.

• Concerning isolation, a compromise has to be found. The nanoservices should be isolated from each other so that one nanoservice cannot cause another nanoservice to fail. Otherwise, a single nanoservice might cause the entire system to break down. However, achieving a perfect isolation might be less important than having a lower expenditure for infrastructure, a low resource utilization, and the other advantages of nanoservices.

• Using nanoservices can limit the choice of programming languages, platforms, and frameworks. Microservices, on the other hand, enable, in principle, a free choice of technology.

Desktop Applications

Nanoservices enable the use of microservice approaches in areas in which microservices themselves are hardly useable. One example is the possibility of dividing a desktop application into nanoservices. OSGi (section 14.4) is, for instance, used for desktop and even for embedded applications. A desktop application consisting of genuine microservices, on the other hand, would probably be too difficult to deploy. Each microservice has to be deployed by itself, and that is hardly feasible for a large number of desktop clients—some of which might even be located in other companies. Moreover, the integration of several microservices into a coherent desktop application is hard—in particular if they are implemented as completely separate processes.

14.2 Nanoservices: Definition

A nanoservice differs from a microservice. It compromises in certain areas. One of these areas is isolation: multiple nanoservices run on a single virtual machine or in a single process. Another area is technology freedom: nanoservices use a shared platform or programming language. Only with these limitations does the use of nanoservices become feasible. The infrastructure can be so efficient that a much larger number of services is possible. This enables the individual services to be smaller. A nanoservice might comprise only a few lines of code.

However, by no means may the technology require a joint deployment of nanoservices, for independent deployment is the central characteristic of microservices and also of nanoservices. Independent deployment constitutes the basis for the essential advantages of microservices: teams that can work independently, strong modularization, and, as a consequence, sustainable development.

Therefore, nanoservices can be defined as follows:

• Nanoservices compromise in regard to some microservice properties such as isolation and technology freedom. However, nanoservices still have to be independently deployable.

• The compromises enable a larger number of services and therefore smaller services. Nanoservices can contain just a few lines of code.

• To achieve this, nanoservices use highly efficient runtime environments. These exploit the restrictions of nanoservices in order to enable more and smaller services.

Thus, nanoservices depend a lot on the employed technologies. The technology determines which compromises a nanoservice makes and therefore which size a nanoservice can have. This chapter is therefore organized around different technologies to illustrate the possible varieties of nanoservices.

The objective of nanoservices is to amplify a number of advantages of microservices. Having even smaller deployment units decreases the deployment risk further, facilitates deployment even more, and yields services that are easier to understand and to replace. In addition, the domain architecture changes: A Bounded Context that might consist of one or a few microservices will now comprise a multitude of nanoservices that each implement a very narrowly defined functionality.

The difference between microservices and nanoservices is not strictly defined: If two microservices are deployed in the same virtual machine, efficiency increases, and isolation is compromised. The two microservices now share an operating system instance and a virtual machine. When one of the microservices uses up the resources of the virtual machine, the other microservice running on the same virtual machine will also fail. This is the compromise in terms of isolation. So in a sense these microservices are already nanoservices.

By the way, the term “nanoservice” is not used very much. This book uses the term “nanoservice” to make it plain that there are modularizations that are similar to microservices but differ when it comes to detail, thereby enabling even smaller services. To distinguish these technologies with their compromises clearly from “real” microservices the term “nanoservice” is useful.

14.3 Amazon Lambda

Amazon Lambda1 is a service in the Amazon Cloud. It is available worldwide in all Amazon computing centers.

1. http://aws.amazon.com/lambda

Amazon Lambda can execute individual functions that are written in Python, JavaScript with Node.js, or Java 8 with OpenJDK. The code of these functions does not have dependencies on Amazon Lambda. Access to the operating system is possible. The computers the code is executed on contain the Amazon Web Services SDK as well as ImageMagick for image manipulations. These functionalities can be used by Amazon Lambda applications. Besides, additional libraries can be installed.
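For illustration, a Java function for Amazon Lambda can be as simple as the following sketch. The package, class, and method names are made up; the handler is specified as "com.example.Echo::handle" when the function is created.

package com.example;

// A plain Java class without any Amazon-specific imports. Lambda
// invokes the configured method with the deserialized request as
// the argument and serializes the return value as the response.
public class Echo {
    public String handle(String input) {
        return "echo: " + input;
    }
}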

Amazon Lambda functions have to start quickly because they can be started anew for each individual request. For the same reason the functions must not hold state between calls.

Billing is per individual request: there are no costs when no requests cause an execution of the functions. Currently, the first million requests are free. Beyond that, the price depends on the allocated RAM and the processing time.

Calling Lambda Functions

Lambda functions can be called directly via a command line tool. The processing occurs asynchronously. The functions can return results via different Amazon functionalities. For this purpose, the Amazon Cloud contains messaging solutions such as Simple Notification Service (SNS) or Simple Queue Service (SQS).

The following events can trigger a call of a Lambda function:

• In Simple Storage Service (S3) large files can be stored and downloaded. Such actions trigger events to which an Amazon Lambda function can react (see the sketch after this list).

• Amazon Kinesis can be used to administer and distribute data streams. This technology is meant for the real-time processing of large amounts of data. Lambda functions can be called as a reaction to new data in these streams.

• With Amazon Cognito it is possible to use Amazon Lambda to provide simple back ends for mobile applications.

• The API Gateway provides a way to implement REST APIs using Amazon Lambda.

• Furthermore, it is possible to have Amazon Lambda functions be called at regular intervals.

• As a reaction to a notification in Simple Notification Service (SNS), an Amazon Lambda function can be executed. As there are many services which can provide such notifications, this makes Amazon Lambda useable in many scenarios.

• DynamoDB is a database within the Amazon Cloud. In case of changes to the database it can call Lambda functions. So Lambda functions essentially become database triggers.
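For illustration, a Java handler reacting to the S3 events mentioned above might look like the following sketch. It assumes the aws-lambda-java-core and aws-lambda-java-events libraries on the classpath; the class name is made up.

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;

// Hypothetical handler: logs the key of every newly stored S3 object.
public class S3Logger implements RequestHandler<S3Event, Void> {
    @Override
    public Void handleRequest(S3Event event, Context context) {
        event.getRecords().forEach(record ->
            context.getLogger().log("new object: "
                + record.getS3().getObject().getKey()));
        return null;
    }
}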

Evaluation for Nanoservices

Amazon Lambda enables the independent deployment of different functions without problems. They can also bring their own libraries along.

The technological expenditure for infrastructure is minimal when using this technology: A new version of an Amazon Lambda function can easily be deployed with a command line tool. Monitoring is also simple: the functions are immediately integrated into CloudWatch. CloudWatch is offered by Amazon to create metrics of cloud applications and to consolidate and monitor log files. In addition, alarms can be defined based on these data and forwarded by SMS or email. Since all Amazon services can be contacted via an API, monitoring and deployment can be automated and integrated into your own infrastructure.

Amazon Lambda provides integration with the different Amazon services such as S3, Kinesis, and DynamoDB. It is also easily possible to contact an Amazon Lambda function via REST using the API Gateway. However, Amazon Lambda requires that Node.js, Python, or Java be used. This profoundly limits the technology freedom.

Amazon Lambda offers an excellent isolation of functions. This is also necessary since the platform is used by many different users. It would not be acceptable for a Lambda function of one user to negatively influence the Lambda functions of other users.

Conclusion

Amazon Lambda enables you to implement extremely small services. The overhead for the individual services is very small. Independent deployment is easily possible. A Python, JavaScript, or Java function is the smallest deployment unit supported by Amazon Lambda—it is hardly possible to make them any smaller. Even if there is a multitude of Python, Java, or JavaScript functions, the expenditure for the deployments remains relatively low.

Amazon Lambda is a part of the Amazon ecosystem. Therefore, it can be supplemented by technologies like Amazon Elastic Beanstalk. There, microservices can run that can be larger and written in other languages. In addition, a combination with Elastic Computing Cloud (EC2) is possible. EC2 offers virtual machines on which any software can be installed. Moreover, there is a broad choice in regard to databases and other services that can be used with little additional effort. Amazon Lambda defines itself as a supplement of this tool kit. In the end one of the crucial advantages of the Amazon Cloud is that nearly every possible infrastructure is available and can easily be used. Thus developers can concentrate on the development of specific functionalities while most standard components can just be rented.


Try and Experiment

• There is a comprehensive tutorial2 that illustrates how to use Amazon Lambda. It does not only demonstrate simple scenarios but also shows how to use more complex mechanisms, such as different Node.js libraries, how to implement REST services, and how to react to different events in the Amazon system. Amazon offers free quotas for most services to new customers. In the case of Lambda each customer gets a free quota that is fully sufficient for tests and for getting to know the technology—the first million calls during a month are free. However, you should check the current pricing.3

2. http://aws.amazon.com/lambda/getting-started/

3. https://aws.amazon.com/lambda/pricing/


14.4 OSGi

OSGi4 is a standard with many different implementations.5 Embedded systems often use OSGi. Also the development environment Eclipse is based on OSGi, and many Java desktop applications use the Eclipse framework. OSGi defines a modularization within the JVM (Java Virtual Machine). Even though Java enables a division of code into classes or packages, there is no modular concept for larger units.

4. http://www.osgi.org/

5. http://en.wikipedia.org/wiki/OSGi#Current_framework_implementations

The OSGi Module System

OSGi supplements Java by such a module system. To do so OSGi introduces bundles into the Java world. Bundles are based on Java’s JAR files, which comprise code of multiple classes. Bundles have a number of additional entries in the file META-INF/MANIFEST.MF, which each JAR file should contain. These entries define which classes and interfaces the bundle exports. Other bundles can import these classes and interfaces. Therefore OSGi extends Java with a quite sophisticated module concept without inventing entirely new concepts.

Listing 14.1 OSGi MANIFEST.MF

Bundle-Name: A service
Bundle-SymbolicName: com.ewolff.service
Bundle-Description: A small service
Bundle-ManifestVersion: 2
Bundle-Version: 1.0.0
Bundle-Activator: com.ewolff.service.Activator
Export-Package: com.ewolff.service.interfaces;version="1.0.0"
Import-Package: com.ewolff.otherservice.interfaces;version="1.3.0"

Listing 14.1 shows an example of a MANIFEST.MF file. It contains the description and name of the bundle and the bundle activator. This Java class is executed upon the start of the bundle and can initialize the bundle. Export-Package indicates which Java packages are provided by this bundle. All classes and interfaces of these packages are available to other bundles. Import-Package serves to import packages from another bundle. The packages can also be versioned.

In addition to interfaces and classes, bundles can also export services. However, an entry in MANIFEST.MF is not sufficient for this; code has to be written. In the end, services are just Java objects. Other bundles can import and use the services, and calling a service also happens in code.
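For illustration, exporting and importing a service programmatically might look like the following sketch of a bundle activator. OrderService and OrderServiceImpl are hypothetical types; in a real system the imported service might not yet be available when start() runs, which is exactly the kind of dynamics the declarative approaches described below take care of.

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

public class Activator implements BundleActivator {

    public void start(BundleContext context) {
        // export the implementation as an OSGi service;
        // OrderService / OrderServiceImpl are hypothetical types
        context.registerService(OrderService.class,
                new OrderServiceImpl(), null);

        // import a service that another bundle has exported and call it
        ServiceReference<OrderService> reference =
                context.getServiceReference(OrderService.class);
        OrderService orderService = context.getService(reference);
        orderService.processOrders();
    }

    public void stop(BundleContext context) {
    }
}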

Bundles can be installed, started, stopped, and uninstalled at runtime. Therefore, bundles are easy to update: Stop and uninstall the old version, then install a new version and start. However, if a bundle exports classes or interfaces and another bundle uses these, an update is not so simple anymore. All bundles that use classes or interfaces of the old bundle and now want to use the newly installed bundle have to be restarted.

Handling Bundles in Practice

For microservices, sharing code is far less important than using services. Nevertheless, at least the interface of a service has to be made available to other bundles.

In practice a procedure has been established where a bundle only exports the interface code of the service as classes and Java interfaces. Another bundle contains the implementation of the service. The classes of the implementation are not exported. The service implementation is exported as OSGi service. To use the service a bundle has to import the interface code from the one bundle and the service from the other bundle (see Figure 14.1).


Figure 14.1 OSGi Service, Implementation, and Interface Code

OSGi enables restarting services. With the described approach the implementation of a service can be exchanged without having to restart other bundles: these bundles only import the Java interfaces and classes of the interface code, and that code does not change for a new service implementation, so restarting is no longer necessary. The access to services can be implemented in such a way that the new version of the service is actually used after the exchange.

With the aid of OSGi blueprints6 or OSGi declarative services7 these details can be abstracted away when dealing with the OSGi service model. This facilitates the handling of OSGi. These technologies, for instance, render it much easier to handle the restart of a service or its temporary failure during the restart of a bundle.

6. https://osgi.org/download/r6/osgi.cmpn-6.0.0.pdf

7. https://osgi.org/download/r6/osgi.cmpn-6.0.0.pdf
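A hedged sketch of such a declarative service, using the annotations from the OSGi Declarative Services specification; OrderProcessing and CustomerService are again hypothetical names:

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

// The DS runtime instantiates the component, injects the referenced
// service, and copes with the providing bundle being restarted.
@Component
public class OrderProcessing {

    @Reference
    private CustomerService customerService;

    public void process() {
        customerService.findCustomers();
    }
}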

An independent deployment of services is possible but also laborious since interface code and service implementation have to be contained in different bundles. This model allows only changes to the implementation. Modifications of the interface code are more complex. In such a case the bundles using a service have to be restarted because they have to reload the interface.

In reality OSGi systems are often completely reinstalled for these reasons instead of modifying individual bundles. An Eclipse update, for instance, often entails a restart. A complete reinstallation also facilitates the reproduction of the environment. When an OSGi system is dynamically changed, at some point it will be in a state that nobody is able to reproduce. However, modifying individual bundles is an essential prerequisite for implementing the nanoservice approach with OSGi. Independent deployment is an essential property of a nanoservice. OSGi compromises this essential property.

Evaluation for Nanoservices

OSGi has a positive effect on Java projects in regard to architecture. The bundles are usually relatively small so that the individual bundles are easy to understand. In addition, the split into bundles forces the developers and architects to think about the relationships between the bundles and to define them in the configurations of the bundles. Other dependencies between bundles are not possible within the system. Normally, this leads to a very clean architecture with clear and intended dependencies.

However, OSGi does not offer technological freedom: It is based on the JVM and can therefore only be used with Java or other JVM-based languages. For example, it is nearly impossible for an OSGi bundle to bring along its own database because databases are normally not written in Java. For such cases additional solutions alongside the OSGi infrastructure have to be found.

For some Java technologies integration with OSGi is difficult since OSGi changes the way Java classes are loaded. Moreover, many popular Java application servers do not support OSGi for deployed applications, so changing code at runtime is not supported in such environments. The infrastructure has to be specially adapted for OSGi.

Furthermore, the bundles are not fully isolated: When a bundle uses a lot of CPU or causes the JVM to crash, the other bundles in the same JVM are affected. Failures can occur, for instance, due to a memory leak that causes more and more memory to be allocated until the system breaks down. Such errors can easily arise from simple programming mistakes.

On the other hand, thanks to OSGi the bundles can communicate locally. Distributed communication is also possible with different protocols. Moreover, the bundles share a JVM, which reduces, for instance, memory utilization.

Solutions for monitoring are likewise present in the different OSGi implementations.

Conclusion

OSGi leads, first of all, to restrictions in regard to technological freedom: it limits a project to Java technologies. In practice the independent deployment of bundles is hard to implement; interface changes in particular are poorly supported. Besides, bundles are not well isolated from each other. On the other hand, bundles can easily interact via local calls.


Try and Experiment

• Get familiar with OSGi with, for instance, the aid of a tutorial.8

8. http://www.vogella.com/tutorials/OSGi/article.html

• Create a concept for the distribution into bundles and services for a part of a system you know.

• If you had to implement the system with OSGi, which additional technologies (databases etc.) would you have to use? How would you handle this?


14.5 Java EE

Java EE9 is a standard from the Java field. It comprises different APIs such as JSF (JavaServer Faces), Servlet, and JSP (JavaServer Pages) for web applications; JPA (Java Persistence API) for persistence; and JTA (Java Transaction API) for transactions. Additionally, Java EE defines a deployment model. Web applications can be packaged into WAR files (Web ARchive), JAR files (Java ARchive) can contain logic components like Enterprise JavaBeans (EJBs), and EARs (Enterprise ARchives) can comprise a collection of JARs and WARs. All these components are deployed in one application server. The application server implements the Java EE APIs and offers, for instance, support for HTTP, threads, and network connections as well as support for accessing databases.

9. http://www.oracle.com/technetwork/java/javaee/overview/index.html

This section deals with WARs and the deployment model of Java EE application servers. Chapter 13, “Example of a Microservice-Based Architecture,” already described in detail a Java system that does not require an application server. Instead it directly starts a Java application on the Java Virtual Machine (JVM). The application is packaged in a JAR file and contains the entire infrastructure. This deployment is called Fat JAR deployment because the application, including the entire infrastructure, is contained in one single JAR. The example from Chapter 13 uses Spring Boot, which also supports a number of Java EE APIs such as JAX-RS for REST. Dropwizard10 also offers such a JAR model. It is actually focused on JAX-RS-based REST web services; however, it can also support other applications. Wildfly Swarm11 is a variant of the Java EE server Wildfly that also supports such a deployment model.

10. https://dropwizard.github.io/dropwizard/

11. http://github.com/wildfly-swarm/
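For comparison, a Fat JAR application with Spring Boot essentially boils down to one main class. This is a minimal sketch; the class name is made up.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Starts an embedded server from a plain main() method; application
// and infrastructure are packaged into a single JAR that is started
// with "java -jar".
@SpringBootApplication
public class OrderApplication {

    public static void main(String[] args) {
        SpringApplication.run(OrderApplication.class, args);
    }
}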

Nanoservices with Java EE

A Fat JAR deployment utilizes too many resources for nanoservices. In a Java EE application server, multiple WARs can be deployed, thereby saving resources. Each WAR can be accessed via its own URL. Furthermore, each WAR can be individually deployed. This enables bringing each nanoservice individually into production.
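Such a WAR-based nanoservice can be as small as a single JAX-RS resource. The following is a minimal sketch; the path and the data are made up, and an Application subclass or a web.xml is additionally needed to activate JAX-RS.

import java.util.Arrays;
import java.util.List;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Deployed in its own WAR, this resource is reachable under the URL
// of that WAR, for example /catalog-service/catalog.
@Path("catalog")
public class CatalogResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public List<String> items() {
        return Arrays.asList("iPod", "iPad");
    }
}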

However, the separation between WARs is not optimal:

• Memory and CPU are collectively used by all nanoservices. When a nanoservice uses a lot of CPU or memory, this can interfere with other nanoservices. A crash of one nanoservice propagates to all other nanoservices.

• In practice, redeployment of a WAR causes memory leaks if it is not possible to remove the entire application from memory. Therefore, in practice the independent deployment of individual nanoservices is hard to achieve.

• In contrast to OSGi the ClassLoaders of the WARs are completely separate. There is no possibility for accessing the code of other nanoservices.

• Because of the separation of the code, WARs can only communicate via HTTP or REST. Local method calls are not possible.

Since multiple nanoservices share an application server and a JVM, this solution is more efficient than the Fat JAR deployment of individual microservices in their own JVM as described in Chapter 13. The nanoservices use a shared heap and therefore use less memory. However, scaling works only by starting more application servers. Each of the application servers contains all nanoservices. All nanoservices have to be scaled collectively. It is not possible to scale individual nanoservices.

The technology choice is restricted to JVM technologies. Besides, all technologies that do not work with the servlet model are excluded, such as Vert.x (section 14.6) or Play.

Microservices with Java EE?

For microservices Java EE can also be an option: Theoretically it would be possible to run each microservice in its own application server. In this case an application server has to be installed and configured in addition to the application. The version of the application server and its configuration have to fit the version of the application. For Fat JAR deployment there is no need for a specific configuration of the application server because the server is part of the Fat JAR and therefore configured together with the application. The additional complexity of a separate application server is not counterbalanced by any advantage. Since deployment and monitoring features of the application server only work for Java applications, these features can only be used in a microservice-based architecture when the technology choice is restricted to Java technologies. In general, application servers have hardly any advantages12—especially for microservices.

12. http://jaxenter.com/java-application-servers-dead-1-111928.html

An Example

The application from Chapter 13 is also available with the Java EE deployment model.13 Figure 14.2 provides an overview of the example: There are three WARs, which comprise “Order,” “Customer,” and “Catalog.” They communicate with each other via REST. When “Customer” fails, “Order” fails as well, since “Order” communicates only with this single “Customer” instance. To achieve better availability, the access would have to be rerouted to other “Customer” instances.

13. https://github.com/ewolff/war-demo


Figure 14.2 Example Application with Java EE Nanoservices

A customer can use the UI of the nanoservices from the outside via HTML/HTTP. The code contains only small modifications compared to the solution from Chapter 13. The Netflix libraries have been removed. On the other hand, the application has been extended with support for servlet containers.


Try and Experiment

The application as Java EE nanoservices can be found on GitHub.14

14. https://github.com/ewolff/javaee-example/

The application does not use the Netflix technologies.

• Hystrix offers resilience (see section 13.10). Does it make sense to integrate Hystrix into the application? How are the nanoservices isolated from each other? Is Hystrix always helpful? Compare also section 9.5 concerning stability and resilience. How can these patterns be implemented in this application?

• Eureka is helpful for service discovery. How would it fit into the Java EE nanoservices? How can other service discovery technologies be integrated (see section 7.11)?

• Ribbon for load balancing between REST services could likewise be integrated. Which advantages would that have? Would it also be possible to use Ribbon without Eureka?


14.6 Vert.x

Vert.x15 is a framework containing numerous interesting approaches. Although it runs on the Java Virtual Machine, it supports many different programming languages—Java, Scala, Clojure, Groovy, and Ceylon as well as JavaScript, Ruby, and Python. A Vert.x system is built from Verticles, which receive events and can return messages.

15. http://vertx.io/

Listing 14.2 shows a simple Vert.x Verticle, which only returns the incoming messages. The code creates a server. When a client connects to the server, a callback is called, and the server creates a pump. The pump serves to transfer data from a source to a target. In the example source and target are identical.

The application only becomes active when a client connects, and the callback is called. Likewise, the pump only becomes active when new data are available from the client. Such events are processed by the event loop, which calls the Verticles. The Verticles then have to process the events. An event loop is a thread. Usually one event loop is started per CPU core so that the event loops are processed in parallel. An event loop and thus a thread running on a single CPU core can support an arbitrary number of network connections. Events of all connections can be processed in a single event loop. Therefore, Vert.x is also suitable for applications that have to handle a large number of network connections.

Listing 14.2 Simple Java Vert.x Echo Verticle

import org.vertx.java.core.Handler;
import org.vertx.java.core.net.NetSocket;
import org.vertx.java.core.streams.Pump;
import org.vertx.java.platform.Verticle;

public class EchoServer extends Verticle {

  public void start() {
    // pump all data from the socket back into the same socket (echo)
    vertx.createNetServer().connectHandler(new Handler<NetSocket>() {
      public void handle(final NetSocket socket) {
        Pump.createPump(socket, socket).start();
      }
    }).listen(1234);
  }
}

As described, Vert.x supports different programming languages. Listing 14.3 shows the same Echo Verticle in JavaScript. The code adheres to JavaScript conventions and uses, for instance, a JavaScript function as the callback. Vert.x has a layer for each programming language that adapts the basic functionality in such a way that it feels like a native library for the respective programming language.

Listing 14.3 Simple JavaScript Vert.x Echo Verticle

var vertx = require('vertx')

vertx.createNetServer().connectHandler(function(sock) {
  new vertx.Pump(sock, sock).start();
}).listen(1234);

Vert.x modules can contain multiple Verticles in different languages. Verticles and modules can communicate with each other via an event bus. The messages on the event bus use JSON as their data format. The event bus can be distributed across multiple servers. In this manner Vert.x supports distribution and can implement high availability by starting modules on other servers. Besides, the Verticles and modules are loosely coupled since they only exchange messages. Vert.x also offers support for other messaging systems and can communicate via HTTP and REST. Therefore, it is relatively easy to integrate Vert.x systems into microservice-based systems.
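A hedged sketch of this event bus communication in the Vert.x 2 style of Listing 14.2; the address "orders" and the message fields are made up. Another Verticle—possibly written in another language—would send a JsonObject to the address "orders" and receive the reply in a callback.

import org.vertx.java.core.Handler;
import org.vertx.java.core.eventbus.Message;
import org.vertx.java.core.json.JsonObject;
import org.vertx.java.platform.Verticle;

public class OrderVerticle extends Verticle {

  public void start() {
    // react to JSON messages sent to the address "orders" and reply
    vertx.eventBus().registerHandler("orders",
        new Handler<Message<JsonObject>>() {
          public void handle(Message<JsonObject> message) {
            String orderId = message.body().getString("orderId");
            message.reply(new JsonObject()
                .putString("status", "accepted: " + orderId));
          }
        });
  }
}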

Modules can be individually deployed and also removed again. Since the modules communicate with each other via events, modules can easily be replaced by new modules at runtime. They only have to process the same messages. A module can implement a nanoservice. Modules can be started in new nodes so that the failure of a JVM can be compensated.

Vert.x also supports Fat JARs, where the application brings all necessary libraries along. This is useful for microservices since the application then carries all its dependencies and is easier to deploy. For nanoservices this approach consumes too many resources—deploying multiple Vert.x modules in one JVM is the better option.

Conclusion

Through independent module deployment and the loose coupling provided by the event bus, Vert.x supports multiple nanoservices within one JVM. However, a crash of the JVM, a memory leak, or blocking the event loop would affect all modules and Verticles in that JVM. On the other hand, Vert.x supports many different programming languages—in spite of the restriction to the JVM. This is not just a theoretical option; Vert.x aims at being easily useable in all supported languages. Vert.x presumes that the entire application is written in a nonblocking manner. However, blocking tasks can be executed in Worker Verticles. These use separate thread pools so that they do not influence the nonblocking Verticles. Therefore, even code that does not support the Vert.x nonblocking approach can still be used in a Vert.x system. This enables even greater technological freedom.


Try and Experiment

The Vert.x homepage16 offers an easy start to developing with Vert.x. It demonstrates how a web server can be implemented and executed with different programming languages. The modules in the example use Java and Maven.17 There are also complex examples in other programming languages.18

16. http://vertx.io/

17. https://github.com/vert-x3/vertx-examples/tree/master/maven-simplest

18. https://github.com/vert-x/vertx-examples


14.7 Erlang

Erlang19 is a functional programming language that is mostly used in combination with the Open Telecom Platform (OTP) framework. Originally, Erlang was developed for telecommunication, a field in which applications have to be very reliable. Meanwhile Erlang is employed in all areas that profit from its strengths. Like Java, Erlang uses a virtual machine as its runtime environment; it is called BEAM (Bogdan/Björn's Erlang Abstract Machine).

19. http://www.erlang.org/

Erlang’s strengths are, first of all, its resilience against failures and the possibility to let systems run for years. This is only possible via dynamic software updates. At the same time, Erlang has a lightweight concept for parallelism. Erlang uses the concept of processes for parallel computing. These processes are not related to operating system processes and are even more lightweight than operating system threads. In an Erlang system millions of processes can run that are all isolated from each other.

Another factor contributing to the isolation is the asynchronous communication. Processes in an Erlang system communicate with each other via messages. Messages are sent to the mailbox of a process (see Figure 14.3). Each process handles only one message at a time; parallelism arises because many processes can handle their messages simultaneously. This greatly simplifies dealing with parallelism. The functional approach of the language, which attempts to get by without state, fits well with this model. The approach corresponds to the Verticles in Vert.x and their communication via the event bus.


Figure 14.3 Communication between Erlang Processes

Listing 14.4 shows a simple Erlang server that returns the received message. It is defined in its own module. The module exports the function loop, which does not take any parameters. The function receives a message Msg from a sender From and returns the same message to this sender. The “!” operator sends the message. Afterwards the function calls itself again and waits for the next message. Exactly the same code also works when the server is called from another computer via the network: local messages and messages via the network are processed by the same mechanisms.

Listing 14.4 An Erlang Echo Server

-module(server).
-export([loop/0]).

loop() ->
    receive
        {From, Msg} ->
            From ! Msg,
            loop()
    end.

Erlang systems are especially robust thanks to this message-based approach. Erlang makes use of “Let It Crash”: an individual process is simply restarted when problems occur. This is the responsibility of the supervisor, a process that is specifically dedicated to monitoring other processes and restarting them if necessary. The supervisor itself is also monitored and restarted in case of problems. In this way a supervision tree is created that prepares the system for the failure of individual processes (see Figure 14.4).


Figure 14.4 Monitoring in Erlang Systems

Since the Erlang process model is so lightweight, restarting a process is done rapidly. When the state is stored in other components, there will also be no information loss. The remainder of the system is not affected by the failure of the process: As the communication is asynchronous, the other processes can handle the higher latency caused by the restart. In practice this approach has proven very reliable. Erlang systems are very robust and still easy to develop.

This approach is based on the actor model:20 Actors communicate with each other via asynchronous messages. As a response they can themselves send messages, start new actors, or change their behavior for the next messages. Erlang’s processes correspond to actors.

20. http://en.wikipedia.org/wiki/Actor_model

In addition, there are easy ways to monitor Erlang systems. Erlang itself has built-in functions that can monitor memory utilization or the state of the mailboxes. For this purpose OTP offers the operations and maintenance (OAM) support, which can, for instance, be integrated into SNMP systems.

Since Erlang solves typical problems arising in the implementation of microservices, such as resilience, it supports the implementation of microservices21 quite well. In that case a microservice is a system written in Erlang that internally consists of multiple processes.

21. https://www.innoq.com/en/talks/2015/01/talk-microservices-erlang-otp/

However, the services can also get smaller; each process in an Erlang system could be considered as a nanoservice. It can be deployed independently of the others, even during runtime. Furthermore, Erlang supports operating system processes. In that case they are also integrated into the supervisor hierarchy and restarted in case of a breakdown. This means that any operating system process written in any language might become a part of an Erlang system and its architecture.

Evaluation for Nanoservices

As discussed an individual process in Erlang can be viewed as a nanoservice. The expenditure for the infrastructure is relatively small in that case: Monitoring is possible with built-in Erlang functions. The same is true for deployment. Since the processes share a BEAM instance, the overhead for a single process is not very high. In addition, it is possible for the processes to exchange messages without having to communicate via the network and therefore with little overhead. The isolation of processes is also implemented.

Finally, even processes in other languages can be added to an Erlang system. For this purpose, an operating system process that can be implemented in an arbitrary language is put under the control of Erlang. The operating system process can, for instance, be safeguarded by “Let It Crash.” This enables integration of practically all technologies into Erlang—even if they run in a separate process.

On the other hand, Erlang is not very common. The consistently functional approach also takes getting used to. Finally, the Erlang syntax is not very intuitive for many developers.


Try and Experiment

• A very simple example22 is based on the code from this section and demonstrates how communication between nodes is possible. You can use it to get a basic understanding of Erlang.

22. https://github.com/ewolff/erlang-example/

• There is a very nice tutorial23 for Erlang, which also treats deployment and operation. With the aid of the information from the tutorial the example24 can be supplemented by a supervisor.

23. http://learnyousomeerlang.com/

24. https://github.com/ewolff/erlang-example/

• An alternative language out of the Erlang ecosystem is Elixir.25 Elixir has a different syntax but also profits from the concepts of OTP. Elixir is much simpler to learn than Erlang and thus lends itself to a first start.

25. http://elixir-lang.org/

• There are many other implementations of the actor model.26 It is worthwhile to look more closely at whether such technologies are also useful for the implementation of microservices or nanoservices and which advantages they might offer. Akka from the Scala/Java world might be of interest here.

26. http://en.wikipedia.org/wiki/Actor_model


14.8 Seneca

Seneca27 is based on Node.js and accordingly uses JavaScript on the server. Node.js has a programming model in which one operating system process can take care of many tasks in parallel. To achieve this there is an event loop that handles the events. When a message enters the system via a network connection, the system first waits until the event loop is free. Then the event loop processes the message. The processing has to be fast; otherwise the loop is blocked, resulting in long waiting times for all other messages. For this reason, the event loop must never wait for the response of another server—that would block the system for too long. Instead, the interaction with another system is only initiated, and the event loop is freed to handle other events. Only when the response of the other system arrives is it processed by the event loop, which then calls a callback that was registered when the interaction was initiated. This model is similar to the approaches used by Vert.x and Erlang.

27. http://senecajs.org/

Seneca introduces a mechanism in Node.js that enables processing of commands. Patterns of commands are defined that cause certain code to be executed.

Communicating via such commands is also easy to do via the network. Listing 14.5 shows a server that calls seneca.add(). This defines a new pattern together with the code for handling commands that match the pattern. A function reacts to commands containing cmd: "echo". It reads value from the command and returns it in the value field of the result via the callback. With seneca.listen() the server is started and listens for commands from the network.

Listing 14.5 Seneca Server

var seneca = require("seneca")()

seneca.add( {cmd: "echo"}, function(args,callback){
    callback(null,{value:args.value})
})

seneca.listen()

The client in Listing 14.6 sends all commands that cannot be processed locally to the server via the network; seneca.client() configures this behavior. seneca.act() creates the commands that are sent to the server. The command contains cmd: "echo", so the function of the server in Listing 14.5 is called. "echo this" is used as the value. The server returns this string to the function that was passed in as a callback—and in this way it is finally printed on the console. The example code can be found on GitHub.28

28. https://github.com/ewolff/seneca-example/

Listing 14.6 Seneca Client

var seneca=require("seneca")()

seneca.client()

seneca.act({cmd: "echo", value: "echo this"}, function(err, result){
    console.log(result.value)
})

Therefore, it is very easy to implement a distributed system with Seneca. However, the services do not use a standard protocol like REST for communication. Nevertheless, REST systems can also be implemented with Seneca. Besides, the Seneca protocol is based on JSON and can therefore also be used from other languages.

A nanoservice can be a function that reacts with Seneca to calls from the network—and therefore it can be very small. As already described, a Node.js system as implemented with Seneca is fragile when a function blocks the event loop. Therefore, the isolation is not very good.

For the monitoring of a Seneca application there is an admin console that offers at least simple monitoring. However, it is only available for a single Node.js process; monitoring across all servers has to be achieved by other means.

An independent deployment of a single Seneca function is only possible if there is a separate Node.js process for that function. This profoundly limits independent deployment since the overhead of a Node.js process is hardly acceptable for a single JavaScript function. In addition, it is not easy to integrate other technologies into a Seneca system. In the end the entire Seneca system has to be implemented in JavaScript.

Evaluation for Nanoservices

Seneca has been developed especially for the implementation of microservices with JavaScript. In fact, it enables a very simple implementation of services that can also be contacted via the network. The basic architecture is similar to Erlang: In both approaches services send each other messages or commands to which functions react. In regard to the independent deployment of individual services, the isolation of services from each other, and the integration of other technologies, Erlang is clearly superior. Besides, Erlang has a much longer history and has long been employed in very demanding applications.


Try and Experiment

The code example29 can be a first step to get familiar with Seneca. You can also use the basic tutorial.30 In addition, it is worthwhile to look at other examples.31 The nanoservice example can be extended into a more comprehensive application or distributed across a larger number of Node.js processes.

29. https://github.com/ewolff/seneca-example/

30. http://senecajs.org/getting-started/

31. https://github.com/rjrodger/seneca-examples/


14.9 Conclusion

The technologies presented in this chapter show how differently microservices can be implemented. Since the differences are so large, the use of the separate term “nanoservice” appears justified. Nanoservices are not necessarily independent processes that can only be contacted via the network; several of them might run together in one process and use local communication mechanisms to contact each other. This not only makes extremely small services possible but also enables the adoption of microservice approaches in areas such as embedded or desktop applications.

An overview of the advantages and disadvantages of different technologies in regard to nanoservices is provided in Table 14.1. Erlang is the most interesting technology since it also enables the integration of other technologies and is able to isolate the individual nanoservices quite well from each other so that a problem in one nanoservice will not trigger the failure of the other services. In addition, Erlang has been the basis of many important systems for a long time already so that the technology as such has proven its reliability beyond doubt.


Table 14.1 Technology Evaluation for Nanoservices

Seneca follows a similar approach but cannot compete with the other technologies in terms of isolation and the integration of technologies other than JavaScript. Vert.x takes a similar approach on the JVM and supports numerous languages; however, it does not isolate nanoservices as well as Erlang. Java EE does not allow communication without the network, and individual deployment is difficult: in practice, memory leaks frequently occur during the deployment of WARs, so the application server is usually restarted during a deployment to avoid them. Then all nanoservices are unavailable for some time, which means a nanoservice cannot be deployed without influencing the other nanoservices. OSGi, in contrast to Java EE, enables the shared use of code between nanoservices. In addition, OSGi uses method calls for communication between services rather than commands or messages like Erlang and Seneca. Commands or messages have the advantage of being more flexible: parts of a message that a certain service does not understand are not a problem; they can simply be ignored.

Amazon Lambda is especially interesting since it is integrated into the Amazon ecosystem. This makes handling the infrastructure very easy. The infrastructure can be a challenging problem with small nanoservices because so many more environments are needed due to the high number of services. With Amazon a database server is only an API call or a click away—alternatively, an API can be used to store data instead of a server. Servers become invisible for storing data—and likewise, with Amazon Lambda, for executing code. There is no infrastructure for an individual service but only code that is executed and can be used by other services. Because of the ready-made infrastructure, monitoring is no longer a challenge either.

Essential Points

• Nanoservices divide systems into even smaller services. To achieve this, they compromise in certain areas such as technology freedom or isolation.

• Nanoservices require efficient infrastructures that can handle a large number of small nanoservices.
