Sharing Data with Containers

"Do one thing at a time and do it well," has been one of the successful mantras in the Information Technology (IT) sector for quite a long time now. This widely used tenet fits nicely to build and expose Docker containers too, and it is being prescribed as one of the best practices to avail the originally envisaged benefits of the Docker-inspired containerization paradigm. This means that, we must inscribe a single application along with its direct dependencies and libraries inside a Docker container in order to ensure the container's independence, self-sufficiency, horizontal scalability, and maneuverability. Let's see why containers are that important:

  • The ephemeral nature of containers: A container typically lives as long as its application lives, and vice versa. However, this has some negative implications for the application's data. Applications naturally go through a variety of changes in order to accommodate both business and technical needs, even in their production environments. Application malfunctions, version changes, and routine maintenance are further reasons why software applications are constantly updated and upgraded. In the general-purpose computing model, even when an application dies for any reason, the persistent data associated with that application is preserved in the filesystem. In the container paradigm, however, an application upgrade is usually performed by systematically crafting a new container with the newer version of the application and simply discarding the old one. Similarly, when an application malfunctions, a new container is launched and the old one is discarded. To sum it up, containers are ephemeral in nature.
  • The need for business continuity: In the container landscape, the complete execution environment, including its data files, is usually bundled and encapsulated inside the container. When a container is discarded for any reason, the application's data files perish along with it. However, in order to provide software services without any interruption or disruption, these data files must be preserved outside the container and passed on to a container on a need basis; this is how the resiliency and reliability of containerized applications are guaranteed. Besides, some application data files, such as log files, need to be collected and accessed outside the container for later analysis. The Docker technology addresses this file-persistence issue innovatively through a dedicated building block called the data volume, as the short sketch after this list illustrates.

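As a quick illustration of that persistence, here is a minimal sketch; the volume name appdata, the image ubuntu:16.04, and the file paths are arbitrary examples, and a reasonably recent Docker Engine with the docker volume subcommand is assumed. It writes a file to a named volume from one container and reads it back from another after the first container is gone:

    $ docker volume create appdata
    $ docker run --rm -v appdata:/data ubuntu:16.04 \
        sh -c 'echo "critical business record" > /data/app.log'
    $ docker run --rm -v appdata:/data ubuntu:16.04 cat /data/app.log
    critical business record

Both containers were removed on exit (--rm), yet the data survives inside the appdata volume.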
The Docker technology provides three different ways of achieving persistent storage (a minimal command sketch of each approach follows this list):

  • The first and recommended approach is to use volumes that are created using Docker's volume management.
  • The second method is to mount a directory from the Docker host to a specified location inside the container.
  • The third alternative is to use a data-only container. A data-only container is a specially crafted container that exists solely to share data with one or more other containers.

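The following sketch shows the shape of each approach in commands; the image name myapp, the mount paths, and the host directory /srv/appdata are hypothetical placeholders rather than anything mandated by Docker:

    # 1. A Docker-managed volume (the recommended approach):
    $ docker volume create datavol
    $ docker run -d -v datavol:/var/lib/data myapp

    # 2. A host directory bind-mounted into the container:
    $ docker run -d -v /srv/appdata:/var/lib/data myapp

    # 3. A data-only container shared through --volumes-from:
    $ docker create -v /data --name datastore ubuntu /bin/true
    $ docker run -d --volumes-from datastore myapp

The --volumes-from flag mounts every volume defined by the named container, which is what makes the data-only container pattern work.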
In this chapter, we will cover the following topics:

  • Data volume
  • Sharing host data
  • Sharing data between containers
  • Avoiding common pitfalls