Distinguishing Docker containers

Precisely speaking, Docker containers wrap a piece of software in a complete filesystem that contains everything it needs to run: source code, runtime, system tools, and system libraries (anything that can be installed on a server). This guarantees that the software will always run the same way, regardless of its operating environment.
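
As a minimal sketch of this idea, the following snippet uses the Docker SDK for Python; the ./app directory (containing a Dockerfile) and the myapp:1.0 tag are illustrative assumptions. It builds an image whose layered filesystem bundles the application and its dependencies, then runs that image, which would behave identically on any other Docker host:

    # A minimal sketch, assuming the Docker SDK for Python ("docker" package),
    # a hypothetical ./app directory with a Dockerfile, and the tag myapp:1.0.
    import docker

    client = docker.from_env()  # connect to the local Docker daemon

    # Build an image: the resulting layered filesystem bundles the code,
    # runtime, system tools, and libraries declared in the Dockerfile.
    image, build_logs = client.images.build(path="./app", tag="myapp:1.0")

    # Run the image. Because everything the application needs is inside it,
    # the container behaves the same on any host that runs Docker.
    output = client.containers.run("myapp:1.0", remove=True)
    print(output.decode())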

The main motivations for Docker-enabled containerization are as follows:

  • Containers running on a single machine share the same operating system kernel. They start instantly and use less RAM. Container images are constructed from layered filesystems and share common files, making disk usage and image downloads much more efficient.
  • Docker containers are based on open standards. This standardization enables containers to run on all major Linux distributions and on other operating systems such as Microsoft Windows and Apple macOS.

Several benefits are associated with Docker containers, as listed here:

  • Efficiency: As mentioned earlier, multiple containers on a single machine leverage the same kernel, so they are lightweight, start instantly, and make more efficient use of RAM.
    • Resource sharing: Sharing resources among workloads is more efficient than dedicating single-purpose equipment to each of them, and it raises the overall utilization rate of those resources.
    • Resource partitioning: Partitioning ensures that resources are segmented appropriately to meet the system requirements of each workload, and it prevents untoward interactions among workloads (a brief sketch follows this list).
    • Resource as a Service (RaaS): Individual resources, or collections of them, can be selected, provisioned, and given directly to applications, or to the users who run those applications.
  • Native performance: Containers deliver near-native performance because they are lightweight and waste very few resources.
  • Portability: Applications, dependencies, and configurations are all bundled together in a complete filesystem, ensuring that applications work seamlessly in any environment (VMs, bare-metal servers, local or remote machines, generalized or specialized machines, and so on). The main advantage of this portability is that it is possible to change the runtime dependencies (even the programming language) between deployments.
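
To make the resource partitioning point concrete, here is a small sketch using the Docker SDK for Python; the worker:latest image name and the specific limits are illustrative assumptions. The container is started with a hard memory ceiling and a bounded CPU share, so one workload cannot starve its neighbors on the same host:

    # A sketch of resource partitioning, assuming the Docker SDK for Python;
    # "worker:latest" is a placeholder image name.
    import docker

    client = docker.from_env()

    # Cap this workload at 256 MB of RAM and half a CPU so that it cannot
    # interfere with other containers sharing the host.
    container = client.containers.run(
        "worker:latest",
        detach=True,
        mem_limit="256m",        # memory partition for this workload
        nano_cpus=500_000_000,   # 0.5 CPU, in units of 1e-9 CPUs
    )
    print(container.short_id, "started with bounded resources")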

(Diagram: containers being moved and swapped across multiple hosts.)

  • Real-time scalability: Any number of fresh containers can be provisioned within seconds to handle growing user and data loads, and the additional containers can be torn down again when demand subsides. This provides higher throughput and capacity on demand. Tools such as Docker Swarm, Kubernetes, and Apache Mesos further simplify elastic scaling (a minimal sketch follows this list).
  • High availability: By running multiple containers, redundancy can be built into the application. If one container fails, the surviving peers, which provide the same capability, continue to provide the service. With orchestration, failed containers can be automatically recreated (rescheduled) on the same or a different host, restoring full capacity and redundancy.
  • Maneuverability: Applications running in Docker containers can be easily modified, updated, or extended without impacting other containers on the host.
  • Flexibility: Developers are free to use the programming languages and development tools they prefer.
  • Clusterability: Containers can be clustered on demand for specific purposes, and integrated management platforms are available for enabling and managing such clusters.
  • Composability: Software services hosted in containers can be discovered, matched, and linked to form business-critical, process-aware, composite services.
  • Security: Containers isolate applications from one another and from the underlying infrastructure, providing an additional layer of protection for each application.
  • Predictability: With immutable images, an image exhibits the same behavior everywhere because the code is contained within it. This simplifies deployment and the management of the application life cycle.
  • Repeatability: With Docker, one can build an image, test that image, and then use that same image in production.
  • Replicability: With containers, it is easy to instantiate identical copies of the full application stack and configuration. These can then be used by new hires, partners, support teams, and others to safely experiment in isolation.
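
The real-time scalability and replicability points can be sketched in the same hedged style; the myapp:1.0 image is again an illustrative assumption, and orchestrators such as Docker Swarm and Kubernetes automate exactly this kind of loop:

    # A sketch of on-demand scaling with the Docker SDK for Python;
    # "myapp:1.0" is assumed to be an already built image.
    import docker

    client = docker.from_env()

    # Scale out: start three identical replicas from the same immutable image.
    replicas = [
        client.containers.run("myapp:1.0", detach=True, labels={"tier": "web"})
        for _ in range(3)
    ]

    # ... serve the load spike ...

    # Scale in: stop and remove the extra replicas once demand goes down.
    for container in replicas:
        container.stop()
        container.remove()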